Updates from: 04/28/2022 01:07:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
Use the following steps to add Asignio as a claims provider:
<OutputClaims>
  <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
  <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
- <!-- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" /> -->
- <!-- <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" />
- <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" /> -->
- <!-- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" /> -->
<OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" /> <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" DefaultValue="https://authorization.asignio.com" /> <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Previously updated : 03/18/2022 Last updated : 04/27/2022 # Configure xID with Azure Active Directory B2C for passwordless authentication
-In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with the xID digital ID solution. The xID app provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified Personal Identification Information (customer content) through the xID API. Furthermore, the xID app generates a private key in a secure area within user’s mobile device, which can be used as a digital signing device.
+In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with the xID digital ID solution. The xID app provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified Personal Identification Information (customer content) through the xID API. Furthermore, the xID app generates a private key in a secure area within user's mobile device, which can be used as a digital signing device.
## Prerequisites
The following architecture diagram shows the implementation.
| Step | Description |
|:--|:--|
-| 1. |User opens Azure AD B2C's sign in page, and then signs in or signs up by entering their username. |
-| 2. |Azure AD B2C redirects the user to xID authorize API endpoint using an OpenID Connect (OIDC) request. An OIDC endpoint is available containing information about the endpoints. xID Identity provider (IdP) redirects the user to the xID authorization sign in page, allows the user to fill in or select their email address. |
-| 3. |xID IdP sends the push notification to the user’s mobile device. |
-| 4. |The user opens the xID app and checks the request, then enters the PIN or authenticates with their biometrics. If PIN or biometrics is successfully verified, xID app activates the private key and creates an electronic signature. |
+| 1. |User opens Azure AD B2C's sign-in page and then signs in or signs up by entering their username. |
+| 2. |Azure AD B2C redirects the user to the xID authorize API endpoint using an OpenID Connect (OIDC) request. An OIDC metadata endpoint that contains information about the endpoints is available. The xID Identity provider (IdP) redirects the user to the xID authorization sign-in page, allowing the user to fill in or select their email address. |
+| 3. |xID IdP sends the push notification to the user's mobile device. |
+| 4. |The user opens the xID app, checks the request, then enters the PIN or authenticates with their biometrics. If the PIN or biometrics is successfully verified, the xID app activates the private key and creates an electronic signature. |
| 5. |xID app sends the signature to xID IdP for verification. |
-| 6. |xID IdP shows consent screen to the user, requesting authorization to give their personal information to the service they're signing in. |
+| 6. |xID IdP shows a consent screen to the user, requesting authorization to give their personal information to the service they're signing in to. |
| 7. |xID IdP returns the OAuth authorization code to Azure AD B2C. |
-| 8. |Using the authorization code, Azure AD B2C sends a token request. |
-| 9. |xID IdP checks the token request, and if still valid, returns the OAuth access token and the ID token containing the requested user’s identifier and email address. |
+| 8. | Azure AD B2C sends a token request using the authorization code. |
+| 9. |xID IdP checks the token request and, if still valid, returns the OAuth access token and the ID token containing the requested user's identifier and email address. |
| 10. |In addition, if the user's customer content is needed, Azure AD B2C calls the xID userdata API. |
-| 11. |The xID userdata API returns the user’s encrypted customer content. User can decrypt it with their private key, which they create when they request the xID client information. |
+| 11. |The xID userdata API returns the user's encrypted customer content. Users can decrypt it with their private key, which they create when requesting the xID client information. |
| 12. | User is either granted or denied access to the customer application based on the verification results. |

## Onboard with xID
-Request for API documents by filling out [the form](https://xid.inc/contact-us). In the message field, indicate that you would like to onboard with Azure AD B2C. The xID sales representatives will contact you. Follow the instructions provided in the xID API document and request a xID API client. xID tech team will send client information to you in 3-4 working days.
+Request API documents by filling out [the request form](https://xid.inc/contact-us). In the message field, indicate that you'd like to onboard with Azure AD B2C. Then, an xID sales representative will contact you. Follow the instructions provided in the xID API document and request an xID API client. The xID tech team will send client information to you in 3-4 working days.
## Step 1: Create an xID policy key
Store the client secret that you received from xID in your Azure AD B2C tenant.
## Step 2: Configure xID as an Identity provider
-To enable users to sign in using xID, you need to define xID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using digital identity available on their device, proving the user’s identity.
+To enable users to sign in using xID, you need to define xID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims Azure AD B2C uses to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity.
Use the following steps to add xID as a claims provider:
<Item Key="UseClaimAsBearerToken">identityProviderAccessToken</Item> <!-- <Item Key="AllowInsecureAuthInProduction">true</Item> --> <Item Key="DebugMode">true</Item>
- <Item Key="DefaultUserMessageIfRequestFailed">Cannot process your request right now, please try again later.</Item>
+ <Item Key="DefaultUserMessageIfRequestFailed">Can't process your request right now, please try again later.</Item>
</Metadata>
<InputClaims>
  <!-- Claims sent to your REST API -->
Use the following steps to add xID as a claims provider:
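For orientation, the following is a minimal sketch of what the finished OIDC technical profile for xID might look like. The metadata URL, client ID, and policy key name are placeholders from your xID onboarding, not values confirmed by this article:

```xml
<!-- Sketch only: the metadata URL, client_id, and policy key name below are assumed placeholders. -->
<TechnicalProfile Id="X-ID-OIDC">
  <DisplayName>xID</DisplayName>
  <Protocol Name="OpenIdConnect" />
  <Metadata>
    <Item Key="METADATA">https://your-xid-issuer.example.com/.well-known/openid-configuration</Item>
    <Item Key="client_id">your-xid-client-id</Item>
    <Item Key="response_types">code</Item>
    <Item Key="scope">openid</Item>
    <Item Key="UsePolicyInRedirectUri">false</Item>
  </Metadata>
  <CryptographicKeys>
    <!-- References the policy key you stored in step 1 -->
    <Key Id="client_secret" StorageReferenceId="B2C_1A_X-IDClientSecret" />
  </CryptographicKeys>
</TechnicalProfile>
```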
## Step 3: Add a user journey
-At this point, you've set up the identity provider, but it's not yet available in any of the sign in pages. If you've your own custom user journey continue to [step 4](#step-4-add-the-identity-provider-to-a-user-journey), otherwise, create a duplicate of an existing template user journey as follows:
+At this point, you've set up the identity provider, but it's not yet available on any of the sign-in pages. If you have a custom user journey, continue to [step 4](#step-4-add-the-identity-provider-to-a-user-journey). Otherwise, create a duplicate of an existing template user journey as follows:
1. Open the `TrustFrameworkBase.xml` file from the starter pack.
At this point, you've set up the identity provider, but it's not yet available on any of the sign-in pages.
## Step 4: Add the identity provider to a user journey
-Now that you have a user journey, add the new identity provider to the user journey.
+Now that you have a user journey, add the new identity provider to it.
-1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `X-IDExchange`.
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers used for signing in. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `X-IDExchange`.
-2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the xID button to `X-ID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the xID button to the `X-ID-SignIn` action. Next, update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
- The following XML demonstrates orchestration steps of a user journey with the identity provider:
+ The following XML demonstrates the orchestration steps of a user journey with the identity provider:
- ```xml
+ ```xml
   <UserJourney Id="X-IDSignUpOrSignIn">
     <OrchestrationSteps>
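       <!-- Sketch of the two steps described above; profile names such as X-ID-OIDC are assumed, not confirmed by this article. -->
       <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
         <ClaimsProviderSelections>
           <ClaimsProviderSelection TargetClaimsExchangeId="X-IDExchange" />
         </ClaimsProviderSelections>
       </OrchestrationStep>
       <OrchestrationStep Order="2" Type="ClaimsExchange">
         <ClaimsExchanges>
           <!-- TechnicalProfileReferenceId must match the technical profile you created earlier -->
           <ClaimsExchange Id="X-IDExchange" TechnicalProfileReferenceId="X-ID-OIDC" />
         </ClaimsExchanges>
       </OrchestrationStep>
     </OrchestrationSteps>
   </UserJourney>
   ```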
Now that you have a user journey, add the new identity provider to the user journey.
a. Select the **Directories + subscriptions** icon in the portal toolbar.
- b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+ b. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and select **Switch**.
3. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
4. Under **Policies**, select **Identity Experience Framework**.
-5. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+5. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the extension policy, for example, `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+
+## Step 6: Configure the relying party policy
+
+The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/main/LocalAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C will execute. First, find the **DefaultUserJourney** element within the relying party. Then, update the **ReferenceId** to match the user journey ID you added to the identity provider.
+
+In the following example, for the `X-IDSignUpOrSignIn` user journey, the **ReferenceId** is set to `X-IDSignUpOrSignIn`:
+
+```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="X-IDSignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ <OutputClaim ClaimTypeReferenceId="correlationId" DefaultValue="{Context:CorrelationId}" />
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="first_name" />
+ <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="last_name" />
+ <OutputClaim ClaimTypeReferenceId="previous_name" />
+ <OutputClaim ClaimTypeReferenceId="year" />
+ <OutputClaim ClaimTypeReferenceId="month" />
+ <OutputClaim ClaimTypeReferenceId="date" />
+ <OutputClaim ClaimTypeReferenceId="prefecture" />
+ <OutputClaim ClaimTypeReferenceId="city" />
+ <OutputClaim ClaimTypeReferenceId="address" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_common_name" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_previous_name" />
+ <OutputClaim ClaimTypeReferenceId="sub_char_address" />
+ <OutputClaim ClaimTypeReferenceId="gender" />
+ <OutputClaim ClaimTypeReferenceId="verified_at" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="sid" />
+ <OutputClaim ClaimTypeReferenceId="userdataid" />
+ <OutputClaim ClaimTypeReferenceId="xid_verified" />
+ <OutputClaim ClaimTypeReferenceId="email_verified" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+
+```
+
-## Step 6: Test your custom policy
+## Step 7: Test your custom policy
-1. In your Azure AD B2C tenant blade, and under **Policies**, select **Identity Experience Framework**.
+1. In your Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
2. Under **Custom policies**, select **CustomSignUpSignIn**.
3. For **Application**, select the web application that you previously registered as part of this article's prerequisites. The **Reply URL** should show `https://jwt.ms`.
-4. Select **Run now**. Your browser should be redirected to the xID sign in page.
+4. Select **Run now**. Your browser should redirect to the xID sign-in page.
-5. If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+5. If the sign-in process is successful, your browser redirects to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
## Next steps
active-directory-b2c Tutorial Register Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md
If your SPA app uses MSAL.js 1.3 or earlier and the implicit grant flow or you c
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **D tokens (used for implicit and hybrid flows)** check boxes.
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
1. Select **Save**.
active-directory Concept Authentication Operator Assistance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-operator-assistance.md
Previously updated : 10/21/2021 Last updated : 04/27/2022
# How to enable and disable operator assistance
+On September 30, 2023, we will retire operator assistance in Azure AD Multi-Factor Authentication and it will no longer be available. To avoid service disruption, follow the steps in this topic to disable operator assistance before September 30, 2023.
+ Operator assistance is a feature within Azure AD that allows an operator to manually transfer phone calls instead of relying on automatic transfer. When this setting is enabled, the office phone number is dialed, and when the call is answered, the system asks the operator to transfer the call to a given extension. Operator assistance can be enabled for an entire tenant or for an individual user. If the setting is **On**, the entire tenant is enabled for operator assistance. If you choose **Phone call** as the default method and have an extension specified as part of your office phone number (delineated by **x**), an operator can manually transfer the phone call.
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 04/05/2022 Last updated : 04/27/2022
For more information, see the following articles:
By selecting **Other clients**, you can specify a condition that affects apps that use basic authentication with mail protocols like IMAP, MAPI, POP, SMTP, and older Office apps that don't use modern authentication.
-## Device state (preview)
+## Device state (deprecated)
-**This preview feature is being deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy, to satisfy scenarios previously achieved using device state (preview) condition.
+**This preview feature has been deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy, to satisfy scenarios previously achieved using device state (preview) condition.
The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
The device state condition was used to exclude devices that are hybrid Azure AD
For example, *All users* accessing the *Microsoft Azure Management* cloud app including **All device state** excluding **Device Hybrid Azure AD joined** and **Device marked as compliant** and for *Access controls*, **Block**.

- This example would create a policy that only allows access to Microsoft Azure Management from devices that are either hybrid Azure AD joined or devices marked as compliant.
-The above scenario, can be configured using *All users* accessing the *Microsoft Azure Management* cloud app with **Filter for devices** condition in include mode using the following rule **device.trustType -ne "ServerAD" -or device.isCompliant -ne True** and for *Access controls*, **Block**.
+The above scenario can be configured using *All users* accessing the *Microsoft Azure Management* cloud app with the **Filter for devices** condition in **exclude** mode using the rule `device.trustType -eq "ServerAD" -or device.isCompliant -eq True`, and for *Access controls*, **Block**.
- This example would create a policy that blocks access to Microsoft Azure Management cloud app from unmanaged or non-compliant devices.

> [!IMPORTANT]
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
The workflow for exchanging an external token for an access token is the same, h
1. When the checks are satisfied, Microsoft identity platform issues an access token to the external workload.
1. The external workload accesses Azure AD protected resources using the access token from Microsoft identity platform. A GitHub Actions workflow, for example, uses the access token to publish a web app to Azure App Service.
-The Microsoft identity platform stores only the first 10 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 10 signing keys, you may experience errors when using Workload Identity Federation.
+The Microsoft identity platform stores only the first 25 signing keys when they're downloaded from the external IdP's OIDC endpoint. If the external IdP exposes more than 25 signing keys, you may experience errors when using Workload Identity Federation.
## Next steps

Learn more about how workload identity federation works:
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
This article addresses recovering from soft and hard deletions in your Azure AD
## Monitor for deletions
-The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0. ](/graph/api/directory-deleteditems-list?view=graph-rest-1.0&tabs=http)
+The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
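As a starting point for such a custom solution, the deleted items endpoint can be queried directly. The following is a hedged sketch that assumes the Microsoft Graph PowerShell module is installed and the signed-in account has directory read permissions:

```powershell
# Sketch: list soft-deleted users via Microsoft Graph (module and permissions assumed)
Connect-MgGraph -Scopes "Directory.Read.All"

$deleted = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"

# Print the display names of the soft-deleted users
$deleted.value | ForEach-Object { $_.displayName }
```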
### Audit log
-The Audit Log always records a “Delete <object>” event when an object in the tenant is removed from an active state by either a soft or hard deletion.
+The Audit Log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state by either a soft or hard deletion.
[![Screenshot of audit log showing deletions](./media/recoverability/delete-audit-log.png)](./media/recoverability/delete-audit-log.png#lightbox)
-A delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. Track the occurrence of hard-delete events by comparing “Delete <object>” events with the type of object that has been deleted, noting those that do not support soft-delete. In addition, note "Hard Delete <object>" events.
+A delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete. Track the occurrence of hard-delete events by comparing "Delete \<object\>" events with the type of object that has been deleted, noting those that do not support soft-delete. In addition, note "Hard Delete \<object\>" events.
| Object type | Activity in log| Result |
For details on restoring users, see the following documentation:
* See [Restore or permanently remove recently deleted user](active-directory-users-restore.md) for restoring in the Azure portal.
-* See [Restore deleted item – Microsoft Graph v1.0](%20/graph/api/directory-deleteditems-restore?view=graph-rest-1.0&tabs=http) for restoring with Microsoft Graph.
+* See [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http) for restoring with Microsoft Graph.
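The restore call referenced above is a single POST against the deleted item's ID. A minimal sketch, with a placeholder object ID:

```powershell
# Sketch: restore a soft-deleted directory object by ID ({object-id} is a placeholder)
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}/restore"
```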
### Groups
For details on restoring soft deleted Microsoft 365 Groups, see the following do
* To restore from the Azure portal, see [Restore a deleted Microsoft 365 group](../enterprise-users/groups-restore-deleted.md).
-* To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?view=graph-rest-1.0&tabs=http).
+* To restore by using Microsoft Graph, see [Restore deleted item – Microsoft Graph v1.0](/graph/api/directory-deleteditems-restore?tabs=http).
### Applications
Hard deleted items must be recreated and reconfigured. It's best to avoid unwant
Ensure you have a process to frequently review items in the soft delete state and restore them if appropriate. To do so, you should:
-* Frequently [list deleted items](/graph/api/directory-deleteditems-list?view=graph-rest-1.0&tabs=http).
+* Frequently [list deleted items](/graph/api/directory-deleteditems-list?tabs=http).
* Ensure that you have specific criteria for what should be restored.
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
Create a process of pre-defined communications to make others aware of the issue
Document the state of your tenant and its objects regularly so that in the event of a hard delete or misconfiguration you have a road map to recovery. The following tools can help you in documenting your current state.

-- The [Microsoft Graph APIs](https://docs.microsoft.com/graph/overview?view=graph-rest-1.0) can be used to export the current state of many Azure AD configurations.
+- The [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.
- You can use the [Azure AD Exporter](https://github.com/microsoft/azureadexporter) to regularly export your configuration settings.
Graph APIs are highly customizable based on your organizational needs. To implem
| Resource types| Reference links |
| - | - |
-| Users, groups, and other directory objects| [directoryObject API](/graph/api/resources/directoryObject?view=graph-rest-1.0) |
-| Directory roles| [directoryRole API](/graph/api/resources/directoryrole?view=graph-rest-1.0) |
-| Conditional Access policies| [Conditional Access policy API](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-1.0) |
-| Devices| [devices API](/graph/api/resources/device?view=graph-rest-1.0) |
-| Domains| [domains API](/graph/api/domain-list?view=graph-rest-1.0&tabs=http) |
-| Administrative Units| [administrativeUnit API)](/graph/api/resources/administrativeunit?view=graph-rest-1.0) |
-| Deleted Items*| [deletedItems API](/graph/api/resources/directory?view=graph-rest-1.0) |
+| Users, groups, and other directory objects| [directoryObject API](/graph/api/resources/directoryObject) |
+| Directory roles| [directoryRole API](/graph/api/resources/directoryrole) |
+| Conditional Access policies| [Conditional Access policy API](/graph/api/resources/conditionalaccesspolicy) |
+| Devices| [devices API](/graph/api/resources/device) |
+| Domains| [domains API](/graph/api/domain-list?tabs=http) |
+| Administrative Units| [administrativeUnit API](/graph/api/resources/administrativeunit) |
+| Deleted Items*| [deletedItems API](/graph/api/resources/directory) |
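As one hedged example of such an export, the sketch below snapshots Conditional Access policies to a JSON file; it assumes the Microsoft Graph PowerShell module and an account with policy read permissions:

```powershell
# Sketch: snapshot Conditional Access policies for point-in-time documentation
Connect-MgGraph -Scopes "Policy.Read.All"

$result = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

# Persist the export so later states can be diffed against it
$result.value | ConvertTo-Json -Depth 10 | Out-File .\ca-policies.json
```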
Securely store these configuration exports with access provided to a limited number of admins.
The deletion of some objects can cause a ripple effect due to dependencies. For
## Monitoring and data retention
-The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes, and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0 ](/graph/api/directory-deleteditems-list?view=graph-rest-1.0&tabs=http)
+The [Azure AD Audit Log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management (SIEM) tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes, and build a custom solution to monitor differences over time. For more information on finding deleted items using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
### Audit logs
-The Audit Log always records a “Delete <object>” event when an object in the tenant is removed from an active state (either from active to soft-deleted or active to hard-deleted).
+The Audit Log always records a "Delete \<object\>" event when an object in the tenant is removed from an active state (either from active to soft-deleted or active to hard-deleted).
:::image type="content" source="media/recoverability/deletions-audit-log.png" alt-text="Screenshot of audit log detail." lightbox="media/recoverability/deletions-audit-log.png":::

A Delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type it's a hard delete.
-| | Activity in log| Result |
+| Object Type | Activity in log| Result |
| - | - | - |
| Application| Delete application| Soft deleted |
| Application| Hard delete application| Hard deleted |
| User| Delete user| Soft deleted |
| User| Hard delete user| Hard deleted |
| Microsoft 365 Groups| Delete group| Soft deleted |
-| Microsoft 365 Group| Hard delete group| Hard deleted |
+| Microsoft 365 Groups| Hard delete group| Hard deleted |
| All other objects| Delete “objectType”| Hard deleted |

> [!NOTE]
There are several Azure Monitor workbooks that can help you to monitor configuration changes.

- Directory role and group membership updates for service principals
- Modified federation settings
- The [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) can help you monitor which applications in external tenants your users are accessing, and which applications in your tenant external users are accessing. Use this workbook to look for anomalous changes in either inbound or outbound application access across tenants.

## Operational security
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Both methods have significant drawbacks in themselves.
| Area of concern | Local credentials | Federation |
|:--|:-|:-|
| Security | - Access continues after external user terminated<br> - Usertype is “member” by default which grants too much default access | - No user level visibility <br> - Unknown partner security posture|
-| Expense | - Password + Multi-Factor Authentication management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | - Small partners cannot afford the infrastructure<br> - Small partners do not have the expertise<br> - Small Partners might only have consumer emails (none IT) |
+| Expense | - Password + Multi-Factor Authentication management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | - Small partners cannot afford the infrastructure<br> - Small partners do not have the expertise<br> - Small Partners might only have consumer emails (no IT) |
| Complexity | - Partner users need to manage an additional set of credentials | - Complexity grows with each new partner<br> - Complexity grows on partners’ side as well |
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
na Previously updated : 01/21/2022 Last updated : 04/27/2022
In this article, you'll learn how to install and configure the Azure Active Directory (Azure AD) Connect Health agents. To download the agents, see [these instructions](how-to-connect-install-roadmap.md#download-and-install-azure-ad-connect-health-agent).
+> [!NOTE]
+> Azure AD Connect Health is not available in the China sovereign cloud.
+
## Requirements

The following table lists requirements for using Azure AD Connect Health.
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
# Introduction to Azure AD Connect V2.0
-Azure AD Connect was released several years ago. Since this time, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. To attempt to update all of these components individually would take time and planning.
+Azure AD Connect was released several years ago. Since this time, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. Attempting to update all of these components individually would take time and planning.
-To address this, we have bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This is a new version of the same software used to accomplish your hybrid identity goals that is built using the latest foundational components.
+
+To address this, we have bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
## What are the major changes?

### SQL Server 2019 LocalDB
-The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB. V2.0 ships with a SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
+The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB. V2.0 ships with a SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information, see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
### MSAL authentication library
-The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2 release ships with the newer MSAL library. For more information see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2 release ships with the newer MSAL library. For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
### Visual C++ Redist 14
-SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This will be installed with the Azure AD Connect V2 package, so you do not have to take any action for the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This redistributable will be installed with the Azure AD Connect V2 package, so you do not have to take any action for the C++ runtime update.
### TLS 1.2
Yes – upgrades from any previous version of Azure AD Connect to Azure AD Conne
Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2 – especially if you are also upgrading to a new operating system version. You can read more about the Import/export configuration feature and how you can use it in this [article](how-to-connect-import-export-config.md).

**I have enabled auto upgrade for Azure AD Connect – will I get this new version automatically?** </br>
-Yes - your Azure AD Connect server will be upgraded to the latest release if you have enabled the auto-upgrade feature. Note that we have no yet release an autop upgrade version for Azure AD Connect.
+Yes - your Azure AD Connect server will be upgraded to the latest release if you have enabled the auto-upgrade feature. Note that we have not yet released an auto-upgrade version for Azure AD Connect.
**I am not ready to upgrade yet – how much time do I have?** </br>
You should upgrade to Azure AD Connect V2 as soon as you can. **__All Azure AD Connect V1 versions will be retired on 31 August, 2022.__** For the time being we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS1.0/1.1 as these services might stop working unexpectedly after they are deprecated.

**I use an external SQL database and do not use SQL 2012 LocalDb – do I still have to upgrade?** </br>
-Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2 - the SQL 2019 drivers in Azure AD Connect V2 are compatible with SQL Server 2012.
+Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2. The SQL 2019 drivers in Azure AD Connect V2 are compatible with SQL Server 2012.
**After the upgrade of my Azure AD Connect instance to V2, will the SQL 2012 components automatically get uninstalled?** </br>
No, the upgrade to SQL 2019 does not remove any SQL 2012 components from your server. If you no longer need these components then you should follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
Until one of the components that are being retired are actually deprecated, you
We expect TLS 1.0/1.1 to be deprecated in 2022, and you need to make sure you are not using these protocols by that date as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2 though, and that does not require an update of Azure AD Connect to V2.
-In June 2022, ADAL is planned to go out of support. When ADAL goes out of support authentication may stop working unexpectedly and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
+In June 2022, ADAL is planned to go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
**After upgrading to V2, the ADSync PowerShell cmdlets do not work?** </br>
-This is a known issue. To resolve this, restart your PowerShell session after installing or upgrading to version 2 and then re-import the module. Use the following instructions to import the module.
+This is a known issue. Restart your PowerShell session after installing or upgrading to version 2 and then reimport the module. Use the following instructions to import the module.
1. Open Windows PowerShell with administrative privileges.
1. Type or copy and paste the following code:
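A sketch of the typical import follows; the module path assumes the default Azure AD Connect installation location and may differ on your server:

```powershell
# Sketch: reimport the ADSync module after upgrading (default install path assumed)
Import-Module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync\ADSync.psd1" -Force
```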
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Azure AD Connect provides the following features:
Azure Active Directory (Azure AD) Connect Health provides robust monitoring of your on-premises identity infrastructure. It enables you to maintain a reliable connection to Microsoft 365 and Microsoft Online Services. This reliability is achieved by providing monitoring capabilities for your key identity components. Also, it makes the key data points about these components easily accessible.

The information is presented in the [Azure AD Connect Health portal](https://aka.ms/aadconnecthealth). Use the Azure AD Connect Health portal to view alerts, performance monitoring, usage analytics, and other information. Azure AD Connect Health enables the single lens of health for your key identity components in one place.

![What is Azure AD Connect Health](./media/whatis-hybrid-identity-health/aadconnecthealth2.png)
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
Previously updated : 12/01/2021 Last updated : 04/26/2022
You can further restrict permissions by assigning roles at smaller scopes or by
> | Create named locations | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
> | Create policies | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
> | Create terms of use | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Create VPN connectivity certificate | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
+> | Create VPN connectivity certificate | [Global Administrator](../roles/permissions-reference.md#global-administrator) | &nbsp; |
> | Delete classic policy | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
> | Delete terms of use | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
> | Delete VPN connectivity certificate | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
active-directory Envoy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envoy-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Envoy test user
-In this section, a user called Britta Simon is created in Envoy. Envoy supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Envoy, a new one is created after authentication.
+In this section, a user called Britta Simon is created in Envoy.
-Envoy also supports automatic user provisioning, you can find more details [here](./envoy-provisioning-tutorial.md) on how to configure automatic user provisioning.
+Envoy supports automatic user provisioning. For details on how to configure it, see [this tutorial](./envoy-provisioning-tutorial.md).
## Test SSO
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Previously updated : 04/26/2022 Last updated : 10/08/2021 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Azure AD Verifiable Credentials architectu
## Create a storage account
-Azure Blob Storage is an object storage solution for the cloud. Azure AD Verifiable Credentials use [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md) to store the configuration files when the service is issuing verifiable credentials.
+Azure Blob Storage is an object storage solution for the cloud. Azure AD Verifiable Credentials uses [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md) to store the configuration files when the service is issuing verifiable credentials.
Create and configure Blob Storage by following these steps:
![Screenshot that shows how to create a container.](media/verifiable-credentials-configure-issuer/create-container.png)
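If you prefer to script the storage setup instead of using the portal, a hedged sketch with the Az PowerShell module follows; all resource names are placeholders:

```powershell
# Sketch: create a storage account and the vc-container (names are placeholders)
New-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storageaccount>" `
    -Location "eastus" -SkuName Standard_LRS

# Build a context that authenticates with the signed-in Azure AD account
$ctx = New-AzStorageContext -StorageAccountName "<storageaccount>" -UseConnectedAccount
New-AzStorageContainer -Name "vc-container" -Context $ctx
```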
+## Grant access to the container
+
+After you create your container, grant the signed-in user the correct role assignment so they can access the files in Blob Storage.
+
+1. From the list of containers, select **vc-container**.
+
+1. From the menu, select **Access Control (IAM)**.
+
+1. Select **+ Add**, and then select **Add role assignment**.
+
+ ![Screenshot that shows how to add a new role assignment to the blob container.](media/verifiable-credentials-configure-issuer/add-role-assignment.png)
+
+1. In **Add role assignment**:
+
+ 1. For the **Role**, select **Storage Blob Data Reader**.
+
+ 1. For the **Assign access to**, select **User, group, or service principal**.
+
+ 1. Then, search for the account that you're using to perform these steps, and select it.
+
+ ![Screenshot that shows how to set up the new role assignment.](media/verifiable-credentials-configure-issuer/add-role-assignment-container.png)
+
+>[!IMPORTANT]
+>By default, container creators get the owner role assigned. The owner role isn't enough on its own. Your account needs the Storage Blob Data Reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/blobs/assign-azure-role-data-access.md).
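The same assignment can be scripted. A minimal sketch with the Az PowerShell module, where the sign-in name, subscription, resource group, and account names are placeholders:

```powershell
# Sketch: grant Storage Blob Data Reader at container scope (placeholders throughout)
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storageaccount>/blobServices/default/containers/vc-container"
```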
+ ### Upload the configuration files
-Azure AD Verifiable Credentials service uses two JSON configuration files, the rules file and the display file.
+Azure AD Verifiable Credentials uses two JSON configuration files: the rules file and the display file.
- The *rules* file describes important properties of verifiable credentials. In particular, it describes the claims that subjects (users) need to provide before a verifiable credential is issued for them.
- The *display* file controls the branding of the credential and styling of the claims.
In this step, you create the verified credential expert card by using Azure AD V
1. For **Subscription**, select your Azure AD subscription where you created Blob Storage.
- 1. Under the **Display file**, select **Select display file**. In the Storage accounts section, select **vc-container**. Then select the **VerifiedCredentialExpertDisplay.json** file and select **Select**.
+ 1. Under the **Display file**, select **Select display file**. In the Storage accounts section, select **vc-container**. Then select the **VerifiedCredentialExpertDisplay.json** file and click **Select**.
1. Under the **Rules file**, select **Select rules file**. In the Storage accounts section, select the **vc-container**. Then select the **VerifiedCredentialExpertRules.json** file, and choose **Select**.
Now you're ready to issue your first verified credential expert card by running
![Screenshot that shows how to respond to the warning message.](media/verifiable-credentials-configure-issuer/at-risk.png)
-1. At the risky website warning, select **Proceed anyways (unsafe)**. You're seeing this warning because your domain isn't linked to your decentralized identifier (DID). To verify your domain, follow the guidance in [Link your domain to your decentralized identifier (DID)](how-to-dnsbind.md). For this tutorial, you can skip the domain registration, and select **Proceed anyways (unsafe).**
+1. At the risky website warning, select **Proceed anyways (unsafe)**. You're seeing this warning because your domain isn't linked to your decentralized identifier (DID). To verify your domain, follow [Link your domain to your decentralized identifier (DID)](how-to-dnsbind.md). For this tutorial, you can skip the domain registration, and select **Proceed anyways (unsafe).**
![Screenshot that shows how to proceed with the risky warning.](media/verifiable-credentials-configure-issuer/proceed-anyway.png)
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Dec
### What is a DID?
-Decentralized Identifers(DIDs) are identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
+Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
### Why do we need a DID?
For the Request API, the new scope for your application or Postman is now:
```3db474b9-6a0c-96ac-1fceb342124f/.default```
-#### **5. Clean up configuration**
-
-**Suggested after May 6, 2022**. Once you have confirmed that the Azure AD verifiable credentials service is working normally, you can issue, verify, etc after May 6, 2022 you can proceed to clean up your tenant so that the Azure AD Verifiable Credentials service has only the new service principals.
-
-1. Run the following PowerShell command to connect to your Azure AD tenant. Replace ```<your tenant ID>``` with your Azure AD tenant ID.
-1. Run the following commands in the same PowerShell session. The AppId ```603b8c59-ba28-40ff-83d1-408eee9a93e5``` and ```bbb94529-53a3-4be5-a069-7eaf2712b826``` refer to the previous Verifiable Credentials service principals.
### How do I reset the Azure AD Verifiable credentials service?

Resetting requires that you opt out and opt back into the Azure Active Directory Verifiable Credentials service. Your existing verifiable credentials configurations will reset, and your tenant will obtain a new DID to use during issuance and presentation.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 04/26/2022 Last updated : 04/27/2022
This article lists the latest features, improvements, and changes in the Azure A
## April
-From April 25th, 2022 the Verifiable Credentials service is available to more Azure tenants. This important update requires any tenant created prior to April 25, 2022 to make a 15 minutes reconfiguration of the service to ensure ongoing operation. Verifiable Credentials service Administrators must perform the [following steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to avoid service disruptions.
+Verifiable Credentials service Administrators must perform a small configuration change before **May 4, 2022** following [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to avoid service disruptions. On May 4, 2022 we'll roll out updates on our service that will result in errors on issuance and presentation on those tenants that haven't applied the changes.
>[!IMPORTANT]
-> When the configuration on your tenant has not been updated, there will be errors on issuance and presentation flows of verifiable credentials from/to your tenant. [Service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
+> When the configuration on your tenant has not been updated, issuance and presentation flows of verifiable credentials from/to your tenant will fail. See the [service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
## March 2022

- Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/
}
```
-API Management uses a public IP address for connections outside the VNet and a private IP address for connections within the VNet.
+### IP addresses for outbound traffic
-When API management is deployed in the [internal VNet configuration](api-management-using-with-internal-vnet.md) and API management connects to private (intranet-facing) backends, internal IP addresses (dynamic IP, or DIP addresses) from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request. Therefore in this configuration, if IP restriction lists secure resources within the VNet, it is recommended to use the whole API Management [subnet range](virtual-network-concepts.md#subnet-size) with an IP rule and not just the private IP address associated with the API Management resource.
+API Management uses a public IP address for a connection outside the VNet or a peered VNet, and a private IP address for a connection within the VNet or a peered VNet.
-When a request is sent from API Management to a public-facing (internet-facing) backend, a public IP address will always be visible as the origin of the request.
+* When API management is deployed in an external or internal virtual network and API management connects to private (intranet-facing) backends, internal IP addresses (dynamic IP, or DIP addresses) from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request.
+
+ Therefore, if IP restriction lists secure resources within the VNet or a peered VNet, it is recommended to use the whole API Management [subnet range](virtual-network-concepts.md#subnet-size) with an IP rule - and (in internal mode) not just the private IP address associated with the API Management resource.
+
+* When a request is sent from API Management to a public (internet-facing) backend, a public IP address will always be visible as the origin of the request.
## IP addresses of Consumption tier API Management service
For traffic restriction purposes, you can use the range of IP addresses of Azure
## Changes to the IP addresses
-In the Developer, Basic, Standard, and Premium tiers of API Management, the public IP addresses (VIP) and private IP addresses (if configured in the internal VNet mode) are static for the lifetime of a service, with the following exceptions:
+In the Developer, Basic, Standard, and Premium tiers of API Management, the public IP addresses (VIP) and private VIP addresses (if configured in the internal VNet mode) are static for the lifetime of a service, with the following exceptions:
* The service is deleted and then re-created.
* The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated.
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
# Connect to a virtual network in internal mode using Azure API Management With Azure virtual networks (VNets), Azure API Management can manage internet-inaccessible APIs using several VPN technologies to make the connection. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
-This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode, In this mode, you can only access the following service endpoints within a VNet whose access you control.
+This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following service endpoints within a VNet whose access you control.
* The API gateway * The developer portal * Direct management
The load-balanced public and private IP addresses can be found on the **Overview
For more information and considerations, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet).
-The load-balanced public and private IP addresses can be found on the **Overview** blade in the Azure portal.
-
-> [!NOTE]
-> The VIP address(es) of the API Management instance will change when:
-> * The VNet is enabled or disabled.
-> * API Management is moved from **External** to **Internal** virtual network mode, or vice versa.
-> * [Zone redundancy](zone-redundancy.md) settings are enabled, updated, or disabled in a location for your instance (Premium SKU only).
->
-> You may need to update DNS registrations, routing rules, and IP restriction lists within the VNet.
-
-### VIP and DIP addresses
-
-Dynamic IP (DIP) addresses will be assigned to each underlying virtual machine in the service and used to access resources *within* the VNet. The API Management service's public virtual IP (VIP) address will be used to access resources *outside* the VNet. If IP restriction lists secure resources within the VNet, you must specify the entire subnet range where the API Management service is deployed to grant or restrict access from the service.
-
-Learn more about the [recommended subnet size](virtual-network-concepts.md#subnet-size).
#### Example
-if you deploy 1 [capacity unit](api-management-capacity.md) of API Management in the Premium tier in an internal VNet, 3 IP addresses will be used: 1 for the private VIP and one each for the DIPs for two VMs. If you scale out to 4 units, more IPs will be consumed for additional DIPs from the subnet.
+If you deploy 1 [capacity unit](api-management-capacity.md) of API Management in the Premium tier in an internal VNet, 3 IP addresses will be used: 1 for the private VIP and one each for the DIPs for two VMs. If you scale out to 4 units, more IPs will be consumed for additional DIPs from the subnet.
If the destination endpoint has allow-listed only a fixed set of DIPs, connection failures will result if you add new units in the future. For this reason and since the subnet is entirely in your control, we recommend allow-listing the entire subnet in the backend.
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
The API Management service depends on several Azure services. When API Managemen
+ A load-balanced public IP address (VIP) is reserved to provide access to all service endpoints and resources outside the VNet. + The public VIP can be found on the **Overview/Essentials** blade in the Azure portal.
-+ An IP address from a subnet IP range (DIP) is used to access resources within the VNet.
For more information and considerations, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#ip-addresses-of-api-management-service-in-vnet). +++ ## <a name="network-configuration-issues"> </a>Common network configuration issues This section has moved. See [Virtual network configuration reference](virtual-network-reference.md).
api-management Howto Use Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md
Azure API Management provides built-in analytics for your APIs. Analyze the usag
* Requests > [!NOTE]
-> * API analytics provides data on requests that are matched with an API and operation. Other calls aren't reported.
+> * API analytics provides data on requests (including failed and unauthorized requests) that are matched with an API and operation. Other calls aren't reported.
> * Geography values are approximate based on IP address mapping. :::image type="content" source="media/howto-use-analytics/analytics-report-portal.png" alt-text="Timeline analytics in portal":::
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
namespace SomeNamespace
If you configure an app setting with the same name in App Service and in *appsettings.json*, for example, the App Service value takes precedence over the *appsettings.json* value. The local *appsettings.json* value lets you debug the app locally, but the App Service value lets you run the app in production with production settings. Connection strings work in the same way. This way, you can keep your application secrets outside of your code repository and access the appropriate values without changing your code.
+> [!NOTE]
+> Note the [hierarchical configuration data](/aspnet/core/fundamentals/configuration/#hierarchical-configuration-data) in *appsettings.json* is accessed using the `__` (double underscore) delimiter, which is the standard delimiter for .NET Core on Linux. To override a specific hierarchical configuration setting in App Service, set the app setting name with the same delimited format in the key. You can run the following example in the [Cloud Shell](https://shell.azure.com):
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings My__Hierarchical__Config__Data="some value"
+```
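+
+As a minimal sketch (assuming the .NET 6 minimal hosting model and the example key above), the overridden setting is read in code with the `:`-delimited key on both operating systems; the environment variable configuration provider maps `__` back to `:`:
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+var app = builder.Build();
+
+// App Service injects the app setting as the environment variable
+// My__Hierarchical__Config__Data; the default configuration providers
+// expose it under the ':'-delimited key.
+string? value = app.Configuration["My:Hierarchical:Config:Data"];
+```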
> [!NOTE] > Note the [hierarchical configuration data](/aspnet/core/fundamentals/configuration/#hierarchical-configuration-data) in *appsettings.json* is accessed using the `:` delimiter that's standard in .NET Core. To override a specific hierarchical configuration setting in App Service, set the app setting name with the same delimited format in the key. You can run the following example in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings My:Hierarchical:Config:Data="some value" ``` ## Deploy multi-project solutions
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
description: Secure HTTPS access to your custom domain by creating a TLS/SSL bin
tags: buy-ssl-certificates Previously updated : 05/13/2021 Last updated : 04/27/2022
Language specific configuration guides, such as the [Linux Node.js configuration
### Azure CLI
-[!code-azurecli[main](../../cli_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.sh?highlight=3-5 "Bind a custom TLS/SSL certificate to a web app")]
+[Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)
### PowerShell
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
description: Create a free certificate, import an App Service certificate, impor
tags: buy-ssl-certificates Previously updated : 05/13/2021 Last updated : 04/27/2022
Now you can delete the App Service certificate. From the left navigation, select
### Azure CLI
-[!code-azurecli[main](../../cli_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.sh?highlight=3-5 "Bind a custom TLS/SSL certificate to a web app")]
+[Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)
### PowerShell
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate App Service Environment v2 to App Se
description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature Previously updated : 4/11/2022 Last updated : 4/27/2022 zone_pivot_groups: app-service-cli-portal
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
## 1. Validate migration is supported
-From the [Azure portal](https://portal.azure.com), navigate to the **Overview** page for the App Service Environment you'll be migrating. The platform will validate if migration is supported for your App Service Environment. Wait a couple seconds after the page loads for this validation to take place. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the potential error messages if migration for your environment isn't supported by the migration feature.
-
-If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the Overview page, a new item in the left-hand side menu called **Migration**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
+From the [Azure portal](https://portal.azure.com), navigate to the **Migration** page for the App Service Environment you'll be migrating. You can do this by clicking on the banner at the top of the **Overview** page for your App Service Environment or by clicking the **Migration** item on the left-hand side.
![migration access points](./media/migration/portal-overview.png)
-![configuration page view](./media/migration/configuration-migration-support.png)
+On the migration page, the platform will validate if migration is supported for your App Service Environment. If your environment isn't supported for migration, a banner will appear at the top of the page and include an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you may see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you won't be able to use the migration feature. If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
-If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+![migration not supported sample](./media/migration/migration-not-supported.png)
-The migration page will guide you through the series of steps to complete the migration.
+If migration is supported for your App Service Environment, you'll be able to proceed to the next step in the process. The migration page will guide you through the series of steps to complete the migration.
![migration page sample](./media/migration/migration-ux-pre.png)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 4/15/2022 Last updated : 4/27/2022
At this time, App Service Environment migrations to v3 using the migration featu
- Australia East - Australia Central - Australia Southeast
+- Brazil South
- Canada Central - Central India
+- Central US
- East Asia - East US - East US 2 - France Central - Germany West Central - Korea Central
+- North Central US
+- North Europe
- Norway East
+- South Central US
- Switzerland North - UAE North - UK South
+- UK West
- West Central US
+- West Europe
+- West US
+- West US 3
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
The normal app access ports inbound are as follows:
You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies.
-You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so exposes specific apps on that App Service Environment. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
+You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so allows you to expose specific apps on that App Service Environment.
+
+Your application will use one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
## Private endpoint
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
-If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `HealthCheck/1.0`. The User-Agent can't be spoofed since the request would already be secured by prior security features.
+If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. Once you have those features in place, you can authenticate the health check request by inspecting the header, `x-ms-auth-internal-token`, and validating that it matches the SHA256 hash of the environment variable `WEBSITE_AUTH_ENCRYPTION_KEY`. If they match, then the health check request is valid and originating from App Service.
+
+##### [.NET](#tab/dotnet)
+
+```C#
+using System;
+using System.Text;
+
+/// <summary>
+/// Method <c>HeaderMatchesEnvVar</c> returns true if <c>headerValue</c> matches WEBSITE_AUTH_ENCRYPTION_KEY.
+/// </summary>
+public Boolean HeaderMatchesEnvVar(string headerValue) {
+ var sha = System.Security.Cryptography.SHA256.Create();
+ String envVar = Environment.GetEnvironmentVariable("WEBSITE_AUTH_ENCRYPTION_KEY");
+ String hash = System.Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(envVar)));
+ return hash == headerValue;
+}
+```
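+
+As a usage sketch (assuming an ASP.NET Core app and a hypothetical `/healthz` health check path), the helper above can short-circuit probes whose header doesn't match:
+
+```csharp
+var app = WebApplication.CreateBuilder(args).Build();
+
+app.Use(async (context, next) =>
+{
+    // Reject health check requests with a missing or mismatched header
+    if (context.Request.Path == "/healthz" &&
+        !HeaderMatchesEnvVar(context.Request.Headers["x-ms-auth-internal-token"]))
+    {
+        context.Response.StatusCode = 403;
+        return;
+    }
+    await next();
+});
+```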
+
+##### [Python](#tab/python)
+
+```python
+from hashlib import sha256
+import base64
+import os
+
+def header_matches_env_var(header_value):
+ """
+ Returns true if SHA256 of header_value matches WEBSITE_AUTH_ENCRYPTION_KEY.
+
+ :param header_value: Value of the x-ms-auth-internal-token header.
+ """
+
+ env_var = os.getenv('WEBSITE_AUTH_ENCRYPTION_KEY')
+ hash = base64.b64encode(sha256(env_var.encode('utf-8')).digest()).decode('utf-8')
+ return hash == header_value
+```
+
+##### [Java](#tab/java)
+
+```java
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.util.Base64;
+import java.nio.charset.StandardCharsets;
+
+public static Boolean headerMatchesEnvVar(String headerValue) throws NoSuchAlgorithmException {
+    // Compare the Base64-encoded SHA-256 hash of WEBSITE_AUTH_ENCRYPTION_KEY to the header value
+    MessageDigest digest = MessageDigest.getInstance("SHA-256");
+    String envVar = System.getenv("WEBSITE_AUTH_ENCRYPTION_KEY");
+    String hash = Base64.getEncoder().encodeToString(digest.digest(envVar.getBytes(StandardCharsets.UTF_8)));
+    return hash.equals(headerValue);
+}
+```
+
+##### [Node.js](#tab/node)
+
+```javascript
+var crypto = require('crypto');
+
+// Returns true if the Base64-encoded SHA256 hash of WEBSITE_AUTH_ENCRYPTION_KEY matches headerValue
+function headerMatchesEnvVar(headerValue) {
+    let envVar = process.env.WEBSITE_AUTH_ENCRYPTION_KEY;
+    let hash = crypto.createHash('sha256').update(envVar).digest('base64');
+    return hash == headerValue;
+}
+```
+++
+> [!NOTE]
+> The `x-ms-auth-internal-token` header is only available on Windows App Service.
+ ## Monitoring
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
This response is the same as the [response for the Azure AD service-to-service a
> [!NOTE] > When connecting to Azure SQL data sources with [Entity Framework Core](/ef/core/), consider [using Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication), which provides special connection strings for managed identity connectivity. For an example, see [Tutorial: Secure Azure SQL Database connection from App Service using a managed identity](tutorial-connect-msi-sql-database.md).
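As a minimal sketch of that approach (the `MyContext` type and server names below are placeholders, and the Microsoft.EntityFrameworkCore.SqlServer package is assumed):

```csharp
using Microsoft.EntityFrameworkCore;

public class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        // "Active Directory Default" makes Microsoft.Data.SqlClient acquire the token
        // through DefaultAzureCredential, so no credentials appear in the string.
        options.UseSqlServer("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;Authentication=Active Directory Default;");
}
```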
-For .NET apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme?). See the respective documentation headings of the client library for information:
+For .NET apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme?). For detailed guidance, see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md).
+
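+For a flavor of the pattern (a hedged sketch; the Key Vault scope below is only an example), `DefaultAzureCredential` picks up the managed identity in App Service and your developer sign-in (Visual Studio, Azure CLI, and so on) when running locally:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+
+var credential = new DefaultAzureCredential();
+AccessToken token = credential.GetToken(
+    new TokenRequestContext(new[] { "https://vault.azure.net/.default" }));
+```
+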
+See the respective documentation headings of the client library for information:
- [Add Azure Identity client library to your project](/dotnet/api/overview/azure/identity-readme#getting-started) - [Access Azure service with a system-assigned identity](/dotnet/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
The linked examples use [`DefaultAzureCredential`](/dotnet/api/overview/azure/id
# [JavaScript](#tab/javascript)
-For Node.js apps and JavaScript functions, the simplest way to work with a managed identity is through the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme?). See the respective documentation headings of the client library for information:
+For Node.js apps and JavaScript functions, the simplest way to work with a managed identity is through the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme?). For detailed guidance, see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md).
+
+See the respective documentation headings of the client library for information:
- [Add Azure Identity client library to your project](/javascript/api/overview/azure/identity-readme#install-the-package) - [Access Azure service with a system-assigned identity](/javascript/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
For more code examples of the Azure Identity client library for JavaScript, see
# [Python](#tab/python)
-For Python apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Python](/python/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+For Python apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Python](/python/api/overview/azure/identity-readme). For detailed guidance, see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md).
+
+See the respective documentation headings of the client library for information:
- [Add Azure Identity client library to your project](/python/api/overview/azure/identity-readme#getting-started) - [Access Azure service with a system-assigned identity](/python/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
The linked examples use [`DefaultAzureCredential`](/python/api/overview/azure/id
# [Java](#tab/java)
-For Java apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Java](/java/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+For Java apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Java](/java/api/overview/azure/identity-readme). For detailed guidance, see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md).
+
+See the respective documentation headings of the client library for information:
- [Add Azure Identity client library to your project](/java/api/overview/azure/identity-readme#include-the-package) - [Access Azure service with a system-assigned identity](/java/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
You can't use gateway-required virtual network integration:
* From a Linux app. * From a [Windows container](./quickstart-custom-container.md). * To access service endpoint-secured resources.
+* To resolve App Settings referencing a network-protected Key Vault.
* With a coexistence gateway that supports both ExpressRoute and point-to-site or site-to-site VPNs. ### Set up a gateway in your Azure virtual network
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
+
+ Title: 'Tutorial: Access Azure databases with managed identity'
+description: Secure database connectivity (Azure SQL Database, Database for MySQL, and Database for PostgreSQL) with managed identity from .NET, Node.js, Python, and Java apps.
+keywords: azure app service, web app, security, msi, managed service identity, managed identity, .net, dotnet, asp.net, c#, csharp, node.js, node, python, java, visual studio, visual studio code, visual studio for mac, azure cli, azure powershell, defaultazurecredential
+
+ms.devlang: csharp,java,javascript,python
+ Last updated : 04/12/2022++
+# Tutorial: Connect to Azure databases from App Service without secrets using a managed identity
+
+[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to Azure databases, including:
+
+- [Azure SQL Database](/azure/sql-database/)
+- [Azure Database for MySQL](/azure/mysql/)
+- [Azure Database for PostgreSQL](/azure/postgresql/)
+
+> [!NOTE]
+> This tutorial doesn't include guidance for [Azure Cosmos DB](/azure/cosmos-db/), which supports Azure Active Directory authentication differently. For information, see Cosmos DB documentation. For example: [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.md).
+
+Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. This tutorial shows you how to connect to the above-mentioned databases from App Service using managed identities.
+
+<!-- ![Architecture diagram for tutorial scenario.](./media/tutorial-connect-msi-sql-database/architecture.png) -->
+
+What you will learn:
+
+> [!div class="checklist"]
+> * Configure an Azure AD user as an administrator for your Azure database.
+> * Connect to your database as the Azure AD user.
+> * Configure a system-assigned or user-assigned managed identity for an App Service app.
+> * Grant database access to the managed identity.
+> * Connect to the Azure database from your code (.NET Framework 4.8, .NET 6, Node.js, Python, Java) using a managed identity.
+> * Connect to the Azure database from your development environment using the Azure AD user.
++
+## Prerequisites
+
+- Create an app in App Service based on .NET, Node.js, Python, or Java.
+- Create a database server with Azure SQL Database, Azure Database for MySQL, or Azure Database for PostgreSQL.
+- You should be familiar with the standard connectivity pattern (with username and password) and be able to connect successfully from your App Service app to your database of choice.
+
+Prepare your environment for the Azure CLI.
++
+## 1. Grant database access to Azure AD user
+
+First, enable Azure Active Directory authentication to the Azure database by assigning an Azure AD user as the administrator of the server. For the scenario in the tutorial, you'll use this user to connect to your Azure database from the local development environment. Later, you set up the managed identity for your App Service app to connect from within Azure.
+
+> [!NOTE]
+> This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
+
+1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
+
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) command. In the following command, replace *\<user-principal-name>* with the user principal name. The result is saved to a variable.
+
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
+ ```
+
+# [Azure SQL Database](#tab/sqldatabase)
+
+3. Add this Azure AD user as an Active Directory administrator using the [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+
+ ```azurecli-interactive
+ az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser
+ ```
+
+    For more information on adding an Active Directory administrator, see [Provision an Azure Active Directory administrator for your server](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).
+
+# [Azure Database for MySQL](#tab/mysql)
+
+3. Add this Azure AD user as an Active Directory administrator using the [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az_mysql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+
+ ```azurecli-interactive
+ az mysql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
+ ```
+
+ > [!NOTE]
+ > The command is currently unavailable for Azure Database for MySQL Flexible Server.
+
+# [Azure Database for PostgreSQL](#tab/postgresql)
+
+3. Add this Azure AD user as an Active Directory administrator using the [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az_postgres_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+
+ ```azurecli-interactive
+ az postgres server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
+ ```
+
+ > [!NOTE]
+ > The command is currently unavailable for Azure Database for PostgreSQL Flexible Server.
+
+--
+
+## 2. Configure managed identity for app
+
+Next, you configure your App Service app to connect to SQL Database with a managed identity.
+
+1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az_webapp_identity_assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
+
+ # [System-assigned identity](#tab/systemassigned/sqldatabase)
+
+ ```azurecli-interactive
+ az webapp identity assign --resource-group <group-name> --name <app-name>
+ ```
+
+ # [System-assigned identity](#tab/systemassigned/mysql)
+
+ ```azurecli-interactive
+ az webapp identity assign --resource-group <group-name> --name <app-name> --output tsv --query principalId
+ az ad sp show --id <output-from-previous-command> --output tsv --query appId
+ ```
+
+ The output of [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) is the application ID of the system-assigned identity. You'll need it later.
+
+ # [System-assigned identity](#tab/systemassigned/postgresql)
+
+ ```azurecli-interactive
+ az webapp identity assign --resource-group <group-name> --name <app-name> --output tsv --query principalId
+ az ad sp show --id <output-from-previous-command> --output tsv --query appId
+ ```
+
+ The output of [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) is the application ID of the system-assigned identity. You'll need it later.
+
+ # [User-assigned identity](#tab/userassigned)
+
+ ```azurecli-interactive
+    # Create a user-assigned identity and get its resource ID
+    az identity create --name <identity-name> --resource-group <group-name> --output tsv --query "id"
+    # Assign the identity to the app
+    az webapp identity assign --resource-group <group-name> --name <app-name> --identities <output-of-previous-command>
+    # Get the client ID of the identity for later
+    az identity show --name <identity-name> --resource-group <group-name> --output tsv --query "clientId"
+ ```
+
+    The output of [az identity show](/cli/azure/identity#az-identity-show) is the client ID of the user-assigned identity. You'll need it later.
+
+ --
+
+ > [!NOTE]
+ > To enable managed identity for a [deployment slot](deploy-staging-slots.md), add `--slot <slot-name>` and use the name of the slot in *\<slot-name>*.
+
+1. The identity needs to be granted permissions to access the database. In the Cloud Shell, sign in to your database with the following command. Replace _\<server-name>_ with your server name, _\<database-name>_ with the database name your app uses, and _\<aad-user-name>_ and _\<aad-password>_ with your Azure AD user's credentials from [1. Grant database access to Azure AD user](#1-grant-database-access-to-azure-ad-user).
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```azurecli-interactive
+ sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
+ ```
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```azurecli-interactive
+ # Sign into Azure using the Azure AD user from "1. Grant database access to Azure AD user"
+ az login --allow-no-subscriptions
+ # Get access token for MySQL with the Azure AD user
+ az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken
+ # Sign into the MySQL server using the token
+ mysql -h <server-name>.mysql.database.azure.com --user <aad-user-name>@<server-name> --enable-cleartext-plugin --password=<token-output-from-last-command> --ssl
+ ```
+
+ The full username *\<aad-user-name>@\<server-name>* looks like `admin1@contoso.onmicrosoft.com@mydbserver1`.
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```azurecli-interactive
+ # Sign into Azure using the Azure AD user from "1. Grant database access to Azure AD user"
+ az login --allow-no-subscriptions
+ # Get access token for PostgreSQL with the Azure AD user
+ az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken
+ # Sign into the Postgres server
+ psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<aad-user-name>@<server-name> password=<token-output-from-last-command>"
+ ```
+
+ The full username *\<aad-user-name>@\<server-name>* looks like `admin1@contoso.onmicrosoft.com@mydbserver1`.
+
+ --
+
+1. Run the following database commands to grant the permissions your app needs. For example:
+
+ # [System-assigned identity](#tab/systemassigned/sqldatabase)
+
+ ```sql
+ CREATE USER [<app-name>] FROM EXTERNAL PROVIDER;
+ ALTER ROLE db_datareader ADD MEMBER [<app-name>];
+ ALTER ROLE db_datawriter ADD MEMBER [<app-name>];
+ ALTER ROLE db_ddladmin ADD MEMBER [<app-name>];
+ GO
+ ```
+
+ For a [deployment slot](deploy-staging-slots.md), use *\<app-name>/slots/\<slot-name>* instead of *\<app-name>*.
+
+ # [User-assigned identity](#tab/userassigned/sqldatabase)
+
+ ```sql
+ CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
+ ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
+ ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
+ ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
+ GO
+ ```
+
+ # [System-assigned identity](#tab/systemassigned/mysql)
+
+ ```sql
+ SET aad_auth_validate_oids_in_tenant = OFF;
+ CREATE AADUSER '<mysql-user-name>' IDENTIFIED BY '<application-id-of-system-assigned-identity>';
+ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON *.* TO '<mysql-user-name>'@'%' WITH GRANT OPTION;
+ FLUSH PRIVILEGES;
+ ```
+
+ Whatever name you choose for *\<mysql-user-name>*, it's the MySQL user you'll use to connect to the database later from your code in App Service.
+
+ # [User-assigned identity](#tab/userassigned/mysql)
+
+ ```sql
+ SET aad_auth_validate_oids_in_tenant = OFF;
+ CREATE AADUSER '<mysql-user-name>' IDENTIFIED BY '<client-id-of-user-assigned-identity>';
+ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON *.* TO '<mysql-user-name>'@'%' WITH GRANT OPTION;
+ FLUSH PRIVILEGES;
+ ```
+
+ Whatever name you choose for *\<mysql-user-name>*, it's the MySQL user you'll use to connect to the database later from your code in App Service.
+
+ # [System-assigned identity](#tab/systemassigned/postgresql)
+
+ ```sql
+ SET aad_validate_oids_in_tenant = off;
+ CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<application-id-of-system-assigned-identity>' IN ROLE azure_ad_user;
+ ```
+
+ Whatever name you choose for *\<postgresql-user-name>*, it's the PostgreSQL user you'll use to connect to the database later from your code in App Service.
+
+ # [User-assigned identity](#tab/userassigned/postgresql)
+
+ ```sql
+ SET aad_validate_oids_in_tenant = off;
+    CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<client-id-of-user-assigned-identity>' IN ROLE azure_ad_user;
+ ```
+
+ Whatever name you choose for *\<postgresql-user-name>*, it's the PostgreSQL user you'll use to connect to the database later from your code in App Service.
+
+ --
+
+## 3. Modify your code
+
+In this section, connectivity to the Azure database in your code follows the `DefaultAzureCredential` pattern for all language stacks. `DefaultAzureCredential` is flexible enough to adapt to both the development environment and the Azure environment. When running locally, it can retrieve the logged-in Azure user from the environment of your choice (Visual Studio, Visual Studio Code, Azure CLI, or Azure PowerShell). When running in Azure, it retrieves the managed identity. So it's possible to have connectivity to database both at development time and in production. The pattern is as follows:
+
+1. Instantiate a `DefaultAzureCredential` from the Azure Identity client library. If you're using a user-assigned identity, specify the client ID of the identity.
+1. Get an access token for the resource URI respective to the database type.
+ - For Azure SQL Database: `https://database.windows.net/.default`
+ - For Azure Database for MySQL: `https://ossrdbms-aad.database.windows.net`
+ - For Azure Database for PostgreSQL: `https://ossrdbms-aad.database.windows.net`
+1. Add the token to your connection string.
+1. Open the connection.
+
+For Azure Database for MySQL and Azure Database for PostgreSQL, the database username that you created in [2. Configure managed identity for app](#2-configure-managed-identity-for-app) is also required in the connection string.
+
+# [.NET Framework](#tab/netfx)
+
+1. In Visual Studio, open the Package Manager Console and add the NuGet packages you need:
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```powershell
+ Install-Package Azure.Identity
+ Install-Package System.Data.SqlClient
+ ```
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```powershell
+ Install-Package Azure.Identity
+ Install-Package MySql.Data
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```powershell
+ Install-Package Azure.Identity
+ Install-Package Npgsql
+ ```
+
+ --
+
+1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```csharp
+ // Uncomment one of the two lines depending on the identity type
+ //var credential = new Azure.Identity.DefaultAzureCredential(); // system-assigned identity
+    //var credential = new Azure.Identity.DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id-of-user-assigned-identity>" }); // user-assigned identity
+
+ // Get token for Azure SQL Database
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" }));
+
+ // Add the token to the SQL connection
+ var connection = new System.Data.SqlClient.SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;TrustServerCertificate=True");
+ connection.AccessToken = token.Token;
+
+ // Open the SQL connection
+ connection.Open();
+ ```
+
+ For a more detailed tutorial, see [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md).
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```csharp
+ using Azure.Identity;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //var credential = new DefaultAzureCredential(); // system-assigned identity
+    //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id-of-user-assigned-identity>" }); // user-assigned identity
+
+ // Get token for Azure Database for MySQL
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+
+ // Set MySQL user depending on the environment
+ string user;
+ if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
+ user = "<aad-user-name>@<server-name>";
+ else user = "<mysql-user-name>@<server-name>";
+
+ // Add the token to the MySQL connection
+ var connectionString = "Server=<server-name>.mysql.database.azure.com;" +
+ "Port=3306;" +
+ "SslMode=Required;" +
+ "Database=<database-name>;" +
+ "Uid=" + user+ ";" +
+ "Password="+ token.Token;
+ var connection = new MySql.Data.MySqlClient.MySqlConnection(connectionString);
+
+ connection.Open();
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```csharp
+ using Azure.Identity;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //var credential = new DefaultAzureCredential(); // system-assigned identity
+    //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id-of-user-assigned-identity>" }); // user-assigned identity
+
+ // Get token for Azure Database for PostgreSQL
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+
+ // Check if in Azure and set user accordingly
+ string postgresqlUser;
+ if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
+ postgresqlUser = "<aad-user-name>@<server-name>";
+ else postgresqlUser = "<postgresql-user-name>@<server-name>";
+
+ // Add the token to the PostgreSQL connection
+ var connectionString = "Server=<server-name>.postgres.database.azure.com;" +
+ "Port=5432;" +
+ "Database=<database-name>;" +
+ "User Id=" + postgresqlUser + ";" +
+ "Password="+ token.Token;
+ var connection = new Npgsql.NpgsqlConnection(connectionString);
+
+ connection.Open();
+ ```
+
+ --
+
+# [.NET 6](#tab/dotnet)
+
+1. Install the .NET packages you need into your .NET project:
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```dotnetcli
+ dotnet add package Microsoft.Data.SqlClient
+ ```
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ dotnet add package MySql.Data
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ dotnet add package Npgsql
+ ```
+
+ --
+
+1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```csharp
+ using Microsoft.Data.SqlClient;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //SqlConnection connection = new SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;Authentication=Active Directory Default;TrustServerCertificate=True"); // system-assigned identity
+ //SqlConnection connection = new SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;Authentication=Active Directory Default;User Id=<client-id-of-user-assigned-identity>;TrustServerCertificate=True"); // user-assigned identity
+
+ // Open the SQL connection
+ connection.Open();
+ ```
+
+ [Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication?view=azuresqldb-current&preserve-view=true) provides integrated support of Azure AD authentication. In this case, the [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication?view=azuresqldb-current&preserve-view=true#using-active-directory-default-authentication) uses `DefaultAzureCredential` to retrieve the required token for you and adds it to the database connection directly.
+
+ For a more detailed tutorial, see [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md).
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```csharp
+ using Azure.Identity;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //var credential = new DefaultAzureCredential(); // system-assigned identity
+    //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id-of-user-assigned-identity>" }); // user-assigned identity
+
+ // Get token for Azure Database for MySQL
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+
+ // Set MySQL user depending on the environment
+ string user;
+ if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
+ user = "<aad-user-name>@<server-name>";
+ else user = "<mysql-user-name>@<server-name>";
+
+ // Add the token to the MySQL connection
+ var connectionString = "Server=<server-name>.mysql.database.azure.com;" +
+ "Port=3306;" +
+ "SslMode=Required;" +
+ "Database=<database-name>;" +
+ "Uid=" + user+ ";" +
+ "Password="+ token.Token;
+ var connection = new MySql.Data.MySqlClient.MySqlConnection(connectionString);
+
+ connection.Open();
+ ```
+
+    The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the MySQL connection as the password for the Azure identity. For more information, see [Connect with Managed Identity to Azure Database for MySQL](../mysql/howto-connect-with-managed-identity.md).
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```csharp
+ using Azure.Identity;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //var credential = new DefaultAzureCredential(); // system-assigned identity
+    //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = "<client-id-of-user-assigned-identity>" }); // user-assigned identity
+
+ // Get token for Azure Database for PostgreSQL
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net" }));
+
+ // Check if in Azure and set user accordingly
+ string postgresqlUser;
+ if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
+ postgresqlUser = "<aad-user-name>@<server-name>";
+ else postgresqlUser = "<postgresql-user-name>@<server-name>";
+
+ // Add the token to the PostgreSQL connection
+ var connectionString = "Server=<server-name>.postgres.database.azure.com;" +
+ "Port=5432;" +
+ "Database=<database-name>;" +
+ "User Id=" + postgresqlUser + ";" +
+ "Password="+ token.Token;
+ var connection = new Npgsql.NpgsqlConnection(connectionString);
+
+ connection.Open();
+ ```
+
+ The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the PostgreSQL connection as the password for the Azure identity. For more information, see [Connect with Managed Identity to Azure Database for PostgreSQL](../postgresql/howto-connect-with-managed-identity.md).
+
+ --
+
+# [Node.js](#tab/nodejs)
+
+1. Install the required npm packages you need into your Node.js project:
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```terminal
+ npm install --save @azure/identity
+ npm install --save tedious
+ ```
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```terminal
+ npm install --save @azure/identity
+ npm install --save mysql2
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```terminal
+ npm install --save @azure/identity
+ npm install --save pg
+ ```
+
+ --
+
+1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```javascript
+ const { Connection, Request } = require("tedious");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Uncomment one of the two lines depending on the identity type
+ //const credential = new DefaultAzureCredential(); // system-assigned identity
+ //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
+
+ // Get token for Azure SQL Database
+ const accessToken = await credential.getToken("https://database.windows.net/.default");
+
+ // Create connection to database
+ const connection = new Connection({
+ server: '<server-name>.database.windows.net',
+ authentication: {
+ type: 'azure-active-directory-access-token',
+ options: {
+ token: accessToken.token
+ }
+ },
+ options: {
+ database: '<database-name>',
+ encrypt: true,
+ port: 1433
+ }
+ });
+
+ // Open the database connection
+ connection.connect();
+ ```
+
+    The [tedious](https://tediousjs.github.io/tedious/) library also has an authentication type `azure-active-directory-msi-app-service`, which doesn't require you to retrieve the token yourself, but the use of `DefaultAzureCredential` in this example works both in App Service and in your local development environment. For more information, see [Quickstart: Use Node.js to query a database in Azure SQL Database or Azure SQL Managed Instance](../azure-sql/database/connect-query-nodejs.md).
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```javascript
+ const mysql = require('mysql2');
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Uncomment one of the two lines depending on the identity type
+ //const credential = new DefaultAzureCredential(); // system-assigned identity
+ //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
+
+ // Get token for Azure Database for MySQL
+ const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net");
+
+ // Set MySQL user depending on the environment
+ if(process.env.IDENTITY_ENDPOINT) {
+ var mysqlUser = '<mysql-user-name>@<server-name>';
+ } else {
+ var mysqlUser = '<aad-user-name>@<server-name>';
+ }
+
+ // Add the token to the MySQL connection
+ var config =
+ {
+ host: '<server-name>.mysql.database.azure.com',
+ user: mysqlUser,
+ password: accessToken.token,
+ database: '<database-name>',
+ port: 3306,
+ insecureAuth: true,
+ authPlugins: {
+ mysql_clear_password: () => () => {
+ return Buffer.from(accessToken.token + '\0')
+ }
+ }
+ };
+
+    const conn = mysql.createConnection(config);
+
+ // Open the database connection
+ conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else
+ {
+ ...
+ }
+ });
+ ```
+
+ The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-nodejs.md) as the password of the Azure identity.
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```javascript
+ const pg = require('pg');
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Uncomment one of the two lines depending on the identity type
+ //const credential = new DefaultAzureCredential(); // system-assigned identity
+ //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
+
+ // Get token for Azure Database for PostgreSQL
+ const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net");
+
+    // Set PostgreSQL user depending on the environment
+ if(process.env.IDENTITY_ENDPOINT) {
+ var postgresqlUser = '<postgresql-user-name>@<server-name>';
+ } else {
+ var postgresqlUser = '<aad-user-name>@<server-name>';
+ }
+
+ // Add the token to the PostgreSQL connection
+ var config =
+ {
+ host: '<server-name>.postgres.database.azure.com',
+ user: postgresqlUser,
+ password: accessToken.token,
+ database: '<database-name>',
+ port: 5432
+ };
+
+ const client = new pg.Client(config);
+
+ // Open the database connection
+ client.connect(err => {
+ if (err) throw err;
+ else {
+ // Do something with the connection...
+ }
+ });
+
+ ```
+
+ The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-nodejs.md) as the password of the Azure identity.
+
+ --
+
+# [Python](#tab/python)
+
+1. In your Python project, install the required packages.
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```terminal
+ pip install azure-identity
+ pip install pyodbc
+ ```
+
+ The required [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server) is already installed in App Service. To run the same code locally, install it in your local environment too.
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```terminal
+ pip install azure-identity
+ pip install mysql-connector-python
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```terminal
+ pip install azure-identity
+ pip install psycopg2-binary
+ ```
+
+ --
+
+1. Connect to the Azure database by using an access token:
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ import pyodbc, struct
+
+ # Uncomment one of the two lines depending on the identity type
+ #credential = DefaultAzureCredential() # system-assigned identity
+ #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
+
+ # Get token for Azure SQL Database and convert to UTF-16-LE for SQL Server driver
+ token = credential.get_token("https://database.windows.net/.default").token.encode("UTF-16-LE")
+ token_struct = struct.pack(f'<I{len(token)}s', len(token), token)
+
+ # Connect with the token
+ SQL_COPT_SS_ACCESS_TOKEN = 1256
+ connString = f"Driver={{ODBC Driver 17 for SQL Server}};SERVER=<server-name>.database.windows.net;DATABASE=<database-name>"
+ conn = pyodbc.connect(connString, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct})
+ ```
+
+    The ODBC Driver 17 for SQL Server also supports an authentication type `ActiveDirectoryMsi`. You can connect from App Service without getting the token yourself, simply with the connection string `Driver={{ODBC Driver 17 for SQL Server}};SERVER=<server-name>.database.windows.net;DATABASE=<database-name>;Authentication=ActiveDirectoryMsi`. The code above instead gets the token with `DefaultAzureCredential`, which works both in App Service and in your local development environment.
+
+ For more information about PyODBC, see [PyODBC SQL Driver](/sql/connect/python/pyodbc/python-sql-driver-pyodbc).
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ import mysql.connector
+ import os
+
+ # Uncomment one of the two lines depending on the identity type
+ #credential = DefaultAzureCredential() # system-assigned identity
+ #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
+
+ # Get token for Azure Database for MySQL
+ token = credential.get_token("https://ossrdbms-aad.database.windows.net")
+
+ # Set MySQL user depending on the environment
+ if 'IDENTITY_ENDPOINT' in os.environ:
+ mysqlUser = '<mysql-user-name>@<server-name>'
+ else:
+ mysqlUser = '<aad-user-name>@<server-name>'
+
+ # Connect with the token
+ os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
+ config = {
+ 'host': '<server-name>.mysql.database.azure.com',
+ 'database': '<database-name>',
+ 'user': mysqlUser,
+ 'password': token.token
+ }
+ conn = mysql.connector.connect(**config)
+ print("Connection established")
+ ```
+
+ The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-python.md) as the password of the Azure identity.
+
+ The `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` environment variable enables the [Cleartext plugin](https://dev.mysql.com/doc/refman/8.0/cleartext-pluggable-authentication.html) in the MySQL Connector (see [Use Azure Active Directory for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```python
+    from azure.identity import DefaultAzureCredential
+    import psycopg2
+    import os
+
+ # Uncomment one of the two lines depending on the identity type
+ #credential = DefaultAzureCredential() # system-assigned identity
+ #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
+
+ # Get token for Azure Database for PostgreSQL
+ token = credential.get_token("https://ossrdbms-aad.database.windows.net")
+
+ # Set PostgreSQL user depending on the environment
+ if 'IDENTITY_ENDPOINT' in os.environ:
+ postgresUser = '<postgres-user-name>@<server-name>'
+ else:
+ postgresUser = '<aad-user-name>@<server-name>'
+
+ # Connect with the token
+ host = "<server-name>.postgres.database.azure.com"
+ dbname = "<database-name>"
+ conn_string = "host={0} user={1} dbname={2} password={3}".format(host, postgresUser, dbname, token.token)
+ conn = psycopg2.connect(conn_string)
+ ```
+
+ The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-python.md) as the password of the Azure identity.
+
+ Whatever database driver you use, make sure it can send the token as clear text (see [Use Azure Active Directory for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
+
+ --
+
+# [Java](#tab/java)
+
+1. Add the required dependencies to your project's POM file (`pom.xml`).
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.4.6</version>
+ </dependency>
+ <dependency>
+ <groupId>com.microsoft.sqlserver</groupId>
+ <artifactId>mssql-jdbc</artifactId>
+ <version>10.2.0.jre11</version>
+ </dependency>
+ ```
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.4.6</version>
+ </dependency>
+ <dependency>
+ <groupId>mysql</groupId>
+ <artifactId>mysql-connector-java</artifactId>
+ <version>8.0.28</version>
+ </dependency>
+ ```
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.4.6</version>
+ </dependency>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.3.3</version>
+ </dependency>
+ ```
+
+ --
+
+1. Connect to the Azure database by using an access token:
+
+ # [Azure SQL Database](#tab/sqldatabase)
+
+ ```java
+ import com.azure.identity.*;
+ import com.azure.core.credential.*;
+ import com.microsoft.sqlserver.jdbc.SQLServerDataSource;
+ import java.sql.*;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
+
+ // Get the token
+ TokenRequestContext request = new TokenRequestContext();
+ request.addScopes("https://database.windows.net//.default");
+ AccessToken token = creds.getToken(request).block();
+
+ // Set token in your SQL connection
+ SQLServerDataSource ds = new SQLServerDataSource();
+ ds.setServerName("<server-name>.database.windows.net");
+ ds.setDatabaseName("<database-name>");
+ ds.setAccessToken(token.getToken());
+
+ // Connect
+ try {
+ Connection connection = ds.getConnection();
+ Statement stmt = connection.createStatement();
+ ResultSet rs = stmt.executeQuery("SELECT SUSER_SNAME()");
+ if (rs.next()) {
+ System.out.println("Signed into database as: " + rs.getString(1));
+ }
+ }
+ catch (Exception e) {
+ System.out.println(e.getMessage());
+ }
+ ```
+
+ The JDBC Driver for SQL Server also supports the [ActiveDirectoryMsi](/sql/connect/jdbc/connecting-using-azure-active-directory-authentication#connect-using-activedirectorymsi-authentication-mode) authentication type, which is easier to use from App Service. The code above instead gets the token with `DefaultAzureCredential`, which works both in App Service and in your local development environment.
+
+ # [Azure Database for MySQL](#tab/mysql)
+
+ ```java
+ import com.azure.identity.*;
+ import com.azure.core.credential.*;
+ import java.sql.*;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
+
+ // Get the token
+ TokenRequestContext request = new TokenRequestContext();
+ request.addScopes("https://ossrdbms-aad.database.windows.net");
+ AccessToken token = creds.getToken(request).block();
+
+ // Set MySQL user depending on the environment
+ String mysqlUser;
+ if (System.getenv("IDENTITY_ENDPOINT") != null) {
+ mysqlUser = "<mysql-user-name>@<server-name>";
+ }
+ else {
+ mysqlUser = "<aad-user-name>@<server-name>";
+ }
+
+ // Set token in your SQL connection
+ try {
+ Connection connection = DriverManager.getConnection(
+ "jdbc:mysql://<server-name>.mysql.database.azure.com/<database-name>",
+ mysqlUser,
+ token.getToken());
+ Statement stmt = connection.createStatement();
+ ResultSet rs = stmt.executeQuery("SELECT USER();");
+ if (rs.next()) {
+ System.out.println("Signed into database as: " + rs.getString(1));
+ }
+ }
+ catch (Exception e) {
+ System.out.println(e.getMessage());
+ }
+ ```
+
+ The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-java.md) as the password of the Azure identity.
+
+ # [Azure Database for PostgreSQL](#tab/postgresql)
+
+ ```java
+ import com.azure.identity.*;
+ import com.azure.core.credential.*;
+ import java.sql.*;
+
+ ...
+
+ // Uncomment one of the two lines depending on the identity type
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
+ //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
+
+ // Get the token
+ TokenRequestContext request = new TokenRequestContext();
+ request.addScopes("https://ossrdbms-aad.database.windows.net");
+ AccessToken token = creds.getToken(request).block();
+
+ // Set PostgreSQL user depending on the environment
+ String postgresUser;
+ if (System.getenv("IDENTITY_ENDPOINT") != null) {
+ postgresUser = "<postgresql-user-name>@<server-name>";
+ }
+ else {
+ postgresUser = "<aad-user-name>@<server-name>";
+ }
+
+ // Set token in your SQL connection
+ try {
+ Connection connection = DriverManager.getConnection(
+ "jdbc:postgresql://<server-name>.postgres.database.azure.com:5432/<database-name>",
+ postgresUser,
+ token.getToken());
+ Statement stmt = connection.createStatement();
+ ResultSet rs = stmt.executeQuery("select current_user;");
+ if (rs.next()) {
+ System.out.println("Signed into database as: " + rs.getString(1));
+ }
+ }
+ catch (Exception e) {
+ System.out.println(e.getMessage());
+ }
+ ```
+
+ The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-java.md) as the password of the Azure identity. To see how to do the same with specific frameworks, see:
+
+ - [Spring Data JDBC](/azure/developer/java/spring-framework/configure-spring-data-jdbc-with-azure-postgresql)
+ - [Spring Data JPA](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-postgresql)
+ - [Spring Data R2DBC](/azure/developer/java/spring-framework/configure-spring-data-r2dbc-with-azure-postgresql)
+ --
+
+--
+
+## 4. Set up your dev environment
+
+This sample code uses `DefaultAzureCredential` to get a usable token for your Azure database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, it's already versatile by default. It gets a token from the signed-in Azure AD user or from a managed identity, depending on whether you run the code locally in your development environment or in App Service.
+
+Without any further changes, your code is ready to run in Azure. To debug your code locally, however, your development environment needs a signed-in Azure AD user. In this step, you configure your environment of choice by signing in [with your Azure AD user](#1-grant-database-access-to-azure-ad-user).
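+
+As a minimal sketch of this fallback behavior (assuming Azure SQL Database as the target; the MySQL and PostgreSQL scopes differ, as shown earlier):
+
+```python
+from azure.identity import DefaultAzureCredential
+
+# In App Service, DefaultAzureCredential resolves to the app's managed identity;
+# on your dev machine, it falls back to your signed-in Azure AD user
+# (Visual Studio, Azure CLI, Azure PowerShell, and so on)
+credential = DefaultAzureCredential()
+token = credential.get_token("https://database.windows.net/.default")
+print("Token acquired; expires at", token.expires_on)
+```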
+
+# [Visual Studio Windows](#tab/windowsclient)
+
+1. Visual Studio for Windows is integrated with Azure AD authentication. To enable development and debugging in Visual Studio, add your Azure AD user in Visual Studio by selecting **File** > **Account Settings** from the menu, and select **Sign in** or **Add**.
+
+1. To set the Azure AD user for Azure service authentication, select **Tools** > **Options** from the menu, then select **Azure Service Authentication** > **Account Selection**. Select the Azure AD user you added and select **OK**.
+
+# [Visual Studio for macOS](#tab/macosclient)
+
+1. Visual Studio for Mac is *not* integrated with Azure AD authentication. However, the Azure Identity client library that you'll use later can also retrieve tokens from Azure CLI. To enable development and debugging in Visual Studio, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+
+1. Sign in to Azure CLI with the following command using your Azure AD user:
+
+ ```azurecli
+ az login --allow-no-subscriptions
+ ```
+
+# [Visual Studio Code](#tab/vscode)
+
+1. Visual Studio Code is integrated with Azure AD authentication through the Azure extension. Install the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack" target="_blank">Azure Tools</a> extension in Visual Studio Code.
+
+1. In Visual Studio Code, in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface), select the **Azure** logo.
+
+1. In the **App Service** explorer, select **Sign in to Azure...** and follow the instructions.
+
+# [Azure CLI](#tab/cli)
+
+1. The Azure Identity client library that you'll use later can use tokens from Azure CLI. To enable command-line based development, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+
+1. Sign in to Azure with the following command using your Azure AD user:
+
+ ```azurecli
+ az login --allow-no-subscriptions
+ ```
+
+# [Azure PowerShell](#tab/ps)
+
+1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
+
+1. Sign in to Azure with the following cmdlet using your Azure AD user:
+
+ ```powershell-interactive
+ Connect-AzAccount
+ ```
+
+--
+
+For more information about setting up your dev environment for Azure Active Directory authentication, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/Identity-readme).
+
+You're now ready to develop and debug your app with the SQL Database as the back end, using Azure AD authentication.
+
+## 5. Test and publish
+
+1. Run your code in your dev environment. Your code uses the [signed-in Azure AD user](#1-grant-database-access-to-azure-ad-user) in your environment to connect to the back-end database. The user can access the database because they're configured as an Azure AD administrator for the database.
+
+1. Publish your code to Azure using the preferred publishing method. In App Service, your code uses the app's managed identity to connect to the back-end database.
+
+## Frequently asked questions
+
+- [Does managed identity support SQL Server?](#does-managed-identity-support-sql-server)
+- [I get the error `Login failed for user '<token-identified principal>'.`](#i-get-the-error-login-failed-for-user-token-identified-principal)
+- [I made changes to App Service authentication or the associated app registration. Why do I still get the old token?](#i-made-changes-to-app-service-authentication-or-the-associated-app-registration-why-do-i-still-get-the-old-token)
+- [How do I add the managed identity to an Azure AD group?](#how-do-i-add-the-managed-identity-to-an-azure-ad-group)
+- [I get the error `mysql: unknown option '--enable-cleartext-plugin'`.](#i-get-the-error-mysql-unknown-optionenable-cleartext-plugin)
+- [I get the error `SSL connection is required. Please specify SSL options and retry`.](#i-get-the-error-ssl-connection-is-required-please-specify-ssl-options-and-retry)
+
+#### Does managed identity support SQL Server?
+
+Azure Active Directory and managed identities aren't supported for on-premises SQL Server.
+
+#### I get the error `Login failed for user '<token-identified principal>'.`
+
+The managed identity you're requesting a token for hasn't been granted access to the Azure database. Grant database access to the managed identity as described earlier in this tutorial.
+
+#### I made changes to App Service authentication or the associated app registration. Why do I still get the old token?
+
+The back-end services of managed identities also [maintain a token cache](overview-managed-identity.md#configure-target-resource) that updates the token for a target resource only when it expires. If you modify the configuration *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires. The best way to work around this is to test your changes with a new InPrivate (Edge)/private (Safari)/Incognito (Chrome) window. That way, you're sure to start from a new authenticated session.
+
+#### How do I add the managed identity to an Azure AD group?
+
+If you want, you can add the identity to an [Azure AD group](../active-directory/fundamentals/active-directory-manage-groups.md), then grant access to the Azure AD group instead of the identity. For example, the following commands add the managed identity from the previous step to a new group called _myAzureSQLDBAccessGroup_:
+
+```azurecli-interactive
+groupid=$(az ad group create --display-name myAzureSQLDBAccessGroup --mail-nickname myAzureSQLDBAccessGroup --query objectId --output tsv)
+msiobjectid=$(az webapp identity show --resource-group <group-name> --name <app-name> --query principalId --output tsv)
+az ad group member add --group $groupid --member-id $msiobjectid
+az ad group member list -g $groupid
+```
+
+To grant database permissions for an Azure AD group, see documentation for the respective database type.
+
+#### I get the error `mysql: unknown option '--enable-cleartext-plugin'`.
+
+If you're using a MariaDB client, the `--enable-cleartext-plugin` option isn't required.
+
+#### I get the error `SSL connection is required. Please specify SSL options and retry`.
+
+Connecting to the Azure database requires additional settings and is beyond the scope of this tutorial. For more information, see one of the following links:
+
+- [Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](../postgresql/concepts-ssl-connection-security.md)
+- [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](../mysql/howto-configure-ssl.md)
+
+## Next steps
+
+What you learned:
+
+> [!div class="checklist"]
+> * Configure an Azure AD user as an administrator for your Azure database.
+> * Connect to your database as the Azure AD user.
+> * Configure a system-assigned or user-assigned managed identity for an App Service app.
+> * Grant database access to the managed identity.
+> * Connect to the Azure database from your code (.NET Framework 4.8, .NET 6, Node.js, Python, Java) using a managed identity.
+> * Connect to the Azure database from your development environment using the Azure AD user.
+
+> [!div class="nextstepaction"]
+> [How to use managed identities for App Service and Azure Functions](overview-managed-identity.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure services that don't support managed identities (using Key Vault)](tutorial-connect-msi-key-vault.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Title: 'Tutorial: Access data with managed identity'
-description: Secure database connectivity with managed identity from .NET web app, and also how to apply it to other Azure services.
+description: Secure Azure SQL Database connectivity with managed identity from a sample .NET web app, and also how to apply it to other Azure services.
ms.devlang: csharp
When you're finished, your sample app will connect to SQL Database securely with
> - .NET Framework 4.8 and above > - .NET 6.0 and above >
+> For guidance for Azure Database for MySQL or Azure Database for PostgreSQL in other language frameworks (Node.js, Python, and Java), see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md).
What you will learn:
What you learned:
> [!div class="nextstepaction"] > [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md)
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md)
+ > [!div class="nextstepaction"] > [Tutorial: Connect to Azure services that don't support managed identities (using Key Vault)](tutorial-connect-msi-key-vault.md)
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
You'll need a business card document. You can use our [sample business card docu
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following table describes the features available with the associated tools a
> Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API. * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats are JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats are JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF files, up to 2,000 pages can be processed. With a free tier subscription, only the first two pages are processed.
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values. They can't appear below or to the right.
> [!TIP] > Training data:
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
The key value pair extraction model and entity identification model are run in p
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
## Supported languages and locales
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
You'll need an ID document. You can use our [sample ID document](https://raw.git
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
You'll need an invoice document. You can use our [sample invoice document](https
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
Following are the line items extracted from an invoice in the JSON output respon
| Unit | String| The unit of the line item, e.g, kg, lb etc. | Hours | | | Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 | | Tax | Number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
-| VAT | Number | Stands for Value added tax. This is a flat tax levied on an item. Common in european countries | &euro;20.00 | |
+| VAT | Number | Stands for Value added tax. This is a flat tax levied on an item. Common in European countries | &euro;20.00 | |
The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
You'll need a form document. You can use our [sample form document](https://raw.
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB (4 MB for the free tier).
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. > [!NOTE]
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
See how text is extracted from forms and documents using the Form Recognizer Stu
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB (4 MB for the free tier)
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. ## Supported languages and locales
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
You will need a receipt document. You can use our [sample receipt document](http
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
## Supported languages and locales v2.1
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. * PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
-* For unsupervised learning (without labeled data):
- * Data must contain keys and values.
- * Keys must appear above or to the left of the values; they can't appear below or to the right.
## Supported languages and locales
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
tee | x-ms-attestation-type
policy_hash | x-ms-policy-hash maa-policyHash | x-ms-policy-hash policy_signer | x-ms-policy-signer
+rp_data | nonce
### SGX attestation
Below claims are generated and included in the attestation token by the service
- **quotehash**: SHA256 value of the evaluated quote - **tcbinfocertshash**: SHA256 value of the TCB Info issuing certs - **tcbinfocrlhash**: SHA256 value of the TCB Info issuing certs CRL list
- - **tcbinfohash**: SHA256 value of the TCB Info collateral
+ - **tcbinfohash**: SHA256 value of the TCB Info collateral
+- **x-ms-sgx-report-data**: SGX enclave report data field (usually SHA256 hash of x-ms-sgx-ehd)
Below claims are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names.
$maa-ehd | x-ms-sgx-ehd
$aas-ehd | x-ms-sgx-ehd $maa-attestationcollateral | x-ms-sgx-collateral
+### SEV-SNP attestation
+
+The following claims are additionally supported by the SevSnpVm attestation type:
+
+- **x-ms-runtime**: JSON object containing "claims" that are defined and generated within the attested environment. This is a specialization of the "enclave held data" concept, where the "enclave held data" is specifically formatted as a UTF-8 encoding of well-formed JSON
+- **x-ms-sevsnpvm-authorkeydigest**: SHA384 hash of the author signing key
+- **x-ms-sevsnpvm-bootloader-svn**: AMD boot loader security version number (SVN)
+- **x-ms-sevsnpvm-familyId**: HCL family identification string
+- **x-ms-sevsnpvm-guestsvn**: HCL security version number (SVN)
+- **x-ms-sevsnpvm-hostdata**: Arbitrary data defined by the host at VM launch time
+- **x-ms-sevsnpvm-idkeydigest**: SHA384 hash of the identification signing key
+- **x-ms-sevsnpvm-imageId**: HCL image identification
+- **x-ms-sevsnpvm-is-debuggable**: Boolean value indicating whether AMD SEV-SNP debugging is enabled
+- **x-ms-sevsnpvm-launchmeasurement**: Measurement of the launched guest image
+- **x-ms-sevsnpvm-microcode-svn**: AMD microcode security version number (SVN)
+- **x-ms-sevsnpvm-migration-allowed**: Boolean value indicating whether AMD SEV-SNP migration support is enabled
+- **x-ms-sevsnpvm-reportdata**: Data passed by HCL to include with report, to verify that transfer key and VM configuration are correct
+- **x-ms-sevsnpvm-reportid**: Report ID of the guest
+- **x-ms-sevsnpvm-smt-allowed**: Boolean value indicating whether SMT is enabled on the host
+- **x-ms-sevsnpvm-snpfw-svn**: AMD firmware security version number (SVN)
+- **x-ms-sevsnpvm-tee-svn**: AMD trusted execution environment (TEE) security version number (SVN)
+- **x-ms-sevsnpvm-vmpl**: VMPL that generated this report (0 for HCL)
+ ### TPM and VBS attestation - **cnf (Confirmation)**: The "cnf" claim is used to identify the proof-of-possession key. Confirmation claim as defined in RFC 7800, contains the public part of the attested enclave key represented as a JSON Web Key (JWK) object (RFC 7517)
automation Automation Update Azure Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-update-azure-modules.md
If you develop your scripts locally, it's recommended to have the same module ve
## Update Az modules
+You can update Az modules through the portal **(recommended)** or through the runbook.
+
+### Update Az modules through portal
Currently, updating Az modules is only available through the portal. Updates through PowerShell and ARM templates will be available in the future. Only default Az modules will be updated when performing the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Automation account.
You can verify the update operation by checking the Module version and Status pr
The Azure team will regularly update the module version and provide an option to update the **default** Az modules by selecting the module version from the drop-down list.
-## Obtain a runbook to use for updates
+### Update Az modules through runbook
-To update the Azure modules in your Automation account, you must use the [Update-AutomationAzureModulesForAccount](https://github.com/Microsoft/AzureAutomation-Account-Modules-Update) runbook, which is available as open source. To start using this runbook to update your Azure modules, download it from the GitHub repository. You can then import it into your Automation account or run it as a script. To learn how to import a runbook in your Automation account, see [Import a runbook](manage-runbooks.md#import-a-runbook).
+To update the Azure modules in your Automation account, you must use the [Update-AutomationAzureModulesForAccount](https://github.com/Microsoft/AzureAutomation-Account-Modules-Update) runbook, available as open source. To start using this runbook to update your Azure modules, download it from the GitHub repository. You can then import it into your Automation account or run it as a script. To learn how to import a runbook in your Automation account, see [Import a runbook](manage-runbooks.md#import-a-runbook). If the runbook fails, we recommend that you modify its parameters to suit your needs, since the runbook is open source and provided as a reference.
The **Update-AutomationAzureModulesForAccount** runbook supports updating the Azure, AzureRM, and Az modules by default. Review the [Update Azure modules runbook README](https://github.com/microsoft/AzureAutomation-Account-Modules-Update/blob/master/README.md) for more information on updating Az.Automation modules with this runbook. There are additional important factors that you need to take into account when using the Az modules in your Automation account. To learn more, see [Manage modules in Azure Automation](shared-resources/modules.md).
-## Use update runbook code as a regular PowerShell script
+#### Use the update runbook code as a regular PowerShell script
You can use the runbook code as a regular PowerShell script instead of a runbook. To do this, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet first, then pass `-Login $false` to the script.
-## Use the update runbook on sovereign clouds
-
+#### Use the update runbook on sovereign clouds
To use this runbook on sovereign clouds, use the `AzEnvironment` parameter to pass the correct environment to the runbook. Acceptable values are AzureCloud (Azure public cloud), AzureChinaCloud, AzureGermanCloud, and AzureUSGovernment. These values can be retrieved using `Get-AzEnvironment | select Name`. If you don't pass a value to this cmdlet, the runbook defaults to AzureCloud.
-## Use the update runbook to update a specific module version
+#### Use the update runbook to update a specific module version
If you want to use a specific Azure PowerShell module version instead of the latest module available on the PowerShell Gallery, pass these versions to the optional `ModuleVersionOverrides` parameter of the **Update-AutomationAzureModulesForAccount** runbook. For examples, see the [Update-AutomationAzureModulesForAccount.ps1](https://github.com/Microsoft/AzureAutomation-Account-Modules-Update/blob/master/Update-AutomationAzureModulesForAccount.ps1) runbook. Azure PowerShell modules that aren't mentioned in the `ModuleVersionOverrides` parameter are updated with the latest module versions on the PowerShell Gallery. If you pass nothing to the `ModuleVersionOverrides` parameter, all modules are updated with the latest module versions on the PowerShell Gallery. This behavior is the same for the **Update Azure Modules** button in the Azure portal. + ## Next steps * For details of using modules, see [Manage modules in Azure Automation](shared-resources/modules.md).
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md
These are known limitations with the sandbox. The recommended workaround is to d
All new Automation accounts have the latest version of the PowerShell Az module imported by default. The Az module replaces AzureRM and is the recommended module to use with Azure. **Default modules** in the new Automation account includes the existing 24 AzureRM modules and 60+ Az modules.
-There is a native option to update modules to the latest Az module by the user for Automation accounts. The operation will handle all the module dependencies at the backend thereby removing the hassles of updating the modules [manually](../automation-update-azure-modules.md#update-az-modules) or executing the runbook to [update Azure modules](../automation-update-azure-modules.md#obtain-a-runbook-to-use-for-updates).
+There is a native option to update modules to the latest Az module by the user for Automation accounts. The operation will handle all the module dependencies at the backend thereby removing the hassles of updating the modules [manually](../automation-update-azure-modules.md#update-az-modules) or executing the runbook to [update Azure modules](../automation-update-azure-modules.md#update-az-modules-through-runbook).
If the existing Automation account has only AzureRM modules, the [Update Az modules](../automation-update-azure-modules.md#update-az-modules) option will update the Automation account with the user selected version of the Az module.
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
For a list of Azure services that support availability zones by Azure region, se
## Highly available services
-Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can combine all three of these architecture approaches when you design your resiliency strategy.
+Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can combine all three of these approaches to architecture when you design your resiliency strategy.
- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources can be pinned to a specific zone. For example, virtual machines, managed disks, or standard IP addresses can be pinned to a specific zone, which allows for increased resiliency by having one or more instances of resources spread across zones. - **Zone-redundant services**: Resources are replicated or distributed across zones automatically. For example, zone-redundant services replicate the data across three zones so that a failure in one zone doesn't affect the high availability of the data.ΓÇ»
In the Product Catalog, always-available services are listed as "non-regional" s
| Virtual Machines:ΓÇ»[Ev3-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | Virtual Machines:ΓÇ»[F-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | Virtual Machines:ΓÇ»[FS-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
-| Virtual Machines:ΓÇ»[Shared Image Gallery](../virtual-machines/shared-image-galleries.md#make-your-images-highly-available) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| Virtual Machines:ΓÇ»[Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md#high-availability)| ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure SQL Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | ### ![An icon that signifies this service is non-regional.](media/icon-always-available.svg) Non-regional services (always-available services)
In the Product Catalog, always-available services are listed as "non-regional" s
| Azure Peering Service | ![An icon that signifies this service is always available.](media/icon-always-available.svg) | | Azure Performance Diagnostics | ![An icon that signifies this service is always available.](media/icon-always-available.svg) | | Azure Policy | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
-| Azure Portal | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
+| Azure portal | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
| Azure Resource Graph | ![An icon that signifies this service is always available.](media/icon-always-available.svg) | | Azure Stack Edge | ![An icon that signifies this service is always available.](media/icon-always-available.svg) | | Azure Traffic Manager | ![An icon that signifies this service is always available.](media/icon-always-available.svg) |
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors. This operation might take a while... ```
+### Helm timeout error
-### Helm issue
+```azurecli
+az connectedk8s connect -n AzureArcTest -g AzureArcTest
+```
+
+```output
+Unable to install helm release: Error: UPGRADE FAILED: timed out waiting for the condition
+```
+
+If you get this helm timeout error, troubleshoot as follows:
+
+ 1. Run the following command:
+
+ ```console
+ kubectl get pods -n azure-arc
+ ```
+ 2. Check whether the `clusterconnect-agent` or the `config-agent` pods are in a `CrashLoopBackOff` state, or whether not all of their containers are running:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s
+ clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s
+ clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
+ config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
+ ```
+ 3. If the certificate below isn't present, the system-assigned managed identity wasn't installed.
+
+ ```console
+ kubectl get secret -n azure-arc -o yaml | grep name:
+ ```
+
+ ```output
+ name: azure-identity-certificate
+ ```
+ This could be a transient issue. You can try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue persists, it could be a problem with your proxy settings. In that case, follow [these steps](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy.
+ 4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires permission to mount the host path.
+
+
+### Helm validation error
Helm `v3.3.0-rc.1` version has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by `connectedk8s` CLI extension) results in running of all hooks leading to the following error:
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: Plan and deploy Azure Arc-enabled servers description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 03/14/2022 Last updated : 04/27/2022
In this phase, system engineers or administrators enable the core features in th
| Design and deploy [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) | Evaluate [design and deployment considerations](../../azure-monitor/logs/design-logs-deployment.md) to determine if your organization should use an existing or implement another Log Analytics workspace to store collected log data from hybrid servers and machines.<sup>1</sup> | One day | | [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you will implement governance of hybrid servers and machines at the subscription or resource group scope with Azure Policy. | One day | | Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to control who has access to manage Azure Arc-enabled servers and ability to view their data from other Azure services and solutions. | One day |
-| Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> &#124; where TimeGenerated > ago(30d) <br> &#124; where ResourceType == "machines" and (ComputerEnvironment == "Non-Azure") <br> &#124; summarize by Computer, ResourceProvider, ResourceType, ComputerEnvironment | One hour |
+| Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> &#124; summarize arg_max(TimeGenerated, OSType, ResourceId, ComputerEnvironment) by Computer <br> &#124; where ComputerEnvironment == "Non-Azure" and isempty(ResourceId) <br> &#124; project Computer, OSType | One hour |
<sup>1</sup> When evaluating your Log Analytics workspace design, consider integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc.
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
If you already have the extension installed, it can be updated by running:
```az extension update --name ssh``` > [!NOTE]
-> The Azure CLI extension version must be greater than 1.0.1.
+> The Azure CLI extension version must be greater than 1.1.0.
### Create default connectivity endpoint > [!NOTE] > The following actions must be completed for each Arc-enabled server.
-Run the following commands:
- ```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{\"properties\": {\"type\": \"default\"}}'```
-
- ```az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
-
+Create the default endpoint in PowerShell:
+ ```powershell
+ az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{\"properties\": {\"type\": \"default\"}}'
+ ```
+Create the default endpoint in Bash:
+```bash
+az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{"properties": {"type": "default"}}'
+```
+Validate endpoint creation:
+```azurecli
+az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview
+```
### Enable functionality on your Arc-enabled server In order to use the SSH connect feature, you must enable connections on the hybrid agent.
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
A pre-upgrade validator is available to help identify potential issues when migr
1. In *Search for common problems or tools*, enter and select **Functions 4.x Pre-Upgrade Validator**
-To migrate an app from 3.x to 4.x, set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` with the following Azure CLI command:
+To migrate an app from 3.x to 4.x, set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` with the following Azure CLI or Azure PowerShell commands:
+
+# [Azure CLI](#tab/azure-cli)
```azurecli az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -n <APP_NAME> -g <RESOURCE_GROUP_NAME>
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4
az functionapp config set --net-framework-version v6.0 -n <APP_NAME> -g <RESOURCE_GROUP_NAME> ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -Force
+
+# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
+Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
+```
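To verify the new values, you can read the settings back. A minimal sketch using the same Az.Functions module (returns the app settings as a hashtable):

```azurepowershell
Get-AzFunctionAppSetting -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
```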
+++ ### Breaking changes between 3.x and 4.x The following are some changes to be aware of before upgrading a 3.x app to 4.x. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
azure-functions Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/ip-addresses.md
Use the `nslookup` utility from your local client computer:
nslookup <APP_NAME>.azurewebsites.net ```
+# [Azure PowerShell](#tab/azure-powershell)
+
+Use the `nslookup` utility from your local client computer:
+
+```powershell
+nslookup <APP_NAME>.azurewebsites.net
+```
+ ## <a name="find-outbound-ip-addresses"></a>Function app outbound IP addresses
To find the outbound IP addresses available to a function app:
```azurecli
az functionapp show --resource-group <GROUP_NAME> --name <APP_NAME> --query outboundIpAddresses --output tsv
az functionapp show --resource-group <GROUP_NAME> --name <APP_NAME> --query possibleOutboundIpAddresses --output tsv
```
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$functionApp = Get-AzFunctionApp -ResourceGroupName <GROUP_NAME> -Name <APP_NAME>
+$functionApp.OutboundIPAddress
+$functionApp.PossibleOutboundIPAddress
+```
+ The set of `outboundIpAddresses` is currently available to the function app. The set of `possibleOutboundIpAddresses` includes IP addresses that will be available only if the function app [scales to other pricing tiers](#outbound-ip-address-changes).
If you need to add the outbound IP addresses used by your function apps to an al
For example, the following JSON fragment is what the allowlist for Western Europe might look like:
-```
+```json
{ "name": "AzureCloud.westeurope", "id": "AzureCloud.westeurope",
To find out if your function app runs in an App Service Environment:
# [Azure CLI](#tab/azurecli) ```azurecli-interactive
-az webapp show --resource-group <group_name> --name <app_name> --query sku --output tsv
+az resource show --resource-group <GROUP_NAME> --name <APP_NAME> --resource-type Microsoft.Web/sites --query properties.sku --output tsv
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$functionApp = Get-AzResource -ResourceGroupName <GROUP_NAME> -ResourceName <APP_NAME> -ResourceType Microsoft.Web/sites
+$functionApp.Properties.sku
```
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Once your Azure Maps account is successfully created, retrieve the primary key t
## Download and update the Azure Maps demo
-1. Go to [interactiveSearch.html](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/AzureMapsCodeSamples/Tutorials/interactiveSearch.html). Copy the contents of the file.
+1. Go to [interactiveSearch.html](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/Samples/Tutorials/Interactive%20Search/Interactive%20Search%20Quickstart.html). Copy the contents of the file.
2. Save the contents of this file locally as **AzureMapDemo.html**. Open it in a text editor. 3. Add the **Primary Key** value you got in the preceding section. 1. Comment out all of the code in the `authOptions` function; this code is used for Azure Active Directory authentication. (A sketch of the resulting initialization follows below.)
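For reference, after these edits the map initialization in **AzureMapDemo.html** typically uses shared key authentication. The following is a hedged sketch of the Azure Maps Web SDK pattern (the demo file's exact options may differ; `<Your Primary Key>` is a placeholder):

```javascript
// Initialize the map with shared key (primary key) authentication.
var map = new atlas.Map('myMap', {
    authOptions: {
        authType: 'subscriptionKey',           // use the Azure Maps primary key
        subscriptionKey: '<Your Primary Key>'  // placeholder: paste the key from the previous section
    }
});
```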
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
The following sections describe common errors that appear in the connector statu
**Cause**: The ITSM application doesn't allow ITSM connections from the IP addresses of partner ITSM tools. **Resolution**: To allow ITSM connections from partner ITSM tools, we recommend allowlisting the whole public IP range of the Azure region where the Log Analytics workspace belongs ([details here](https://www.microsoft.com/download/details.aspx?id=56519)). For the EUS/WEU/EUS2/WUS2/US South Central regions, customers can allowlist the ActionGroup network tag only.
+## Authentication
+**Error**: "User Not Authenticated"
+
+**Cause**: There are two possible causes: either the token needs to be refreshed, or the integration user is missing rights.
+
+**Resolution**: If the integration worked for you in the past, the refresh token might have expired; sync ITSMC to generate a new refresh token, as explained in [How to manually fix sync problems](./itsmc-resync-servicenow.md). If the integration never worked, the integration user might be missing rights; check them [here](./itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
+
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
The following sections identify common symptoms, possible causes, and resolution
**Resolution**: * [Sync the connector](itsmc-resync-servicenow.md). * Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md)+
+### Configuration item is blank in incidents received from ServiceNow
+**Cause**: There can be several reasons for this symptom:
+* Only log alerts support configuration items; the alert might be of a different type.
+* The search results must include a **Computer** or **Resource** column for the configuration item to be populated.
+* The values in the configuration item field don't match an entry in the CMDB.
+
+**Resolution**:
+* Check whether it's a log alert; if not, configuration items aren't supported.
+* Check whether the search results include a **Computer** or **Resource** column; if not, add it to the query (see the query sketch after this list).
+* Check whether the values in the **Computer**/**Resource** columns are identical to the values in the CMDB; if not, add a new entry to the CMDB.
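For illustration, a log alert query that explicitly surfaces the **Computer** column might look like this minimal sketch (the `Heartbeat` table is only an example source; your alert query will differ):

```kusto
// Surface the Computer column so ITSMC can map it to a configuration item.
Heartbeat
| where TimeGenerated > ago(15m)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| project Computer, LastHeartbeat
```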
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
ms.contributor: charles.weininger Previously updated : 04/25/2022 Last updated : 04/26/2022
Profiler works with .NET applications deployed on the following Azure services.
If you've enabled Profiler but aren't seeing traces, check our [Troubleshooting guide](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
-## View Profiler data
+## How to generate load to view Profiler data
-For Profiler to upload traces, your application must be actively handling requests. To generate requests:
-- **If you're doing an experiment,** use [Application Insights performance testing](/vsts/load-test/app-service-web-app-performance-test).-- **If you've newly enabled Profiler,** simply run a short load test.
+For Profiler to upload traces, your application must be actively handling requests. You can trigger Profiler manually with a single click.
-While the load test is running, select the **Profile Now** button on the [**Profiler Settings** pane](profiler-settings.md). Once Profiler starts running, it profiles randomly about once per hour, for a duration of two minutes. If your application is handling a steady stream of requests, Profiler uploads traces every hour.
+Suppose you're running a web performance test. You'll need traces to help you understand how your web app runs under load. Controlling when traces are captured is crucial: you know when the load test will run, but a random sampling interval might miss it.
-After your application receives some traffic and Profiler has had time to upload the traces, you should be able to view traces within 5 to 10 minutes. To view traces:
+### Generate traffic to your web app by starting a web performance test
-1. Select **Take Actions** in the **Performance** pane.
-1. Select the **Profiler Traces** button.
+If you've newly enabled Profiler, you can run a short [load test](/vsts/load-test/app-service-web-app-performance-test). If your web app already has incoming traffic or if you just want to manually generate traffic, skip the load test and start a Profiler on-demand session.
- ![Application Insights Performance pane preview Profiler traces][performance-blade]
+### Start a Profiler on-demand session
+1. From the Application Insights overview page, select **Performance** from the left menu.
+1. On the **Performance** pane, select **Profiler** from the top menu for Profiler settings.
-1. Select a sample to display a code-level breakdown of time spent executing the request.
+ :::image type="content" source="./media/profiler-overview/profiler-button-inline.png" alt-text="Screenshot of the Profiler button from the Performance blade" lightbox="media/profiler-settings/profiler-button.png":::
- ![Application Insights trace explorer][trace-explorer]
-
-The trace explorer displays the following information:
-
-| Category | Description |
-| -- | -- |
-| **Show Hot Path** | Opens the biggest leaf node, or at least something close. In most cases, this node is near a performance bottleneck. |
-| **Label** | The name of the function or event. The tree displays a mix of code and events that occurred, such as SQL and HTTP events. The top event represents the overall request duration. |
-| **Elapsed** | The time interval between the start of the operation and the end of the operation. |
-| **When** | The time when the function or event was running in relation to other functions. |
+1. Once the Profiler settings page loads, select **Profile Now**.
+ :::image type="content" source="./media/profiler-settings/configure-blade-inline.png" alt-text="Profiler page features and settings" lightbox="media/profiler-settings/configure-blade.png":::
-### Other options for viewing profiler data
+### View traces
+1. After the Profiler sessions finish running, return to the **Performance** pane.
+1. Under **Drill into...**, select **Profiler traces** to view the traces.
-Besides viewing the profiles in the Azure portal, you can download the profiles and open them in other tools. There are 3 options for viewing the contents' profiles. The downloaded file is a .diagsession file and can be opened natively by Visual Studio. Use the profiling tools in Visual Studio to examine the details of the file.
+ :::image type="content" source="./media/profiler-overview/trace-explorer-inline.png" alt-text="Screenshot of trace explorer page" lightbox="media/profiler-overview/trace-explorer.png":::
-If you rename the file by adding `.zip` to the end of the file name, you can also open it in:
+The trace explorer displays the following information:
-- Windows Performance analyzer
- - [Download](https://www.microsoft.com/p/windows-performance-analyzer/9n0w1b2bxgnz)
- - [Documentation](https://docs.microsoft.com/windows-hardware/test/wpt/windows-performance-analyzer)
-- Perfview
- - [Download](https://github.com/microsoft/perfview/blob/main/documentation/Downloading.md)
- - [How-to videos](https://docs.microsoft.com/shows/PerfView-Tutorial/)
+| Filter | Description |
+| | -- |
+| Profile tree v. Flame graph | View the traces as either a tree or in graph form. |
+| Hot path | Select to open the biggest leaf node. In most cases, this node is near a performance bottleneck. |
+| Framework dependencies | Select to view each of the traced framework dependencies associated with the traces. |
+| Hide events | Type in strings to hide from the trace view. Select *Suggested events* for suggestions. |
+| Event | Event or function name. The tree displays a mix of code and events that occurred, such as SQL and HTTP events. The top event represents the overall request duration. |
+| Module | The module where the traced event or function occurred. |
+| Thread time | The time interval between the start of the operation and the end of the operation. |
+| Timeline | The time when the function or event was running in relation to other functions. |
## How to read performance data
-The Microsoft service Profiler uses a combination of sampling methods and instrumentation to analyze the performance of your application. During detailed collection the service Profiler:
-- Samples the instruction pointer of each machine CPU every millisecond. Each sample:
- - Captures the complete call stack of the thread that's currently executing (the result of sampling and instrumentation).
- - Includes code from Microsoft .NET Framework and from other frameworks that you reference.
- - Gives detailed information about the thread actions, at both a high level and a low level of abstraction.
-- Collects other events to track activity correlation and causality, including:
- - Context switching events
- - Task Parallel Library (TPL) events
- - Thread pool events
+The Microsoft service profiler uses a combination of sampling methods and instrumentation to analyze the performance of your application. When detailed collection is in progress, the service profiler samples the instruction pointer of each machine CPU every millisecond. Each sample captures the complete call stack of the thread that's currently executing. It gives detailed information about what that thread was doing, at both a high level and a low level of abstraction. The service profiler also collects other events to track activity correlation and causality, including context switching events, Task Parallel Library (TPL) events, and thread pool events.
+
+The call stack displayed in the timeline view is the result of the sampling and instrumentation. Because each sample captures the complete call stack of the thread, it includes code from Microsoft .NET Framework and other frameworks that you reference.
### <a id="jitnewobj"></a>Object allocation (clr!JIT\_New or clr!JIT\_Newarr1)
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-settings.md
Title: Use the Azure Application Insights Profiler settings pane | Microsoft Docs description: See Profiler status and start profiling sessions++
+ms.contributor: Charles.Weininger
Previously updated : 12/08/2021 Last updated : 04/26/2022+ # Configure Application Insights Profiler
-## Updated Profiler Agent
-The trigger features only work with version 2.6 or newer of the profiler agent. If you are running an Azure App Service, your agent will be updated automatically. You can see what version of the agent you are running if you go to the Kudu URL for your website and append /DiagnosticServices to the end of it, like this: `https://yourwebsite.scm.azurewebsites.net/diagnosticservices`. The Application Insights Profiler Webjob should be version 2.6 or newer. You can force an upgrade by restarting your web app.
+To open the Azure Application Insights Profiler settings pane, select **Performance** from the left menu within your Application Insights page.
-If you are running the profiler on a VM or Cloud Service, you need to have Windows Azure Diagnostics (WAD) extension version 16.0.4 or newer installed. You can check the version of WAD by logging onto your VM and looking this directory: C:\Packages\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.16.0.4. The directory name is the version of WAD that is installed. The Azure VM agent will update WAD automatically when new versions are available.
-## Profiler settings page
+View profiler traces across your Azure resources via two methods:
-To open the Azure Application Insights Profiler settings pane, go to the Application Insights Performance pane, and then select the **Configure Profiler** button.
+**Profiler button**
-![Link to open Profiler settings page][configure-profiler-entry]
+Select the **Profiler** button from the top menu.
-That opens a page that looks like this:
-![Profiler settings page][configure-profiler-page]
+**By operation**
-The **Configure Application Insights Profiler** page has these features:
+1. Select an operation from the **Operation name** list ("Overall" is highlighted by default).
+1. Select the **Profiler traces** button.
+
+ :::image type="content" source="./media/profiler-settings/operation-entry-inline.png" alt-text="Select operation and Profiler traces to view all profiler traces" lightbox="media/profiler-settings/operation-entry.png":::
+
+1. Select one of the requests from the list to the left.
+1. Select **Configure Profiler**.
+
+ :::image type="content" source="./media/profiler-settings/configure-profiler-inline.png" alt-text="Overall selection and clicking Profiler traces to view all profiler traces" lightbox="media/profiler-settings/configure-profiler.png":::
+
+Once within Profiler, you can configure settings and view traces. The **Application Insights Profiler** page has these features:
+ | Feature | Description | |-|-| Profile Now | Starts profiling sessions for all apps that are linked to this instance of Application Insights. Triggers | Allows you to configure triggers that cause the profiler to run.
-Recent profiling sessions | Displays information about past profiling sessions.
+Recent profiling sessions | Displays information about past profiling sessions, which you can sort using the filters at the top of the page.
## Profile Now
-This option allows you to start a profiling session on demand. When you click this link, all profiler agents that are sending data to this Application Insights instance will start to capture a profile. After 5 to 10 minutes, the profile session will show in the list below.
+Select **Profile Now** to start a profiling session on demand. When you select this button, all profiler agents that are sending data to this Application Insights instance start to capture a profile. After 5 to 10 minutes, the profile session appears in the list below.
-For a user to manually trigger a profiler session, they require at minimum "write" access on their role for the Application Insights component. In most cases, you get this access automatically and no additional work is needed. If you're having issues, the subscription scope role to add would be the "Application Insights Component Contributor" role. [See more about role access control with Azure Monitoring](./resources-roles-access-control.md).
+To manually trigger a profiler session, you'll need, at minimum, *write* access on your role for the Application Insights component. In most cases, you get write access automatically. If you're having issues, you'll need the "Application Insights Component Contributor" subscription scope role added. [See more about role access control with Azure Monitoring](./resources-roles-access-control.md).
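If that role needs to be granted, a role assignment can be created at the appropriate scope. A hedged Azure CLI sketch (the assignee and scope values are placeholders):

```azurecli
az role assignment create \
    --assignee "<user-or-service-principal-id>" \
    --role "Application Insights Component Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```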
## Trigger Settings
-![Trigger Settings Flyout][trigger-settings-flyout]
-Clicking the Triggers button on the menu bar opens the trigger settings box. You can set up trigger to start profiling when the percentage of CPU or Memory use hits the level you set.
+Select the Triggers button on the menu bar to open the CPU, Memory, and Sampling trigger settings pane.
+
+**CPU or Memory triggers**
+
+You can set up a trigger to start profiling when the percentage of CPU or Memory use hits the level you set.
+ | Setting | Description | |-|-|
Memory threshold | When this percentage of memory is in use, the profiler will b
Duration | Sets the length of time the profiler will run when triggered. Cooldown | Sets the length of time the profiler will wait before checking for the memory or CPU usage again after it's triggered.
+**Sampling trigger**
+
+Unlike CPU or memory triggers, the Sampling trigger isn't triggered by an event. Instead, it's triggered randomly to get a truly random sample of your application's performance. You can:
+- Turn this trigger off to disable random sampling.
+- Set how often profiling will occur and the duration of the profiling session.
++
+| Setting | Description |
+|-|-|
+On / Off Button | On: profiler can be started by this trigger; Off: profiler won't be started by this trigger.
+Sample rate | The rate at which profiling sessions can occur. </br> <ul><li>The **Normal** setting collects data 5% of the time, which is about 2 minutes per hour.</li><li>The **High** setting profiles 50% of the time.</li><li>The **Maximum** setting profiles 75% of the time.</li></ul> </br> Normal is recommended for production environments.
+Duration | Sets the length of time the profiler will run when triggered.
+ ## Recent Profiling Sessions
-This section of the page shows information about recent profiling sessions. A profiling session represents the period of time when the profiler agent was taking a profile on one of the machines hosting your application. You can open the profiles from a session by clicking on one of the rows. For each session, we show:
+This section of the Profiler page displays recent profiling session information. A profiling session represents the time taken by the profiler agent while profiling one of the machines hosting your application. Open the profiles from a session by clicking on one of the rows. For each session, we show:
| Setting | Description | |-|-|
Traces | Number of traces that were attached to individual requests.
CPU % | Percentage of CPU that was being used while the profiler was running. Memory % | Percentage of memory that was being used while the profiler was running.
-## <a id="profileondemand"></a> Use web performance tests to generate traffic to your application
-
-You can trigger Profiler manually with a single click. Suppose you're running a web performance test. You'll need traces to help you understand how your web app is running under load. Having control over when traces are captured is crucial, because you know when the load test will be running. But the random sampling interval might miss it.
-
-The next sections illustrate how this scenario works:
-
-### Step 1: Generate traffic to your web app by starting a web performance test
-
-If your web app already has incoming traffic or if you just want to manually generate traffic, skip this section and continue to Step 2.
-
-1. In the Application Insights portal, select **Configure** > **Performance Testing**.
-
-1. To start a new performance test, select the **New** button.
-
- ![create new performance test][create-performance-test]
-
-1. In the **New performance test** pane, configure the test target URL. Accept all default settings, and then select **Run test** to start running the load test.
-
- ![Configure load test][configure-performance-test]
-
- The new test is queued first, followed by a status of *in progress*.
-
- ![Load test is submitted and queued][load-test-queued]
-
- ![Load test is running in progress][load-test-in-progress]
-
-### Step 2: Start a Profiler on-demand session
-
-1. When the load test is running, start Profiler to capture traces on the web app while it's receiving load.
-
-1. Go to the **Configure Profiler** pane.
--
-### Step 3: View traces
-
-After Profiler finishes running, follow the instructions on notification to go to Performance pane and view traces.
- ## Next steps [Enable Profiler and view traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) [profiler-on-demand]: ./media/profiler-settings/Profiler-on-demand.png
-[configure-profiler-entry]: ./media/profiler-settings/configure-profiler-entry.png
+[performance-blade]: ./media/profiler-settings/performance-blade.png
[configure-profiler-page]: ./media/profiler-settings/configureBlade.png [trigger-settings-flyout]: ./media/profiler-settings/CPUTrigger.png [create-performance-test]: ./media/profiler-settings/new-performance-test.png
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Common scenarios for dashboards include the following:
See [Create and share dashboards of Log Analytics data](visualize/tutorial-logs-dashboards.md) for details on creating a dashboard that includes data from Azure Monitor Logs. See [Create custom KPI dashboards using Azure Application Insights](app/tutorial-app-dashboards.md) for details on creating a dashboard that includes data from Application Insights. --
-## Power BI
-[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset and then take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices.
-
-![Screenshot that shows an example Power B I report for I T operations.](media/visualizations/power-bi.png)
-
-Common scenarios for Power BI include the following:
--- Rich visualizations.-- Extensive interactivity, including zoom-in and cross-filtering.-- Ease of sharing throughout your organization.-- Integration with other data from multiple data sources.-- Better performance with results cached in a cube.--- ## Grafana [Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for detecting, isolating, and triaging operational incidents, combining visualizations of Azure and non-Azure data sources including on-premises, third party tools, and data stores in other clouds. Grafana has popular plugins and dashboard templates for APM tools such as Dynatrace, New Relic, and App Dynamics which enables users to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plugins for multi-cloud monitoring in a single pane of glass.
-You can add the [Azure Monitor data source plug-in for Grafana](visualize/grafana-plugin.md) to your Azure subscription to have it visualize your Azure metric data.
+All versions of Grafana include the [Azure Monitor datasource plug-in](visualize/grafana-plugin.md) to visualize your Azure Monitor metrics and logs.
+
+Additionally, see [Azure Managed Grafana](../managed-grafan) to get started.
![Screenshot that shows Grafana visualizations.](media/visualizations/grafana.png)
Common scenarios for Grafana include the following:
- Create a dashboard from a community created and supported template. - Create a vendor agnostic BCDR scenario that runs on any cloud provider or on-premises.
+## Power BI
+[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset and then take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices.
+
+![Screenshot that shows an example Power B I report for I T operations.](media/visualizations/power-bi.png)
+
+Common scenarios for Power BI include the following:
+
+- Rich visualizations.
+- Extensive interactivity, including zoom-in and cross-filtering.
+- Ease of sharing throughout your organization.
+- Integration with other data from multiple data sources.
+- Better performance with results cached in a cube.
+ ## Azure Monitor partners Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions may have an additional cost.
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
In this tutorial, you learn:
### Azure Monitor data collection As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather additional monitoring data and enable additional features. The Azure Monitor data platform is made up of Metrics and Logs. Each collects different kinds of data and enables different Azure Monitor features. -- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time series database. The metric database is automatically created for each Azure subscription. Use [metrics explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Logs.
+- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time series database. The metric database is automatically created for each Azure subscription. Use [metrics explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs. ### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
A dataflow is a type of "cloud ETL" designed to help you collect and prep your d
## Incremental refresh
-Both Power BI datasets and Power BI dataflows have an incremental refresh option. Power BI dataflows and Power BI datasets support this feature, but you need Power BI Premium to use it.
+Both Power BI datasets and Power BI dataflows support incremental refresh. To use incremental refresh on dataflows, you need Power BI Premium.
Incremental refresh runs small queries and updates smaller amounts of data per run instead of ingesting all of the data again and again when you run the query. You have the option to save large amounts of data, but add a new increment of data every time the query is run. This behavior is ideal for longer running reports.
Additional information can be found in [Integrate Log Analytics and Excel](log-e
## Next steps
-Get started with [Log Analytics queries](./log-query-overview.md).
+Get started with [Log Analytics queries](./log-query-overview.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 04/20/2022 Last updated : 04/27/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* Australia Central * Australia Central 2
+* Australia Southeast
* East US 2 * France Central * Germany West Central
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 04/14/2022 Last updated : 03/02/2022 # Resource limits for Azure NetApp Files
Size: 4096 Blocks: 8 IO Block: 65536 directory
## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 21,251,126 files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
-The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
| Volume size (quota) | Automatic readjustment of the `maxfiles` limit | |-|-|
-| <= 1 TiB | 21,251,126 |
-| > 1 TiB but <= 2 TiB | 42,502,252 |
-| > 2 TiB but <= 3 TiB | 63,753,378 |
-| > 3 TiB but <= 4 TiB | 85,004,504 |
-| > 4 TiB | 106,255,630 |
+| <= 1 TiB | 20 million |
+| > 1 TiB but <= 2 TiB | 40 million |
+| > 2 TiB but <= 3 TiB | 60 million |
+| > 3 TiB but <= 4 TiB | 80 million |
+| > 4 TiB | 100 million |
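As an illustration only, the readjustment rule in the table above amounts to roughly 20 million files per whole or partial TiB, capped at 100 million until a support request raises it. A hypothetical helper (not part of the service):

```python
import math

def maxfiles_limit(quota_tib: float) -> int:
    """Approximate default maxfiles limit for a volume quota (illustrative only)."""
    # 20 million files per whole or partial TiB, capped at 100 million (> 4 TiB).
    return min(math.ceil(quota_tib), 5) * 20_000_000

print(maxfiles_limit(1))    # 20000000
print(maxfiles_limit(2.5))  # 60000000
print(maxfiles_limit(6))    # 100000000
```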
-If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
-You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at least 20 TiB.
+You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
## Request limit increase
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
For more information about the Bicep syntax, see [Bicep structure](./file.md).
You can view a representation of the resources in your file.
-From the upper left corner, select the visualizer button to open the Bicep Visualizer.
+From the upper right corner, select the visualizer button to open the Bicep Visualizer.
:::image type="content" source="./media/quickstart-create-bicep-use-visual-studio-code/bicep-visualizer.png" alt-text="Bicep Visualizer":::
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/13/2022 Last updated : 04/26/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | deployments | resource group | 1-64 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
-> | resourcegroups | subscription | 1-90 | Alphanumerics, underscores, parentheses, hyphens, periods, and unicode characters that match the [regex documentation](/rest/api/resources/resourcegroups/createorupdate).<br><br>Can't end with period. |
+> | resourcegroups | subscription | 1-90 | Letters or digits as defined by the [Char.IsLetterOrDigit](/dotnet/api/system.char.isletterordigit) function.<br><br>Valid characters are members of the following categories in [UnicodeCategory](/dotnet/api/system.globalization.unicodecategory):<br>**UppercaseLetter**,<br>**LowercaseLetter**,<br>**TitlecaseLetter**,<br>**ModifierLetter**,<br>**OtherLetter**,<br>**DecimalDigitNumber**.<br><br>Can't end with period. |
> | tagNames | resource | 1-512 | Can't use:<br>`<>%&\?/` or control characters | > | tagNames / tagValues | tag name | 1-256 | All characters. | > | templateSpecs | resource group | 1-90 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
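As an illustrative check only (not the service's actual validator), the letter-or-digit rule for resource group names corresponds to these Unicode general categories. A minimal Python sketch; note it covers only the letter-or-digit categories listed above, not any additional characters the broader naming rules permit:

```python
import unicodedata

# Unicode general categories matching the UnicodeCategory values above:
# UppercaseLetter (Lu), LowercaseLetter (Ll), TitlecaseLetter (Lt),
# ModifierLetter (Lm), OtherLetter (Lo), DecimalDigitNumber (Nd).
LETTER_OR_DIGIT = {"Lu", "Ll", "Lt", "Lm", "Lo", "Nd"}

def is_letter_or_digit(ch: str) -> bool:
    """Approximates .NET Char.IsLetterOrDigit via Unicode categories."""
    return unicodedata.category(ch) in LETTER_OR_DIGIT

print(is_letter_or_digit("a"), is_letter_or_digit("9"), is_letter_or_digit("?"))  # True True False
```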
certification Program Requirements Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-pnp.md
This document outlines the device specific capabilities that will be represented
## Program Purpose
-IoT Plug and Play Preview enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device model that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of elements: Telemetry, Properties and Commands.
+IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device model that advertises the device's capabilities to an IoT Plug and Play-enabled application.
The promises of IoT Plug and Play certification are:
-1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)
-1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot-develop/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central
-1. Validated product truth on certified devices
-1. Meets all requirements of [Azure Certified Device](./program-requirements-azure-certified-device.md)
+1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)
+1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot-develop/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central
+1. Product truth validated through testing telemetry from end point to cloud using DTDL
+
+> [!Note]
+> Upon completion of testing and validation, we may request that the product be evaluated by Microsoft.
## Requirements
The promises of IoT Plug and Play certification are:
| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance | | **Resources** | [Certification steps](./overview.md) (has all the additional resources) |
+**[Required] DPS: The purpose of test is to check the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
+
+| **Name** | AzureCertified.DPS |
+| -- | |
+| **Target Availability** | New |
+| **Applies To** | Any device |
+| **OS** | Agnostic |
+| **Validation Type** | Automated |
+| **Validation** | Device supports easy input of target DPS ID scope ownership. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
+| **Resources** | [Device provisioning service overview](../iot-dps/about-iot-dps.md) |
**[Required] DTDL v2: The purpose of test to ensure defined device models and interfaces are compliant with the Digital Twins Definition Language v2.**
The promises of IoT Plug and Play certification are:
| **Validation** | All device models are required to be published in public repository. Device models are resolved via models available in public repository **1.** User must manually publish the models to the public repository before submitting for the certification. **2.** Note that once the models are published, it is immutable. We strongly recommend publishing only when the models and embedded device code are finalized.*1 *1 User must contact Microsoft support to revoke the models once published to the model repository **3.** [Portal workflow](https://certify.azure.com) checks the existence of the models in the public repository when the device is connected to the certification service | | **Resources** | [Model repository](../iot-develop/overview-iot-plug-and-play.md) |
-**[Required] Physical device validation using the GSG**
-
-| **Name** | IoTPnP.Physicaldevice |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Manual |
-| **Validation** | Partners must engage with Microsoft contact ([iotcert@microsoft.com](mailto:iotcert@microsoft.com)) to make arrangements to perform additional validations on physical device. Due to COVID-19 situation, we are exploring various ways to perform physical device validation without shipping the device to Microsoft. |
-| **Resources** | Details are available later |
-| **Azure Recommended** | N/A |
**[If implemented] Device info Interface: The purpose of test is to validate device info interface is implemented properly in the device code**
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
vm-linux Previously updated : 07/15/2020 Last updated : 04/27/2022
The Azure Relay instance used for Cloud Shell can be configured to control which
## Storage requirements As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual network. Each administrator needs a file share to store their files. The storage account needs to be accessible from the virtual network that is used by Cloud Shell.
+> [!NOTE]
+> Secondary storage regions are currently not supported in Cloud Shell VNET scenarios.
+ ## Virtual network deployment limitations * Due to the additional networking resources involved, starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Captioning can accompany real time or pre-recorded speech. Whether you're showin
For real time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
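For instance, a microphone-based recognizer can surface interim results that are suitable for live captions. A minimal sketch with the Speech SDK for Python (the key and region values are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Recognizing events fire with partial hypotheses, useful for updating captions live.
recognizer.recognizing.connect(lambda evt: print("Partial:", evt.result.text))
recognizer.recognized.connect(lambda evt: print("Final:", evt.result.text))

recognizer.start_continuous_recognition()
input("Press Enter to stop...\n")
recognizer.stop_continuous_recognition()
```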
-For captioning of a prerecoding, send file input to the Speech service. For more information, see [How to use compressed audio files](how-to-use-codec-compressed-audio-input-streams.md).
+For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md).
## Caption and speech synchronization
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Audio files can have silence at the beginning and end of the recording. If possi
| Maximum length per audio | 2 hours (testing) / 60 s (training) | | Sample format | PCM, 16-bit | | Archive format | .zip |
-| Maximum zip size | 2 GB |
+| Maximum zip size | 2 GB or 10,000 files |
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
Custom Speech requires audio files with these properties:
| Maximum length per audio | 2 hours | | Sample format | PCM, 16-bit | | Archive format | .zip |
-| Maximum archive size | 2 GB |
+| Maximum archive size | 2 GB or 10,000 files |
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
Title: How to use compressed audio files with the Speech SDK - Speech service
+ Title: How to use compressed input audio - Speech service
-description: Learn how to use compressed audio files to the Speech service with the Speech SDK.
+description: Learn how to use compressed input audio with the Speech SDK and CLI.
Previously updated : 01/13/2022 Last updated : 04/25/2022 ms.devlang: cpp, csharp, golang, java, python
-zone_pivot_groups: programming-languages-set-twenty-eight
+zone_pivot_groups: programming-languages-speech-services
-# How to use compressed audio files
-
-The Speech SDK and Speech CLI use GStreamer to support different kinds of input audio formats. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
--
-## Install GStreamer
-
-Choose a platform for installation instructions.
-
-Platform | Languages | Supported GStreamer version
-| : | : | ::
-Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
-Linux | C++, C#, Java, Python, Go | [Supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
-Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
-
-### [Android](#tab/android)
-
-For more information about building libgstreamer_android.so, see [GStreamer configuration by programming language](#gstreamer-configuration).
-
-For more information, see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
-
-### [Linux](#tab/linux)
-
-For more information, see [Linux installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c).
-
-```sh
-sudo apt install libgstreamer1.0-0 \
-gstreamer1.0-plugins-base \
-gstreamer1.0-plugins-good \
-gstreamer1.0-plugins-bad \
-gstreamer1.0-plugins-ugly
-```
-### [Windows](#tab/windows)
-
-Make sure that packages of the same platform (x64 or x86) are installed. For example, if you installed the x64 package for Python, you need to install the x64 GStreamer package. The following instructions are for the x64 packages.
-
-1. Create the folder c:\gstreamer.
-1. Download the [installer](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi).
-1. Copy the installer to c:\gstreamer.
-1. Open PowerShell as an administrator.
-1. Run the following command in PowerShell:
-
- ```powershell
- cd c:\gstreamer
- msiexec /passive INSTALLLEVEL=1000 INSTALLDIR=C:\gstreamer /i gstreamer-1.0-msvc-x86_64-1.18.3.msi
- ```
-
-1. Add the system variables GST_PLUGIN_PATH with the value C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0.
-1. Add the system variables GSTREAMER_ROOT_X86_64 with the value C:\gstreamer\1.0\msvc_x86_64.
-1. Add another entry in the path variable as C:\gstreamer\1.0\msvc_x86_64\bin.
-1. Reboot the machine.
-
-For more information about GStreamer, see [Windows installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c).
-
-***
-
-## GStreamer configuration
-
-> [!NOTE]
-> GStreamer configuration requirements vary by programming language. For more information, choose your programming language at the top of this page. The contents of this section will be updated.
+# How to use compressed input audio
::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-cpp" ::: zone-end ::: zone-end ::: zone-end ::: zone-end
-## Example
- ::: zone-end ::: zone-end ::: zone-end ::: zone-end ::: zone-end ## Next steps
-> [!div class="nextstepaction"]
-> [Learn how to recognize speech](./get-started-speech-to-text.md)
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
For the question and answer pair "Fix problems with Surface Pen", we compare
As you can see, when `troubleshoot` was not added as a synonym, we got a low confidence response to the query "How to troubleshoot your surface pen". However, after we added `troubleshoot` as a synonym to "fix problems", we received the correct response to the query with a higher confidence score. Once these synonyms were added, the relevance of results improved, thereby improving the user experience.
-> [!NOTE]
+> [!IMPORTANT]
> Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat). > For instance, if you add the abbreviation **IT** for Information technology, the system might not be able to recognize Information Technology because **IT** is a stop word and is filtered when a query is processed. > Synonyms do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
+## Notes
+* Synonyms can be added in any order. The ordering is not considered in any computational logic.
+* Special characters are not allowed in synonyms. Hyphenated words like "COVID-19" are treated the same as "COVID 19", and a space can be used as a term separator.
+* Overlapping synonym words between two sets of alterations may produce unexpected results; using overlapping sets is not recommended (see the payload sketch below).
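For reference, synonyms are expressed as sets of alterations. A hedged sketch of the payload shape (the exact REST route and envelope vary by API version, so treat this as illustrative only):

```json
{
  "value": [
    {
      "alterations": [
        "fix problems",
        "troubleshoot",
        "trouble shoot"
      ]
    }
  ]
}
```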
+ ## Next steps > [!div class="nextstepaction"]
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)] > [!NOTE]
-> Call Recording is available for Communication Services resources created in the US, UK, Europe, Asia and Australia regions. Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
+> Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
-Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice.
+Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice. Call Recording supports all ACS data regions.
![Call recording concept diagram](../media/call-recording-concept.png)
container-apps Authentication Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with Azure Active Directory
+description: Learn to use the built-in Azure Active Directory authentication provider in Azure Container Apps.
++++ Last updated : 04/20/2022+++
+# Enable authentication and authorization in Azure Container Apps Preview with Azure Active Directory
+
+This article shows you how to configure authentication for Azure Container Apps so that your app signs in users with the [Microsoft identity platform](../active-directory/develop/v2-overview.md) (Azure AD) as the authentication provider.
+
+The Container Apps Authentication feature can automatically create an app registration with the Microsoft identity platform. You can also use a registration that you or a directory admin creates separately.
+
+- [Create a new app registration automatically](#aad-express)
+- [Use an existing registration created separately](#aad-advanced)
+
+## <a name="aad-express"> </a> Option 1: Create a new app registration automatically
+
+This option is designed to make enabling authentication simple and requires just a few steps.
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration or the supported account types.
+
+ A client secret will be created and stored as a [secret](manage-secrets.md) in the container app.
+
+1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+ These options determine how your application responds to unauthenticated requests, and the default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. (Optional) Select **Next: Permissions** and add any scopes needed by the application. These will be added to the app registration, but you can also change them later.
+1. Select **Add**.
+
+You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## <a name="aad-advanced"> </a>Option 2: Use an existing registration created separately
+
+You can also manually register your application for the Microsoft identity platform, customizing the registration and configuring Container Apps Authentication with the registration details. This approach is useful if you want to use an app registration from a different Azure AD tenant than the one in which your application is defined.
+
+### <a name="aad-register"> </a>Create an app registration in Azure AD for your container app
+
+First, you'll create your app registration. As you do so, collect the following information that you'll need later when you configure the authentication in the container app:
+
+- Client ID
+- Tenant ID
+- Client secret (optional)
+- Application ID URI
+
+To register the app, perform the following steps:
+
+1. Sign in to the [Azure portal], search for and select **Container Apps**, and then select your app. Note your app's **URL**. You'll use it to configure your Azure Active Directory app registration.
+1. From the portal menu, select **Azure Active Directory**, then go to the **App registrations** tab and select **New registration**.
+1. In the **Register an application** page, enter a **Name** for your app registration.
+1. In **Redirect URI**, select **Web** and type `<app-url>/.auth/login/aad/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/aad/callback`.
+1. Select **Register**.
+1. After the app registration is created, copy the **Application (client) ID** and the **Directory (tenant) ID** for later.
+1. Select **Authentication**. Under **Implicit grant and hybrid flows**, enable **ID tokens** to allow OpenID Connect user sign-ins from Container Apps. Select **Save**.
+1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your container app and select **Save**.
+1. Select **Expose an API**, and select **Set** next to *Application ID URI*. This value uniquely identifies the application when it's used as a resource, allowing tokens to be requested that grant access. The value is also used as a prefix for scopes you create.
+
+ For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#appid-uri-configuration).
+
+ The value is automatically saved.
+
+1. Select **Add a scope**.
+ 1. In **Add a scope**, the **Application ID URI** is the value you set in a previous step. Select **Save and continue**.
+ 1. In **Scope name**, enter *user_impersonation*.
+ 1. In the text boxes, enter the consent scope name and description you want users to see on the consent page. For example, enter *Access &lt;application-name&gt;*.
+ 1. Select **Add scope**.
+1. (Optional) To create a client secret, select **Certificates & secrets** > **Client secrets** > **New client secret**. Enter a description and expiration and select **Add**. Copy the client secret value shown in the page. It won't be shown again.
+1. (Optional) To add multiple **Reply URLs**, select **Authentication**.
+
+### <a name="aad-secrets"> </a>Enable Azure Active Directory in your container app
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **Microsoft** in the identity provider dropdown.
+1. For **App registration type**, you can choose to **Pick an existing app registration in this directory**, which automatically gathers the necessary app information. If your registration is from another tenant, or you don't have permission to view the registration object, choose **Provide the details of an existing app registration**. For this option, you'll need to fill in the following configuration details:
+
+ |Field|Description|
+ |-|-|
+ |Application (client) ID| Use the **Application (client) ID** of the app registration. |
+    |Client Secret| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and Container Apps returns access and refresh tokens. When the client secret isn't set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.|
+    |Issuer Url| Use `<authentication-endpoint>/<TENANT-ID>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (for example, "https://login.microsoftonline.com" for global Azure), also replacing *\<TENANT-ID>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, and to download the appropriate metadata, such as the token signing keys and the token issuer claim value. For applications that use Azure AD v1, omit `/v2.0` in the URL.|
+ |Allowed Token Audiences| The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If this value refers to a cloud or server app and you want to accept authentication tokens from a client container app (the authentication token can be retrieved in the `X-MS-TOKEN-AAD-ID-TOKEN` header), add the **Application (client) ID** of the client app here. |
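+
+    For example, with the global Azure endpoint and a placeholder tenant ID (both values below are illustrative, not real identifiers), the issuer URL takes this shape:
+
+    ```console
+    https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0
+    ```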
+
+    The client secret will be stored as a [secret](manage-secrets.md) in your container app.
+
+1. If this is the first identity provider configured for the application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+    These options determine how your application responds to unauthenticated requests, and the default selections will redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. Select **Add**.
+
+You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## Configure client apps to access your container app
+
+In the prior section, you registered your container app to authenticate users. This section explains how to register native client or daemon apps so that they can request access to APIs exposed by your container app on behalf of users or themselves. Completing the steps in this section isn't required if you only wish to authenticate users.
+
+### Native client application
+
+You can register native clients to request access to your container app's APIs on behalf of a signed-in user.
+
+1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**.
+1. In the **Register an application** page, enter a **Name** for your app registration.
+1. In **Redirect URI**, select **Public client (mobile & desktop)** and type the URL `<app-url>/.auth/login/aad/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/aad/callback`.
+
+ > [!NOTE]
+ > For a Microsoft Store application, use the [package SID](/previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library#package-sid) as the URI instead.
+1. Select **Create**.
+1. After the app registration is created, copy the value of **Application (client) ID**.
+1. Select **API permissions** > **Add a permission** > **My APIs**.
+1. Select the app registration you created earlier for your container app. If you don't see the app registration, make sure that you've added the **user_impersonation** scope in [Create an app registration in Azure AD for your container app](#aad-register).
+1. Under **Delegated permissions**, select **user_impersonation**, and then select **Add permissions**.
+
+You've now configured a native client application that can request access to your container app on behalf of a user.
+
+### Daemon client application (service-to-service calls)
+
+Your application can acquire a token to call a Web API hosted in your container app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged-in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
+
+1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**.
+1. In the **Register an application** page, enter a **Name** for your daemon app registration.
+1. For a daemon application, you don't need a Redirect URI, so you can keep that field empty.
+1. Select **Create**.
+1. After the app registration is created, copy the value of **Application (client) ID**.
+1. Select **Certificates & secrets** > **New client secret** > **Add**. Copy the client secret value shown in the page. It won't be shown again.
+
+You can now [request an access token using the client ID and client secret](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) by setting the `resource` parameter to the **Application ID URI** of the target app. The resulting access token can then be presented to the target app using the standard [OAuth 2.0 Authorization header](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#use-the-access-token-to-access-the-secured-resource), and Container Apps Authentication / Authorization will validate and use the token as usual, indicating that the caller (an application in this case, not a user) is authenticated.
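+
+As a minimal sketch, such a client credentials token request against the Azure AD v1 endpoint looks like the following. All values are placeholders, and the form parameters are shown on separate lines only for readability:
+
+```console
+POST https://login.microsoftonline.com/<tenant-id>/oauth2/token HTTP/1.1
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials
+&client_id=<daemon-client-id>
+&client_secret=<daemon-client-secret>
+&resource=api://<target-application-client-id>
+```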
+
+This process allows _any_ client application in your Azure AD tenant to request an access token and authenticate to the target app. If you also want to enforce _authorization_ to allow only certain client applications, you must adjust the configuration.
+
+1. [Define an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) in the manifest of the app registration representing the container app you want to protect.
+1. On the app registration representing the client that needs to be authorized, select **API permissions** > **Add a permission** > **My APIs**.
+1. Select the app registration you created earlier. If you don't see the app registration, make sure that you've [added an App Role](../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md).
+1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**.
+1. Make sure to select **Grant admin consent** to authorize the client application to request the permission.
+1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
+1. Within the target container app's code, you can now validate that the expected roles are present in the token. The validation steps aren't performed by the Container Apps auth layer. For more information, see [Access user claims](authentication.md#access-user-claims-in-application-code).
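+
+    As an illustration, the decoded payload of such an access token carries a `roles` claim listing the authorized App Roles. Every value in this sketch is a placeholder, and the role name is hypothetical:
+
+    ```json
+    {
+      "aud": "api://<target-application-client-id>",
+      "iss": "https://sts.windows.net/<tenant-id>/",
+      "appid": "<daemon-client-id>",
+      "roles": [ "MyAppRole" ]
+    }
+    ```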
+
+You've now configured a daemon client application that can access your container app using its own identity.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
+
+<!-- URLs. -->
+[Azure portal]: https://portal.azure.com/
container-apps Authentication Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-facebook.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with Facebook
+description: Learn to use the built-in Facebook authentication provider in Azure Container Apps.
+ Last updated : 04/06/2022
+# Enable authentication and authorization in Azure Container Apps Preview with Facebook
+
+This article shows how to configure Azure Container Apps to use Facebook as an authentication provider.
+
+To complete the procedure in this article, you need a Facebook account that has a verified email address and a mobile phone number. To create a new Facebook account, go to [facebook.com](https://facebook.com/).
+
+## <a name="facebook-register"> </a>Register your application with Facebook
+
+1. Go to the [Facebook Developers](https://go.microsoft.com/fwlink/p/?LinkId=268286) website and sign in with your Facebook account credentials.
+
+ If you don't have a Facebook for Developers account, select **Get Started** and follow the registration steps.
+1. Select **My Apps** > **Add New App**.
+1. In the **Display Name** field:
+ 1. Type a unique name for your app.
+ 1. Provide your **Contact Email**.
+ 1. Select **Create App ID**.
+ 1. Complete the security check.
+
+ The developer dashboard for your new Facebook app opens.
+1. Select **Dashboard** > **Facebook Login** > **Set up** > **Web**.
+1. In the left navigation under **Facebook Login**, select **Settings**.
+1. In the **Valid OAuth redirect URIs** field, enter `https://<hostname>.azurecontainerapps.io/.auth/login/facebook/callback`. Remember to use the hostname of your container app.
+1. Select **Save Changes**.
+1. In the left pane, select **Settings** > **Basic**.
+1. In the **App Secret** field, select **Show**. Copy the values of **App ID** and **App Secret**. You use them later to configure your container app in Azure.
+
+ > [!IMPORTANT]
+ > The app secret is an important security credential. Do not share this secret with anyone or distribute it within a client application.
+ >
+
+1. The Facebook account that you used to register the application is an administrator of the app. At this point, only administrators can sign in to this application.
+
+ To authenticate other Facebook accounts, select **App Review** and enable **Make \<your-app-name> public** to enable the general public to access the app by using Facebook authentication.
+
+## <a name="facebook-secrets"> </a>Add Facebook information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **Facebook** in the identity provider dropdown. Paste in the App ID and App Secret values that you obtained previously.
+
+ The secret will be stored as a [secret](manage-secrets.md) in your container app.
+
+1. If you're configuring the first identity provider for this application, you'll be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+    These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. (Optional) Select **Next: Scopes** and add any scopes needed by the application. These scopes are requested when a user signs in for browser-based flows.
+1. Select **Add**.
+
+You're now ready to use Facebook for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
+
+<!-- URLs. -->
+[Azure portal]: https://portal.azure.com/
container-apps Authentication Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-github.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with GitHub
+description: Learn to use the built-in GitHub authentication provider in Azure Container Apps.
+ Last updated : 04/20/2022
+# Enable authentication and authorization in Azure Container Apps Preview with GitHub
+
+This article shows how to configure Azure Container Apps to use GitHub as an authentication provider.
+
+To complete the procedure in this article, you need a GitHub account. To create a new GitHub account, go to [GitHub](https://github.com/).
+
+## <a name="github-register"> </a>Register your application with GitHub
+
+1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your GitHub app.
+1. Follow the instructions for [creating an OAuth app on GitHub](https://docs.github.com/developers/apps/building-oauth-apps/creating-an-oauth-app). In the **Authorization callback URL** section, enter the HTTPS URL of your app and append the path `/.auth/login/github/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/github/callback`.
+1. On the application page, make note of the **Client ID**, which you'll need later.
+1. Under **Client Secrets**, select **Generate a new client secret**.
+1. Make note of the client secret value, which you'll need later.
+
+ > [!IMPORTANT]
+ > The client secret is an important security credential. Do not share this secret with anyone or distribute it with your app.
+
+## <a name="github-secrets"> </a>Add GitHub information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **GitHub** in the identity provider dropdown. Paste in the `Client ID` and `Client secret` values that you obtained previously.
+
+ The secret will be stored as a secret in your container app.
+
+1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+    These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. Select **Add**.
+
+You're now ready to use GitHub for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
+
+<!-- URLs. -->
+[Azure portal]: https://portal.azure.com/
container-apps Authentication Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-google.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with Google
+description: Learn to use the built-in Google authentication provider in Azure Container Apps.
+ Last updated : 04/20/2022
+# Enable authentication and authorization in Azure Container Apps Preview with Google
+
+This article shows you how to configure Azure Container Apps to use Google as an authentication provider.
+
+To complete the following procedure, you must have a Google account that has a verified email address. To create a new Google account, go to [accounts.google.com](https://go.microsoft.com/fwlink/p/?LinkId=268302).
+
+## <a name="google-register"> </a>Register your application with Google
+
+1. Follow the Google documentation at [Google Sign-In for server-side apps](https://developers.google.com/identity/sign-in/web/server-side-flow) to create a client ID and client secret. There's no need to make any code changes. Just use the following information:
+ - For **Authorized JavaScript Origins**, use `https://<hostname>.azurecontainerapps.io` with the name of your app in *\<hostname>*.
+ - For **Authorized Redirect URI**, use `https://<hostname>.azurecontainerapps.io/.auth/login/google/callback`.
+1. Copy the App ID and the App secret values.
+
+ > [!IMPORTANT]
+ > The App secret is an important security credential. Do not share this secret with anyone or distribute it within a client application.
+
+## <a name="google-secrets"> </a>Add Google information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **Google** in the identity provider dropdown. Paste in the App ID and App Secret values that you obtained previously.
+
+ The secret will be stored as a [secret](manage-secrets.md) in your container app.
+
+1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+    These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. Select **Add**.
+
+ > [!NOTE]
+    > For adding scope: You can define what permissions your application has in the provider's registration portal. The app can request scopes at login time that leverage these permissions.
+
+You're now ready to use Google for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
+
+<!-- URLs. -->
+[Azure portal]: https://portal.azure.com/
container-apps Authentication Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-openid.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with a Custom OpenID Connect provider
+description: Learn to use the built-in Custom OpenID Connect authentication provider in Azure Container Apps.
+ Last updated : 04/20/2022
+# Enable authentication and authorization in Azure Container Apps Preview with a Custom OpenID Connect provider
+
+This article shows you how to configure Azure Container Apps to use a custom authentication provider that adheres to the [OpenID Connect specification](https://openid.net/connect/). OpenID Connect (OIDC) is an industry standard used by many identity providers (IDPs). You don't need to understand the details of the specification to configure your app to use an IDP that adheres to it.
+
+You can configure your app to use one or more OIDC providers. Each must be given a unique alphanumeric name in the configuration, and only one can serve as the default redirect target.
+
+## <a name="openid-register"> </a>Register your application with the identity provider
+
+Your provider will require you to register the details of your application with it. One of these steps involves specifying a redirect URI. This redirect URI will be of the form `<app-url>/.auth/login/<provider-name>/callback`. Each identity provider should provide more instructions on how to complete these steps.
+
+> [!NOTE]
+> Some providers may require additional configuration steps and may use the values they provide differently. For example, Apple provides a private key that is not itself used as the OIDC client secret; instead, you must use it to craft a JWT, which is treated as the secret you provide in your app config (see the "Creating the Client Secret" section of the [Sign in with Apple documentation](https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens)).
+>
+
+You'll need to collect a **client ID** and **client secret** for your application.
+
+> [!IMPORTANT]
+> The client secret is an important security credential. Do not share this secret with anyone or distribute it within a client application.
+>
+
+Additionally, you'll need the OpenID Connect metadata for the provider. This information is often exposed via a [configuration metadata document](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig), which is the provider's Issuer URL suffixed with `/.well-known/openid-configuration`. Gather this configuration URL.
+
+If you're unable to use a configuration metadata document, you'll need to gather the following values separately:
+
+- The issuer URL (sometimes shown as `issuer`)
+- The [OAuth 2.0 Authorization endpoint](https://tools.ietf.org/html/rfc6749#section-3.1) (sometimes shown as `authorization_endpoint`)
+- The [OAuth 2.0 Token endpoint](https://tools.ietf.org/html/rfc6749#section-3.2) (sometimes shown as `token_endpoint`)
+- The URL of the [OAuth 2.0 JSON Web Key Set](https://tools.ietf.org/html/rfc8414#section-2) document (sometimes shown as `jwks_uri`)
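+
+For reference, a provider's configuration metadata document surfaces these values as JSON under well-known keys, as in the following sketch (the endpoint URLs are placeholders for your provider's actual endpoints):
+
+```json
+{
+  "issuer": "https://idp.example.com/",
+  "authorization_endpoint": "https://idp.example.com/oauth2/authorize",
+  "token_endpoint": "https://idp.example.com/oauth2/token",
+  "jwks_uri": "https://idp.example.com/.well-known/jwks.json"
+}
+```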
+
+## <a name="openid-configure"> </a>Add provider information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+
+1. Select **OpenID Connect** in the identity provider dropdown.
+
+1. Provide the unique alphanumeric name selected earlier for **OpenID provider name**.
+
+1. If you have the URL for the **metadata document** from the identity provider, provide that value for **Metadata URL**. Otherwise, select the **Provide endpoints separately** option and put each URL gathered from the identity provider in the appropriate field.
+
+1. Provide the earlier collected **Client ID** and **Client Secret** in the appropriate fields.
+
+1. Specify an application setting name for your client secret. Your client secret will be stored as a [secret](manage-secrets.md) in your container app.
+
+1. Select **Add** to finish setting up the identity provider.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
container-apps Authentication Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-twitter.md
+
+ Title: Enable authentication and authorization in Azure Container Apps Preview with Twitter
+description: Learn to use the built-in Twitter authentication provider in Azure Container Apps.
+ Last updated : 04/20/2022
+# Enable authentication and authorization in Azure Container Apps Preview with Twitter
+
+This article shows how to configure Azure Container Apps to use Twitter as an authentication provider.
+
+To complete the procedure in this article, you need a Twitter account that has a verified email address and phone number. To create a new Twitter account, go to [twitter.com].
+
+## <a name="twitter-register"> </a>Register your application with Twitter
+
+1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your Twitter app.
+1. Go to the [Twitter Developers] website, sign in with your Twitter account credentials, and select **Create an app**.
+1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your container app and append the path `/.auth/login/twitter/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/twitter/callback`.
+1. At the bottom of the page, type at least 100 characters in **Tell us how this app will be used**, then select **Create**. Select **Create** again in the pop-up. The application details are displayed.
+1. Select the **Keys and Access Tokens** tab.
+
+ Make a note of these values:
+ - API key
+ - API secret key
+
+ > [!IMPORTANT]
+ > The API secret key is an important security credential. Do not share this secret with anyone or distribute it with your app.
+
+## <a name="twitter-secrets"> </a>Add Twitter information to your application
+
+1. Sign in to the [Azure portal] and navigate to your app.
+1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
+1. Select **Twitter** in the identity provider dropdown. Paste in the `API key` and `API secret key` values that you obtained previously.
+
+    The secret will be stored as a [secret](manage-secrets.md) in your container app.
+
+1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
+
+    These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](authentication.md#authentication-flow).
+
+1. Select **Add**.
+
+You're now ready to use Twitter for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+
+## Working with authenticated users
+
+Use the following guides for details on working with authenticated users.
+
+* [Customize sign-in and sign-out](authentication.md#customize-sign-in-and-sign-out)
+* [Access user claims in application code](authentication.md#access-user-claims-in-application-code)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Authentication and authorization overview](authentication.md)
+
+<!-- URLs. -->
+[Azure portal]: https://portal.azure.com/
+[twitter.com]: https://twitter.com/
+[Twitter Developers]: https://developer.twitter.com/
container-apps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md
+
+ Title: Authentication and authorization in Azure Container Apps Preview
+description: Use built-in authentication in Azure Container Apps.
+ Last updated : 04/20/2022
+# Authentication and authorization in Azure Container Apps Preview
+
+Azure Container Apps provides built-in authentication and authorization features (sometimes referred to as "Easy Auth") to secure your external ingress-enabled container app with minimal or no code.
+
+For details surrounding authentication and authorization, refer to the following guides for your choice of provider.
+
+* [Azure Active Directory](authentication-azure-active-directory.md)
+* [Facebook](authentication-facebook.md)
+* [GitHub](authentication-github.md)
+* [Google](authentication-google.md)
+* [Twitter](authentication-twitter.md)
+* [Custom OpenID Connect](authentication-openid.md)
+
+## Why use the built-in authentication?
+
+You're not required to use this feature for authentication and authorization. You can use the bundled security features in your web framework of choice, or you can write your own utilities. However, implementing a secure solution for authentication (signing in users) and authorization (providing access to secure data) can take significant effort. You must make sure to follow industry best practices and standards, and keep your implementation up to date.
+
+The built-in authentication feature for Container Apps can save you time and effort by providing out-of-the-box authentication with federated identity providers, allowing you to focus on the rest of your application.
+
+* Azure Container Apps provides access to various built-in authentication providers.
+* The built-in auth features don't require any particular language, SDK, security expertise, or even any code that you have to write.
+* You can integrate with multiple providers including Azure Active Directory, Facebook, Google, and Twitter.
+
+## Identity providers
+
+Container Apps uses [federated identity](https://en.wikipedia.org/wiki/Federated_identity), in which a third-party identity provider manages the user identities and authentication flow for you. The following identity providers are available by default:
+
+| Provider | Sign-in endpoint | How-To guidance |
+| - | - | - |
+| [Microsoft Identity Platform](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` | [Microsoft Identity Platform](authentication-azure-active-directory.md) |
+| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [Facebook](authentication-facebook.md) |
+| [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps) | `/.auth/login/github` | [GitHub](authentication-github.md) |
+| [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [Google](authentication-google.md) |
+| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [Twitter](authentication-twitter.md) |
+| Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [OpenID Connect](authentication-openid.md) |
+
+When you use one of these providers, the sign-in endpoint is available for user authentication and authentication token validation from the provider. You can provide your users with any number of these provider options.
+
+## Considerations for using built-in authentication
+
+This feature should be used with HTTPS only. Ensure `allowInsecure` is disabled on your container app's ingress configuration.
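+
+For example, in a container app's ARM or Bicep definition, the ingress configuration should resemble the following sketch (the target port shown is illustrative):
+
+```json
+"ingress": {
+  "external": true,
+  "targetPort": 80,
+  "allowInsecure": false
+}
+```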
+
+You can configure your container app for authentication with or without restricting access to your site content and APIs. To restrict app access only to authenticated users, set its *Restrict access* setting to **Require authentication**. To authenticate but not restrict access, set its *Restrict access* setting to **Allow unauthenticated access**.
+
+## Feature architecture
+
+The authentication and authorization middleware component is a feature of the platform that runs as a sidecar container on each replica in your application. When enabled, every incoming HTTP request passes through the security layer before being handled by your application.
+The platform middleware handles several things for your app:
+
+* Authenticates users and clients with the specified identity provider(s)
+* Manages the authenticated session
+* Injects identity information into HTTP request headers
+
+The authentication and authorization module runs in a separate container, isolated from your application code. As the security container doesn't run in-process, no direct integration with specific language frameworks is possible. However, relevant information your app needs is provided in request headers as explained below.
+
+### Authentication flow
+
+The authentication flow is the same for all providers, but differs depending on whether you want to sign in with the provider's SDK:
+
+* **Without provider SDK** (_server-directed flow_ or _server flow_): The application delegates federated sign-in to Container Apps. Delegation is typically used with browser apps, which can present the provider's sign-in page to the user.
+
+* **With provider SDK** (_client-directed flow_ or _client flow_): The application signs users in to the provider manually and then submits the authentication token to Container Apps for validation. This approach is typical for browser-less apps that don't present the provider's sign-in page to the user. An example is a native mobile app that signs users in using the provider's SDK.
+
+Calls from a trusted browser app in Container Apps to another REST API in Container Apps can be authenticated using the server-directed flow. For more information, see [Customize sign-ins and sign-outs](#customize-sign-in-and-sign-out).
+
+The table below shows the steps of the authentication flow.
+
+| Step | Without provider SDK | With provider SDK |
+| - | - | - |
+| 1. Sign user in | Redirects client to `/.auth/login/<PROVIDER>`. | Client code signs user in directly with provider's SDK and receives an authentication token. For information, see the provider's documentation. |
+| 2. Post-authentication | Provider redirects client to `/.auth/login/<PROVIDER>/callback`. | Client code [posts token from provider](#client-directed-sign-in) to `/.auth/login/<PROVIDER>` for validation. |
+| 3. Establish authenticated session | Container Apps adds authenticated cookie to response. | Container Apps returns its own authentication token to client code. |
+| 4. Serve authenticated content | Client includes authentication cookie in subsequent requests (automatically handled by browser). | Client code presents authentication token in `X-ZUMO-AUTH` header. |
+
+For client browsers, Container Apps can automatically direct all unauthenticated users to `/.auth/login/<PROVIDER>`. You can also present users with one or more `/.auth/login/<PROVIDER>` links to sign in to your app using their provider of choice.
+
+### <a name="authorization"></a>Authorization behavior
+
+In the [Azure portal](https://portal.azure.com), you can edit your container app's authentication settings to configure it with various behaviors when an incoming request isn't authenticated. The following headings describe the options.
+
+* **Allow unauthenticated access**: This option defers authorization of unauthenticated traffic to your application code. For authenticated requests, Container Apps also passes along authentication information in the HTTP headers. Your app can use information in the headers to make authorization decisions for a request.
+
+ This option provides more flexibility in handling anonymous requests. For example, it lets you [present multiple sign-in providers](#use-multiple-sign-in-providers) to your users. However, you must write code.
+
+* **Require authentication**: This option rejects any unauthenticated traffic to your application. This rejection can be a redirect action to one of the configured identity providers. In these cases, a browser client is redirected to `/.auth/login/<PROVIDER>` for the provider you choose. If the anonymous request comes from a native mobile app, the returned response is an `HTTP 401 Unauthorized`. You can also configure the rejection to be an `HTTP 401 Unauthorized` or `HTTP 403 Forbidden` for all requests.
+
+ With this option, you don't need to write any authentication code in your app. Finer authorization, such as role-specific authorization, can be handled by inspecting the user's claims (see [Access user claims](#access-user-claims-in-application-code)).
+
+ > [!CAUTION]
+ > Restricting access in this way applies to all calls to your app, which may not be desirable for apps wanting a publicly available home page, as in many single-page applications.
+
+ > [!NOTE]
+ > By default, any user in your Azure AD tenant can request a token for your application from Azure AD. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users.
+
+## Customize sign-in and sign-out
+
+Container Apps Authentication provides built-in endpoints for sign-in and sign-out. When the feature is enabled, these endpoints are available under the `/.auth` route prefix on your container app.
+
+### Use multiple sign-in providers
+
+The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
+
+First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable.
+
+In **Action to take when request is not authenticated**, select **Allow Anonymous requests (no action)**.
+
+On the sign-in page, in the navigation bar, or in any other location of your app, add a sign-in link to each of the providers you enabled (`/.auth/login/<provider>`). For example:
+
+```html
+<a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a>
+<a href="/.auth/login/facebook">Log in with Facebook</a>
+<a href="/.auth/login/google">Log in with Google</a>
+<a href="/.auth/login/twitter">Log in with Twitter</a>
+```
+
+When the user selects one of the links, the sign-in UI for the respective provider is displayed to the user.
+
+To redirect the user post-sign-in to a custom URL, use the `post_login_redirect_uri` query string parameter (not to be confused with the Redirect URI in your identity provider configuration). For example, to navigate the user to `/Home/Index` after sign-in, use the following HTML code:
+
+```html
+<a href="/.auth/login/<provider>?post_login_redirect_uri=/Home/Index">Log in</a>
+```
+
+### Client-directed sign-in
+
+In a client-directed sign-in, the application signs in the user to the identity provider using a provider-specific SDK. The application code then submits the resulting authentication token to Container Apps for validation (see [Authentication flow](authentication.md#authentication-flow)) using an HTTP POST request.
+
+To validate the provider token, the container app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:
+
+```console
+POST https://<hostname>.azurecontainerapps.io/.auth/login/aad HTTP/1.1
+Content-Type: application/json
+
+{"id_token":"<token>","access_token":"<token>"}
+```
+
+The token format varies slightly according to the provider. See the following table for details:
+
+| Provider value | Required in request body | Comments |
+|-|-|-|
+| `aad` | `{"access_token":"<ACCESS_TOKEN>"}` | The `id_token`, `refresh_token`, and `expires_in` properties are optional. |
+| `microsoftaccount` | `{"access_token":"<ACCESS_TOKEN>"}` or `{"authentication_token": "<TOKEN>"}`| `authentication_token` is preferred over `access_token`. The `expires_in` property is optional. <br/> When requesting the token from Live services, always request the `wl.basic` scope. |
+| `google` | `{"id_token":"<ID_TOKEN>"}` | The `authorization_code` property is optional. Providing an `authorization_code` value will add an access token and a refresh token to the token store. When specified, `authorization_code` can also optionally be accompanied by a `redirect_uri` property. |
+| `facebook`| `{"access_token":"<USER_ACCESS_TOKEN>"}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. |
+| `twitter` | `{"access_token":"<ACCESS_TOKEN>", "access_token_secret":"<ACCESS_TOKEN_SECRET>"}` | |
+
+If the provider token is validated successfully, the API returns an `authenticationToken` in the response body, which is your session token.
+
+```json
+{
+ "authenticationToken": "...",
+ "user": {
+ "userId": "sid:..."
+ }
+}
+```
+
+Once you have this session token, you can access protected app resources by adding the `X-ZUMO-AUTH` header to your HTTP requests. For example:
+
+```console
+GET https://<hostname>.azurecontainerapps.io/api/products/1
+X-ZUMO-AUTH: <authenticationToken_value>
+```
+
+### Sign out of a session
+
+Users can initiate a sign-out by sending a `GET` request to the app's `/.auth/logout` endpoint. The `GET` request performs the following actions:
+
+* Clears authentication cookies from the current session.
+* Deletes the current user's tokens from the token store.
+* For Azure Active Directory and Google, performs a server-side sign-out on the identity provider.
+
+Here's a simple sign-out link in a webpage:
+
+```html
+<a href="/.auth/logout">Sign out</a>
+```
+
+By default, a successful sign-out redirects the client to the URL `/.auth/logout/done`. You can change the post-sign-out redirect page by adding the `post_logout_redirect_uri` query parameter. For example:
+
+```console
+GET /.auth/logout?post_logout_redirect_uri=/index.html
+```
+
+It's recommended that you [encode](https://wikipedia.org/wiki/Percent-encoding) the value of `post_logout_redirect_uri`.
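+
+For example, a percent-encoded redirect to `/Home/Index` looks like this:
+
+```console
+GET /.auth/logout?post_logout_redirect_uri=%2FHome%2FIndex
+```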
+
+When you use fully qualified URLs, the URL must be hosted in the same domain.
+
+## Access user claims in application code
+
+For all language frameworks, Container Apps makes the claims in the incoming token available to your application code. The claims are injected into the request headers, which are present whether the request comes from an authenticated end user or a client application. External requests aren't allowed to set these headers, so they're present only if set by Container Apps. Some example headers include:
+
+* `X-MS-CLIENT-PRINCIPAL-NAME`
+* `X-MS-CLIENT-PRINCIPAL-ID`
+
+Code that is written in any language or framework can get the information that it needs from these headers.
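+
+For example, an authenticated request arriving at your application might carry headers similar to the following sketch (the values are illustrative):
+
+```console
+GET /api/orders HTTP/1.1
+X-MS-CLIENT-PRINCIPAL-NAME: user@contoso.com
+X-MS-CLIENT-PRINCIPAL-ID: 00000000-0000-0000-0000-000000000000
+```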
+
+> [!NOTE]
+> Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
+
+## Next steps
+
+Refer to the following articles for details on securing your container app.
+
+* [Azure Active Directory](authentication-azure-active-directory.md)
+* [Facebook](authentication-facebook.md)
+* [GitHub](authentication-github.md)
+* [Google](authentication-google.md)
+* [Twitter](authentication-twitter.md)
+* [Custom OpenID Connect](authentication-openid.md)
cosmos-db Distribute Data Globally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distribute-data-globally.md
As you add and remove regions to and from your Azure Cosmos account, your applic
**Build highly available apps.** Running a database in multiple regions worldwide increases the availability of a database. If one region is unavailable, other regions automatically handle application requests. Azure Cosmos DB offers 99.999% read and write availability for multi-region databases.
-**Maintain business continuity during regional outages.** Azure Cosmos DB supports [automatic failover](how-to-manage-database-account.md#automatic-failover) during a regional outage. During a regional outage, Azure Cosmos DB continues to maintain its latency, availability, consistency, and throughput SLAs. To help make sure that your entire application is highly available, Cosmos DB offers a manual failover API to simulate a regional outage. By using this API, you can carry out regular business continuity drills.
+**Maintain business continuity during regional outages.** Azure Cosmos DB supports [service-managed failover](how-to-manage-database-account.md#automatic-failover) during a regional outage. During a regional outage, Azure Cosmos DB continues to maintain its latency, availability, consistency, and throughput SLAs. To help make sure that your entire application is highly available, Cosmos DB offers a manual failover API to simulate a regional outage. By using this API, you can carry out regular business continuity drills.
**Scale read and write throughput globally.** You can enable every region to be writable and elastically scale reads and writes all around the world. The throughput that your application configures on an Azure Cosmos database or a container is provisioned across all regions associated with your Azure Cosmos account. The provisioned throughput is guaranteed by [financially backed SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/).
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Service-managed failover allows Cosmos DB to fail over the write region of multi
Refer to [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md) for the instructions on how to enable multiple read regions and service-managed failover. > [!IMPORTANT]
-> It is strongly recommended that you configure the Azure Cosmos accounts used for production workloads to **enable automatic failover**. This enables Cosmos DB to failover the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity.
+> It is strongly recommended that you configure the Azure Cosmos accounts used for production workloads to **enable service-managed failover**. This enables Cosmos DB to fail over the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, as manual failover will not succeed due to lack of region connectivity.
### Multiple write regions Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When a Cosmos DB account is configured for multiple write regions, strong consistency isn't supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how to resolve conflicts in multiple write region configurations.
Multi-region accounts will experience different behaviors depending on the follo
| Configuration | Outage | Availability impact | Durability impact| What to do | | -- | -- | -- | -- | -- | | Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency which loses write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-manages failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover**, clients will experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. | ### Additional information on read region outages
The following table summarizes the high availability capability of various accou
* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions and three, if using strong consistency. Remember that the best configuration to achieve high availability for a region outage is single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the SQL API](tutorial-global-distribution-sql-api.md).
-* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user inputs.
+* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable service-managed failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user input.
* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable automatic-failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore automatic-failover for the account.
For single-region accounts, clients will experience loss of read and write avail
Multi-region accounts will experience different behaviors depending on the following table.
-| Write regions | Automatic failover | What to expect | What to do |
+| Write regions | Service-managed failover | What to expect | What to do |
| -- | -- | -- | -- | | Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. | | Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
Next, you can read the following articles:
* [How to configure your Cosmos account with multiple write regions](how-to-multi-master.md)
-* [SDK behavior on multi-regional environments](troubleshoot-sdk-availability.md)
+* [SDK behavior on multi-regional environments](troubleshoot-sdk-availability.md)
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Open the **Replicate Data Globally** tab and select **Enable** to enable multi-region writes.
:::image type="content" source="./media/how-to-manage-database-account/single-to-multi-master.png" alt-text="Azure Cosmos account configures multi-region writes screenshot":::
-## <a id="automatic-failover"></a>Enable automatic failover for your Azure Cosmos account
+## <a id="automatic-failover"></a>Enable service-managed failover for your Azure Cosmos account
-The Automatic failover option allows Azure Cosmos DB to failover to the region with the highest failover priority with no user action should a region become unavailable. When automatic failover is enabled, region priority can be modified. Account must have two or more regions to enable automatic failover.
+The service-managed failover option allows Azure Cosmos DB to fail over to the region with the highest failover priority with no user action should a region become unavailable. When service-managed failover is enabled, region priority can be modified. An account must have two or more regions to enable service-managed failover.
1. From your Azure Cosmos account, open the **Replicate data globally** pane.
The Automatic failover option allows Azure Cosmos DB to failover to the region with the highest failover priority with no user action should a region become unavailable.
After a Cosmos account is configured for automatic failover, the failover priority for regions can be changed.

> [!IMPORTANT]
-> You cannot modify the write region (failover priority of zero) when the account is configured for automatic failover. To change the write region, you must disable automatic failover and do a manual failover.
+> You cannot modify the write region (failover priority of zero) when the account is configured for service-managed failover. To change the write region, you must disable service-managed failover and do a manual failover.
1. From your Azure Cosmos account, open the **Replicate data globally** pane.
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-dotnet.md
ms.devlang: csharp Previously updated : 8/26/2021 Last updated : 4/26/2022
> * [Python](create-mongodb-python.md)
> * [Java](create-mongodb-java.md)
> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
> * [Golang](create-mongodb-go.md)
>
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-go.md
Title: Connect a Go application to Azure Cosmos DB's API for MongoDB description: This quickstart demonstrates how to connect an existing Go application to Azure Cosmos DB's API for MongoDB.--++ ms.devlang: golang Previously updated : 08/26/2021 Last updated : 04/26/2022 # Quickstart: Connect a Go application to Azure Cosmos DB's API for MongoDB
> [!div class="op_single_selector"]
> * [.NET](create-mongodb-dotnet.md)
+> * [Python](create-mongodb-python.md)
> * [Java](create-mongodb-java.md)
> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
> * [Golang](create-mongodb-go.md)
>
cosmos-db Create Mongodb Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-java.md
ms.devlang: java Previously updated : 08/26/2021 Last updated : 04/26/2022
> * [Python](create-mongodb-python.md)
> * [Java](create-mongodb-java.md)
> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
> * [Golang](create-mongodb-go.md)
>
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-nodejs.md
ms.devlang: javascript Previously updated : 08/26/2021 Last updated : 04/26/2022 # Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB
> * [Python](create-mongodb-python.md)
> * [Java](create-mongodb-java.md)
> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
> * [Golang](create-mongodb-go.md)
>
cosmos-db Create Mongodb Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-python.md
Previously updated : 10/22/2021 Last updated : 04/26/2022 ms.devlang: python
> * [Python](create-mongodb-python.md)
> * [Java](create-mongodb-java.md)
> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
> * [Golang](create-mongodb-go.md)
>
cosmos-db Create Mongodb Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-rust.md
- Title: Connect a Rust application to Azure Cosmos DB's API for MongoDB
-description: This quickstart demonstrates how to build a Rust application backed by Azure Cosmos DB's API for MongoDB.
----- Previously updated : 08/26/2021--
-# Quickstart: Connect a Rust application to Azure Cosmos DB's API for MongoDB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
-> * [Golang](create-mongodb-go.md)
-> * [Rust](create-mongodb-rust.md)
->
-
-Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. The sample presented in this article is a simple command-line based application that uses the [Rust driver for MongoDB](https://github.com/mongodb/mongo-rust-driver). Since Azure Cosmos DB's API for MongoDB is [compatible with the MongoDB wire protocol](./mongodb-introduction.md), it is possible for any MongoDB client driver to connect to it.
-
-You will learn how to use the MongoDB Rust driver to interact with Azure Cosmos DB's API for MongoDB by exploring CRUD (create, read, update, delete) operations implemented in the sample code. Finally, you can run the application locally to see it in action.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`.
-- [Rust](https://www.rust-lang.org/tools/install) (version 1.39 or above)
-- [Git](https://git-scm.com/downloads)
-
-## Set up Azure Cosmos DB
-
-To set up an Azure Cosmos DB account, follow the [instructions here](create-mongodb-dotnet.md). The application will need the MongoDB connection string which you can fetch using the Azure portal. For details, see [Get the MongoDB connection string to customize](connect-mongodb-account.md#get-the-mongodb-connection-string-to-customize).
-
-## Run the application
-
-### Clone the sample application
-
-Run the following commands to clone the sample repository.
-
-1. Open a command prompt, create a new folder named `git-samples`, then close the command prompt.
-
- ```bash
- mkdir "C:\git-samples"
- ```
-
-1. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-1. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/cosmosdb-rust-mongodb-quickstart
- ```
-
-### Build the application
-
-To build the binary:
-
-```bash
-cargo build --release
-```
-
-### Configure the application
-
-Export the connection string, MongoDB database, and collection names as environment variables.
-
-```bash
-export MONGODB_URL="mongodb://<COSMOSDB_ACCOUNT_NAME>:<COSMOSDB_PASSWORD>@<COSMOSDB_ACCOUNT_NAME>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@<COSMOSDB_ACCOUNT_NAME>@"
-```
-
-> [!NOTE]
-> The `ssl=true` option is important because of Cosmos DB requirements. For more information, see [Connection string requirements](connect-mongodb-account.md#connection-string-requirements).
->
-
-For the `MONGODB_URL` environment variable, replace the placeholders for `<COSMOSDB_ACCOUNT_NAME>` and `<COSMOSDB_PASSWORD>`.
-
-- `<COSMOSDB_ACCOUNT_NAME>`: The name of the Azure Cosmos DB account you created
-- `<COSMOSDB_PASSWORD>`: The database key extracted in the previous step
-
-```bash
-export MONGODB_DATABASE=todos_db
-export MONGODB_COLLECTION=todos
-```
-
-You can choose your preferred values for `MONGODB_DATABASE` and `MONGODB_COLLECTION` or leave them as is.
-
-To run the application, change to the correct folder (where the application binary exists):
-
-```bash
-cd target/release
-```
-
-To create a `todo`
-
-```bash
-./todo create "Create an Azure Cosmos DB database account"
-```
-
-If successful, you should see an output with the MongoDB `_id` of the newly created document:
-
-```bash
-inserted todo with id = ObjectId("5ffd1ca3004cc935004a0959")
-```
-
-Create another `todo`
-
-```bash
-./todo create "Get the MongoDB connection string using the Azure CLI"
-```
-
-List all the `todo`s
-
-```bash
-./todo list all
-```
-
-You should see the ones you just added:
-
-```bash
-todo_id: 5ffd1ca3004cc935004a0959 | description: Create an Azure Cosmos DB database account | status: pending
-todo_id: 5ffd1cbe003bcec40022c81c | description: Get the MongoDB connection string using the Azure CLI | status: pending
-```
-
-To update the status of a `todo` (for example, change it to `completed` status), use the `todo` ID as such:
-
-```bash
-./todo update 5ffd1ca3004cc935004a0959 completed
-
-#output
-updating todo_id 5ffd1ca3004cc935004a0959 status to completed
-updated status for todo id 5ffd1ca3004cc935004a0959
-```
-
-List only the completed `todo`s
-
-```bash
-./todo list completed
-```
-
-You should see the one you just updated
-
-```bash
-listing 'completed' todos
-
-todo_id: 5ffd1ca3004cc935004a0959 | description: Create an Azure Cosmos DB database account | status: completed
-```
-
-Delete a `todo` using its ID
-
-```bash
-./todo delete 5ffd1ca3004cc935004a0959
-```
-
-List the `todo`s to confirm
-
-```bash
-./todo list all
-```
-
-The `todo` you just deleted should not be present.
-
-### View data in Data Explorer
-
-Data stored in Azure Cosmos DB is available to view and query in the Azure portal.
-
-To view, query, and work with the user data created in the previous step, sign in to the [Azure portal](https://portal.azure.com) in your web browser.
-
-In the top Search box, enter **Azure Cosmos DB**. When your Cosmos account blade opens, select your Cosmos account. In the left navigation, select **Data Explorer**. Expand your collection in the Collections pane, and then you can view the documents in the collection, query the data, and even create and run stored procedures, triggers, and UDFs.
-
-## Review the code (optional)
-
-If you're interested in learning how the application works, you can review the code snippets in this section. The following snippets are taken from the `src/main.rs` file.
-
-The `main` function is the entry point for the `todo` application. It expects the connection URL for Azure Cosmos DB's API for MongoDB to be provided by the `MONGODB_URL` environment variable. A new instance of `TodoManager` is created, followed by a [`match` expression](https://doc.rust-lang.org/book/ch06-02-match.html) that delegates to the appropriate `TodoManager` method based on the operation chosen by the user - `create`, `update`, `list`, or `delete`.
-
-```rust
-fn main() {
- let conn_string = std::env::var_os("MONGODB_URL").expect("missing environment variable MONGODB_URL").to_str().expect("failed to get MONGODB_URL").to_owned();
- let todos_db_name = std::env::var_os("MONGODB_DATABASE").expect("missing environment variable MONGODB_DATABASE").to_str().expect("failed to get MONGODB_DATABASE").to_owned();
- let todos_collection_name = std::env::var_os("MONGODB_COLLECTION").expect("missing environment variable MONGODB_COLLECTION").to_str().expect("failed to get MONGODB_COLLECTION").to_owned();
-
-    let tm = TodoManager::new(conn_string, &todos_db_name, &todos_collection_name);
-
- let ops: Vec<String> = std::env::args().collect();
- let op = ops[1].as_str();
-
- match op {
- CREATE_OPERATION_NAME => tm.add_todo(ops[2].as_str()),
- LIST_OPERATION_NAME => tm.list_todos(ops[2].as_str()),
- UPDATE_OPERATION_NAME => tm.update_todo_status(ops[2].as_str(), ops[3].as_str()),
- DELETE_OPERATION_NAME => tm.delete_todo(ops[2].as_str()),
- _ => panic!(INVALID_OP_ERR_MSG)
- }
-}
-```
-
-`TodoManager` is a `struct` that encapsulates a [mongodb::sync::Collection](https://docs.rs/mongodb/1.1.1/mongodb/sync/struct.Collection.html). When you try to instantiate a `TodoManager` using the `new` function, it initiates a connection to Azure Cosmos DB's API for MongoDB.
-
-```rust
-struct TodoManager {
- coll: Collection
-}
-....
-impl TodoManager{
- fn new(conn_string: String, db_name: &str, coll_name: &str) -> Self{
- let mongo_client = Client::with_uri_str(&*conn_string).expect("failed to create client");
- let todo_coll = mongo_client.database(db_name).collection(coll_name);
-
- TodoManager{coll: todo_coll}
- }
-....
-```
-
-Most importantly, `TodoManager` has methods to help manage `todo`s. Let's go over them one by one.
-
-The `add_todo` method takes in a `todo` description provided by the user and creates an instance of the `Todo` struct, shown below. The [serde](https://github.com/serde-rs/serde) framework is used to map (serialize/de-serialize) BSON data into instances of `Todo` structs. Notice how `serde` field attributes are used to customize the serialization/de-serialization process. For example, the `todo_id` field in the `Todo` struct is an `ObjectId` and it is stored in MongoDB as `_id`.
-
-```rust
-#[derive(Serialize, Deserialize)]
-struct Todo {
- #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
- todo_id: Option<bson::oid::ObjectId>,
- #[serde(rename = "description")]
- desc: String,
- status: String,
-}
-```
-
-[Collection.insert_one](https://docs.rs/mongodb/1.1.1/mongodb/struct.Collection.html#method.insert_one) accepts a [Document](https://docs.rs/bson/1.1.0/bson/document/struct.Document.html) representing the `todo` details to be added. Note that the conversion from `Todo` to a `Document` is a two-step process, achieved using a combination of [to_bson](https://docs.rs/bson/1.1.0/bson/ser/fn.to_bson.html) and [as_document](https://docs.rs/bson/1.1.0/bson/enum.Bson.html#method.as_document).
-
-```rust
-fn add_todo(self, desc: &str) {
- let new_todo = Todo {
- todo_id: None,
- desc: String::from(desc),
- status: String::from(TODO_PENDING_STATUS),
- };
-
- let todo_doc = mongodb::bson::to_bson(&new_todo).expect("struct to BSON conversion failed").as_document().expect("BSON to Document conversion failed").to_owned();
-
- let r = self.coll.insert_one(todo_doc, None).expect("failed to add todo");
- println!("inserted todo with id = {}", r.inserted_id);
-}
-```
-
-[Collection.find](https://docs.rs/mongodb/1.1.1/mongodb/struct.Collection.html#method.find) is used to retrieve *all* the `todo`s or filter them based on the user-provided status (`pending` or `completed`). Note how in the `while` loop, each `Document` obtained as a result of the search is converted into a `Todo` struct using [bson::from_bson](https://docs.rs/bson/1.1.0/bson/de/fn.from_bson.html). This is the opposite of what was done in the `add_todo` method.
-
-```rust
-fn list_todos(self, status_filter: &str) {
- let mut filter = doc!{};
- if status_filter == TODO_PENDING_STATUS || status_filter == TODO_COMPLETED_STATUS{
- println!("listing '{}' todos",status_filter);
- filter = doc!{"status": status_filter}
- } else if status_filter != "all" {
- panic!(INVALID_FILTER_ERR_MSG)
- }
-
- let mut todos = self.coll.find(filter, None).expect("failed to find todos");
-
- while let Some(result) = todos.next() {
- let todo_doc = result.expect("todo not present");
- let todo: Todo = bson::from_bson(Bson::Document(todo_doc)).expect("BSON to struct conversion failed");
- println!("todo_id: {} | description: {} | status: {}", todo.todo_id.expect("todo id missing"), todo.desc, todo.status);
- }
-}
-```
-
-A `todo` status can be updated (from `pending` to `completed` or vice versa). The `todo` ID is converted to a
-[bson::oid::ObjectId](https://docs.rs/bson/1.1.0/bson/oid/struct.ObjectId.html), which is then used by the [Collection.update_one](https://docs.rs/mongodb/1.1.1/mongodb/struct.Collection.html#method.update_one) method to locate the document that needs to be
-updated.
-
-```rust
-fn update_todo_status(self, todo_id: &str, status: &str) {
-
- if status != TODO_COMPLETED_STATUS && status != TODO_PENDING_STATUS {
- panic!(INVALID_FILTER_ERR_MSG)
- }
-
- println!("updating todo_id {} status to {}", todo_id, status);
-
- let id_filter = doc! {"_id": bson::oid::ObjectId::with_string(todo_id).expect("todo_id is not valid ObjectID")};
-
- let r = self.coll.update_one(id_filter, doc! {"$set": { "status": status }}, None).expect("update failed");
- if r.modified_count == 1 {
- println!("updated status for todo id {}",todo_id);
- } else if r.matched_count == 0 {
- println!("could not update. check todo id {}",todo_id);
- }
-}
-```
-
-Deleting a `todo` is straightforward using the [Collection.delete_one](https://docs.rs/mongodb/1.1.1/mongodb/struct.Collection.html#method.delete_one) method.
--
-```rust
-fn delete_todo(self, todo_id: &str) {
- println!("deleting todo {}", todo_id);
-
- let id_filter = doc! {"_id": bson::oid::ObjectId::with_string(todo_id).expect("todo_id is not valid ObjectID")};
-
- self.coll.delete_one(id_filter, None).expect("delete failed").deleted_count;
-}
-```
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account using the Azure Cloud Shell, and create and run a Rust command-line app to manage `todo`s. You can now import additional data to your Azure Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Create Mongodb Xamarin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-xamarin.md
- Title: Build a Xamarin app with .NET and Azure Cosmos DB's API for MongoDB
-description: Presents a Xamarin code sample you can use to connect to and query with Azure Cosmos DB's API for MongoDB
---- Previously updated : 08/26/2021----
-# QuickStart: Build a Xamarin.Forms app with .NET SDK and Azure Cosmos DB's API for MongoDB
-
-> [!div class="op_single_selector"]
-> * [.NET](create-mongodb-dotnet.md)
-> * [Python](create-mongodb-python.md)
-> * [Java](create-mongodb-java.md)
-> * [Node.js](create-mongodb-nodejs.md)
-> * [Xamarin](create-mongodb-xamarin.md)
-> * [Golang](create-mongodb-go.md)
->
-
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-
-This quickstart demonstrates how to create a [Cosmos account configured with Azure Cosmos DB's API for MongoDB](mongodb-introduction.md), document database, and collection using the Azure portal. You'll then build a todo Xamarin.Forms app by using the [MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/).
-
-## Prerequisites to run the sample app
-
-To run the sample, you'll need [Visual Studio](https://www.visualstudio.com/downloads/) or [Visual Studio for Mac](https://visualstudio.microsoft.com/vs/mac/) and a valid Azure Cosmos DB account.
-
-If you don't already have Visual Studio, download [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/) with the **Mobile development with .NET** workload installed during setup.
-
-If you prefer to work on a Mac, download [Visual Studio for Mac](https://visualstudio.microsoft.com/vs/mac/) and run the setup.
--
-<a id="create-account"></a>
-
-## Create a database account
--
-The sample described in this article is compatible with MongoDB.Driver version 2.6.1.
-
-## Clone the sample app
-
-First, download the sample app from GitHub. It implements a todo app with MongoDB's document storage model.
---
-# [Windows](#tab/windows)
-
-1. On Windows, open a command prompt, or on Mac, open the terminal. Create a new folder named git-samples, then close the window.
-
- ```batch
- md "C:\git-samples"
- ```
-
- ```bash
-    mkdir "$HOME/git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-mongodb-xamarin-getting-started.git
- ```
-
-If you don't wish to use git, you can also [download the project as a ZIP file](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-xamarin-getting-started/archive/master.zip)
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
-
-The following snippets are all taken from the `MongoService` class, found at the following path: src/TaskList.Core/Services/MongoService.cs.
-
-* Initialize the Mongo Client.
- ```cs
- MongoClientSettings settings = MongoClientSettings.FromUrl(new MongoUrl(APIKeys.ConnectionString));
-
- settings.SslSettings = new SslSettings() { EnabledSslProtocols = SslProtocols.Tls12 };
-
- settings.RetryWrites = false;
-
- MongoClient mongoClient = new MongoClient(settings);
- ```
-
-* Retrieve a reference to the database and collection. The MongoDB .NET SDK will automatically create both the database and collection if they do not already exist.
- ```cs
- string dbName = "MyTasks";
- string collectionName = "TaskList";
-
- var db = mongoClient.GetDatabase(dbName);
-
- var collectionSettings = new MongoCollectionSettings
- {
- ReadPreference = ReadPreference.Nearest
- };
-
- tasksCollection = db.GetCollection<MyTask>(collectionName, collectionSettings);
- ```
-* Retrieve all documents as a List.
- ```cs
- var allTasks = await TasksCollection
- .Find(new BsonDocument())
- .ToListAsync();
- ```
-
-* Query for particular documents.
- ```cs
- public async Task<List<MyTask>> GetIncompleteTasksDueBefore(DateTime date)
- {
- var tasks = await TasksCollection
- .AsQueryable()
- .Where(t => t.Complete == false)
- .Where(t => t.DueDate < date)
- .ToListAsync();
-
- return tasks;
- }
- ```
-
-* Create a task and insert it into the collection.
- ```cs
- public async Task CreateTask(MyTask task)
- {
- await TasksCollection.InsertOneAsync(task);
- }
- ```
-
-* Update a task in a collection.
- ```cs
- public async Task UpdateTask(MyTask task)
- {
- await TasksCollection.ReplaceOneAsync(t => t.Id.Equals(task.Id), task);
- }
- ```
-
-* Delete a task from a collection.
- ```cs
- public async Task DeleteTask(MyTask task)
- {
- await TasksCollection.DeleteOneAsync(t => t.Id.Equals(task.Id));
- }
- ```
-
-<a id="update-your-connection-string"></a>
-
-## Update your connection string
-
-Now go back to the Azure portal to get your connection string information and copy it into the app.
-
-1. In the [Azure portal](https://portal.azure.com/), in your Azure Cosmos DB account, in the left navigation click **Connection String**, and then click **Read-write Keys**. You'll use the copy buttons on the right side of the screen to copy the Primary Connection String in the next steps.
-
-2. Open the **APIKeys.cs** file in the **Helpers** directory of the **TaskList.Core** project.
-
-3. Copy your **primary connection string** value from the portal (using the copy button) and make it the value of the **ConnectionString** field in your **APIKeys.cs** file.
-
-4. Remove `&replicaSet=globaldb` from the connection string. You will get a runtime error if you do not remove that value from the query string.
-
-> [!IMPORTANT]
-> You must remove the `&replicaSet=globaldb` key/value pair from the connection string's query string in order to avoid a runtime error.
-
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
-
-## Run the app
-
-### Visual Studio 2019
-
-1. In Visual Studio, right-click on each project in **Solution Explorer** and then click **Manage NuGet Packages**.
-2. Click **Restore all NuGet packages**.
-3. Right-click the **TaskList.Android** project and select **Set as startup project**.
-4. Press F5 to start debugging the application.
-5. If you want to run on iOS, first ensure your machine is connected to a Mac (here are [instructions](/xamarin/ios/get-started/installation/windows/introduction-to-xamarin-ios-for-visual-studio) on how to do so).
-6. Right-click the **TaskList.iOS** project and select **Set as startup project**.
-7. Press F5 to start debugging the application.
-
-### Visual Studio for Mac
-
-1. In the platform dropdown list, select either TaskList.iOS or TaskList.Android, depending on which platform you want to run on.
-2. Press cmd+Enter to start debugging the application.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you've learned how to create an Azure Cosmos DB account and run a Xamarin.Forms app using the API for MongoDB. You can now import additional data to your Cosmos DB account.
-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB configured with Azure Cosmos DB's API for MongoDB](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Monitor Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db.md
For example, the following table lists a few alert rules for your resources.
| Alert type | Condition | Description |
|:|:|:|
|Rate limiting on request units (metric alert) |Dimension name: StatusCode, Operator: Equals, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. |
-|Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable automatic failover. |
+|Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
| Rotate keys (activity log alert)| Event level: Informational, Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |

## <a id="monitor-cosmosdb-programmatically"></a> Monitor Azure Cosmos DB programmatically
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-cli.md
The following sections demonstrate how to manage the Azure Cosmos account, including:
* [Add or remove regions](#add-or-remove-regions)
* [Enable multi-region writes](#enable-multiple-write-regions)
* [Set regional failover priority](#set-failover-priority)
-* [Enable automatic failover](#enable-automatic-failover)
+* [Enable service-managed failover](#enable-automatic-failover)
* [Trigger manual failover](#trigger-manual-failover)
* [List account keys](#list-account-keys)
* [List read-only account keys](#list-read-only-account-keys)
az cosmosdb update --ids $accountId --enable-multiple-write-locations true
### Set failover priority
-Set the failover priority for an Azure Cosmos account configured for automatic failover
+Set the failover priority for an Azure Cosmos account configured for service-managed failover
```azurecli-interactive
# Assume region order is initially 'West US'=0 'East US'=1 'South Central US'=2 for account
az cosmosdb failover-priority-change --ids $accountId \
   --failover-policies 'West US=0' 'South Central US=1' 'East US=2'
```
-### Enable automatic failover
+### Enable service-managed failover
```azurecli-interactive
-# Enable automatic failover on an existing account
+# Enable service-managed failover on an existing account
resourceGroupName='myResourceGroup'
accountName='mycosmosaccount'

az cosmosdb update --name $accountName --resource-group $resourceGroupName --enable-automatic-failover true
```
For more information on the Azure CLI, see:
* [Install Azure CLI](/cli/azure/install-azure-cli)
* [Azure CLI Reference](/cli/azure/cosmosdb)
-* [Additional Azure CLI samples for Azure Cosmos DB](cli-samples.md)
+* [Additional Azure CLI samples for Azure Cosmos DB](cli-samples.md)
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-powershell.md
The following sections demonstrate how to manage the Azure Cosmos account, including:
### <a id="create-account"></a> Create an Azure Cosmos account
-This command creates an Azure Cosmos DB database account with [multiple regions][distribute-data-globally], [automatic failover](../how-to-manage-database-account.md#automatic-failover) and bounded-staleness [consistency policy](../consistency-levels.md).
+This command creates an Azure Cosmos DB database account with [multiple regions][distribute-data-globally], [service-managed failover](../how-to-manage-database-account.md#automatic-failover) and bounded-staleness [consistency policy](../consistency-levels.md).
```azurepowershell-interactive
$resourceGroupName = "myResourceGroup"
$accountName = "mycosmosaccount"
$enableAutomaticFailover = $false
$enableMultiMaster = $true
-# First disable automatic failover - cannot have both automatic
+# First disable service-managed failover - cannot have both service-managed
# failover and multi-region writes on an account
Update-AzCosmosDBAccount `
    -ResourceGroupName $resourceGroupName `
New-AzCosmosDBAccountKey `
    -KeyKind $keyKind
```
-### <a id="enable-automatic-failover"></a> Enable automatic failover
+### <a id="enable-automatic-failover"></a> Enable service-managed failover
The following command sets a Cosmos DB account to fail over automatically to its secondary region should the primary region become unavailable.
Update-AzCosmosDBAccount `
    -Name $accountName `
    -EnableMultipleWriteLocations:$enableMultiMaster
-# Now enable automatic failover
+# Now enable service-managed failover
Update-AzCosmosDBAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $accountName `
Update-AzCosmosDBAccount `
### <a id="modify-failover-priority"></a> Modify Failover Priority
-For accounts configured with Automatic Failover, you can change the order in which Cosmos will promote secondary replicas to primary should the primary become unavailable.
+For accounts configured with service-managed failover, you can change the order in which Cosmos will promote secondary replicas to primary should the primary become unavailable.
For the example below, assume the current failover priority is `West US = 0`, `East US = 1`, `South Central US = 2`. The command will change this to `West US = 0`, `South Central US = 1`, `East US = 2`.
cosmos-db Sql Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-sdk-connection-modes.md
Previously updated : 10/05/2021 Last updated : 04/26/2022
The following table shows a summary of the connectivity modes available for various APIs and SDKs.
|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB API for MongoDB instance without geo-replication, whereas the ports 10255 and 10256 map to the instance that has geo-replication. |
|Direct | TCP | .NET SDK, Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range |
+## <a id="direct-mode"></a> Direct mode connection architecture
+
+As detailed in the [introduction](#available-connectivity-modes), clients using Direct mode connect directly to the backend nodes through the TCP protocol. Each backend node represents a replica in a [replica set](../partitioning-overview.md#replica-sets) belonging to a [physical partition](../partitioning-overview.md#physical-partitions).
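For illustration, the following is a minimal sketch of choosing between the two modes when building a client with the Java SDK v4; the endpoint and key values are placeholders.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;

public class ConnectionModeSketch {
    public static void main(String[] args) {
        // Direct mode: data operations travel over TCP straight to the backend replicas.
        CosmosClient directClient = new CosmosClientBuilder()
            .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
            .key("<your-account-key>")                                   // placeholder
            .directMode()
            .buildClient();

        // Gateway mode: all operations are proxied over HTTPS through Gateway nodes.
        CosmosClient gatewayClient = new CosmosClientBuilder()
            .endpoint("https://<your-account>.documents.azure.com:443/")
            .key("<your-account-key>")
            .gatewayMode()
            .buildClient();

        directClient.close();
        gatewayClient.close();
    }
}
```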
+
+### Routing
+
+When an Azure Cosmos DB SDK on Direct mode is performing an operation, it needs to resolve which backend replica to connect to. The first step is knowing which physical partition the operation should go to. For that, the SDK obtains the container information, which includes the [partition key definition](../partitioning-overview.md#choose-partitionkey), from a Gateway node; this information is considered [metadata](../concepts-limits.md#metadata-request-limits). The SDK also needs the routing information that contains the replicas' TCP addresses, which is likewise available from Gateway nodes. Once the SDK obtains the routing information, it can open TCP connections to the replicas belonging to the target physical partition and execute the operations.
+
+Each replica set contains one primary replica and three secondaries. Write operations are always routed to primary replica nodes while read operations can be served from primary or secondary nodes.
++
+Because the container and routing information don't change often, the SDK caches it locally so subsequent operations can benefit from it. TCP connections that are already established are also reused across operations. Unless otherwise configured through the SDK options, connections are permanently maintained during the lifetime of the SDK instance.
+
+### Volume of connections
+
+Each physical partition has a replica set of four replicas. In order to provide the best possible performance, SDKs will end up opening connections to all replicas for workloads that mix write and read operations. Concurrent operations are load balanced across existing connections to take advantage of the throughput each replica provides.
+
+There are two factors that dictate the number of TCP connections the SDK will open:
+
+* Number of physical partitions
+
+ In a steady state, the SDK will have one connection per replica per physical partition. The larger the number of physical partitions in a container, the larger the number of open connections will be. As operations are routed across different partitions, connections are established on demand. The average number of connections would then be the number of physical partitions times four.
+
+* Volume of concurrent requests
+
+ Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
+
+By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There are scenarios where you might want to close connections that are unused for some time, with the understanding that this might slightly affect future operations. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize it using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values, as that might cause connections to be frequently closed and affect overall performance.
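As a hedged sketch of the Java SDK settings referenced above, the following applies both knobs through `DirectConnectionConfig`; the numeric values are illustrative, not recommendations.

```java
import java.time.Duration;

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.DirectConnectionConfig;

public class DirectModeTuningSketch {
    public static void main(String[] args) {
        DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig()
            // Cap on concurrent requests multiplexed over a single TCP connection;
            // beyond this threshold, the SDK opens additional connections.
            .setMaxRequestsPerConnection(30)
            // Close connections that sit unused for this long.
            .setIdleConnectionTimeout(Duration.ofMinutes(10));

        CosmosClient client = new CosmosClientBuilder()
            .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
            .key("<your-account-key>")                                   // placeholder
            .directMode(directConfig)
            .buildClient();

        client.close();
    }
}
```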
+
+### Language-specific implementation details
+
+For language-specific implementation details, see:
+
+* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/SdkDesign.md)
+* [Java SDK direct mode information](performance-tips-java-sdk-v4-sql.md#direct-connection)
+
## Next steps

For specific SDK platform performance optimizations:
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
Last updated 05/28/2020
-# Quickstart: Build a Java app to manage Azure Cosmos DB Table API data
+# Quickstart: Build a Table API app with Java SDK and Azure Cosmos DB
+[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]

> [!div class="op_single_selector"]
> * [Python](how-to-use-python.md)
>
-In this quickstart, you create an Azure Cosmos DB Table API account, and use Data Explorer and a Java app cloned from GitHub to create tables and entities. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+This quickstart shows how to access the Azure Cosmos DB [Tables API](introduction.md) from a Java application. The Cosmos DB Tables API is a schemaless data store that allows applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added.
+
+Java applications can access the Cosmos DB Tables API using the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) client library.
## Prerequisites

-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
-- A [Maven binary archive](https://maven.apache.org/download.cgi).
-- [Git](https://www.git-scm.com/downloads).
+The sample application is written in [Spring Boot 2.6.4](https://spring.io/projects/spring-boot). You can use either [Visual Studio Code](https://code.visualstudio.com/) or [IntelliJ IDEA](https://www.jetbrains.com/idea/) as an IDE.
++
+## Sample application
+
+The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java). Both a starter and completed app are included in the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-java
+```
+
+The sample application uses weather data as an example to demonstrate the capabilities of the Tables API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Tables API.
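To make the schemaless behavior concrete, here is a hedged sketch (not taken from the sample repository) of storing an observation that carries an extra property; the table name, property names, and the `COSMOS_TABLE_CONNECTION_STRING` environment variable are illustrative assumptions.

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableClientBuilder;
import com.azure.data.tables.models.TableEntity;

public class SchemalessInsertSketch {
    public static void main(String[] args) {
        TableClient tableClient = new TableClientBuilder()
            .connectionString(System.getenv("COSMOS_TABLE_CONNECTION_STRING")) // assumed variable
            .tableName("WeatherData")
            .buildClient();

        // Partition key = station name, row key = observation date/time.
        TableEntity observation = new TableEntity("Chicago", "2021-07-01-12:00")
            .addProperty("Temperature", 87)
            .addProperty("Humidity", 42)
            // A brand-new property simply becomes a new column; no schema change is needed.
            .addProperty("BarometricPressure", 29.92);

        tableClient.createEntity(observation);
    }
}
```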
++
+## 1 - Create an Azure Cosmos DB account
+
+You first need to create a Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
+
+| Instructions | Screenshot |
+|:|--:|
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos D B accounts in Azure." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db account step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos D B accounts page in Azure." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db account step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create cosmos db account step 4](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos D B Account creation page." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
-## Create a database account
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az_cosmosdb_create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
-> [!IMPORTANT]
-> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs.
->
+Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-## Add a table
+It typically takes several minutes for the Cosmos DB account creation process to complete.
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
-## Add sample data
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
+```
-## Clone the sample application
+### [Azure PowerShell](#tab/azure-powershell)
-Now let's clone a Table app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Azure Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
- ```bash
- md "C:\git-samples"
- ```
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+It typically takes several minutes for the Cosmos DB account creation process to complete.
- ```bash
- cd "C:\git-samples"
- ```
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
- ```bash
- git clone https://github.com/Azure-Samples/storage-table-java-getting-started.git
- ```
+# Create an Azure Cosmos DB account
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
+```
-> [!TIP]
-> For a more detailed walkthrough of similar code, see the [Cosmos DB Table API sample](how-to-use-java.md) article.
++
+## 2 - Create a table
+
+Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
+
+| Instructions | Screenshot |
+|:--|--:|
+| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos D B account." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing how to New Table dialog box for a Cosmos D B table." lightbox="./media/create-table-java/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
-## Review the code
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az_cosmosdb_table_create) command.
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [update the connection string](#update-your-connection-string) section of this doc.
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
+```
++
-* The following code shows how to create a table within the Azure Storage:
+## 3 - Get Cosmos DB connection string
+
+To access your table(s) in Cosmos DB, your app will need the table connection string for your Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
+
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:--|--:|
+| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos D B page." lightbox="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing the which connection string to select and use in your application." lightbox="./media/create-table-java/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary Table connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary Table connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+++
+The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password. This example uses the POM to store the connection string during development and make it available to the application.
+
+```xml
+<profiles>
+ <profile>
+ <id>local</id>
+ <properties>
+ <azure.tables.connection.string>
+ <![CDATA[YOUR-DATA-TABLES-SERVICE-CONNECTION-STRING]]>
+ </azure.tables.connection.string>
+ <azure.tables.tableName>WeatherData</azure.tables.tableName>
+ </properties>
+ <activation>
+ <activeByDefault>true</activeByDefault>
+ </activation>
+ </profile>
+</profiles>
+```
+
+## 4 - Include the azure-data-tables package
+
+To access the Cosmos DB Tables API from a Java application, include the [azure-data-tables](https://search.maven.org/artifact/com.azure/azure-data-tables) package.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-data-tables</artifactId>
+ <version>12.2.1</version>
+</dependency>
+```
+++
+## 5 - Configure the Table client in TableServiceConfig.java
+
+The Azure SDK uses client objects to execute operations against Azure. The [TableClient](/java/api/com.azure.data.tables.tableclient) object is the object used to communicate with the Cosmos DB Tables API.
+
+An application will typically create a single [TableClient](/java/api/com.azure.data.tables.tableclient) object per table to be used throughout the application. To accomplish this, it's recommended to define a method that produces a [TableClient](/java/api/com.azure.data.tables.tableclient) bean managed by the Spring container as a singleton.
+
+In the `TableServiceConfig.java` file of the application, edit the `tableClientConfiguration()` method to match the following code snippet:
+
+```java
+@Configuration
+public class TableServiceConfiguration {
+
+ private static String TABLE_NAME;
+
+ private static String CONNECTION_STRING;
+
+ @Value("${azure.tables.connection.string}")
+ public void setConnectionStringStatic(String connectionString) {
+ TableServiceConfiguration.CONNECTION_STRING = connectionString;
+ }
+
+ @Value("${azure.tables.tableName}")
+ public void setTableNameStatic(String tableName) {
+ TableServiceConfiguration.TABLE_NAME = tableName;
+ }
+
+ @Bean
+ public TableClient tableClientConfiguration() {
+ return new TableClientBuilder()
+ .connectionString(CONNECTION_STRING)
+ .tableName(TABLE_NAME)
+ .buildClient();
+ }
+
+}
+```
- ```java
- private static CloudTable createTable(CloudTableClient tableClient, String tableName) throws StorageException, RuntimeException, IOException, InvalidKeyException, IllegalArgumentException, URISyntaxException, IllegalStateException {
-
- // Create a new table
- CloudTable table = tableClient.getTableReference(tableName);
- try {
- if (table.createIfNotExists() == false) {
- throw new IllegalStateException(String.format("Table with name \"%s\" already exists.", tableName));
+You'll also need to add the following import statements at the top of the `TableServiceConfig.java` file.
+
+```java
+import com.azure.data.tables.TableClient;
+import com.azure.data.tables.TableClientBuilder;
+```
+
+## 6 - Implement Cosmos DB table operations
+
+All Cosmos DB table operations for the sample app are implemented in the `TablesServiceImpl` class located in the *Services* directory. You'll need to import the `com.azure.data.tables` SDK package.
+
+```java
+import com.azure.data.tables.TableClient;
+import com.azure.data.tables.models.ListEntitiesOptions;
+import com.azure.data.tables.models.TableEntity;
+import com.azure.data.tables.models.TableTransactionAction;
+import com.azure.data.tables.models.TableTransactionActionType;
+```
+
+At the start of the `TableServiceImpl` class, add a member variable for the [TableClient](/java/api/com.azure.data.tables.tableclient) object, annotated so that the Spring container injects the [TableClient](/java/api/com.azure.data.tables.tableclient) object into the class.
+
+```java
+@Autowired
+private TableClient tableClient;
+```
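If you prefer constructor injection over field injection, a hedged equivalent sketch follows; the `@Service` stereotype is an assumption about how the sample class is declared.

```java
import org.springframework.stereotype.Service;

import com.azure.data.tables.TableClient;

@Service // assumed stereotype; use whatever annotation the sample class declares
public class TableServiceImpl {

    private final TableClient tableClient;

    // Spring supplies the TableClient bean produced in TableServiceConfig.
    public TableServiceImpl(TableClient tableClient) {
        this.tableClient = tableClient;
    }
}
```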
+
+### Get rows from a table
+
+The [TableClient](/java/api/com.azure.data.tables.tableclient) class contains a method named [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+
+The method also takes a generic parameter of type [TableEntity](/java/api/com.azure.data.tables.models.tableentity) that specifies the model class data will be returned as. In this case, the built-in class [TableEntity](/java/api/com.azure.data.tables.models.tableentity) is used, meaning the `listEntities` method will return a `PagedIterable<TableEntity>` collection as its results.
+
+```java
+public List<WeatherDataModel> retrieveAllEntities() {
+ List<WeatherDataModel> modelList = tableClient.listEntities().stream()
+ .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
+ .collect(Collectors.toList());
+ return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
+}
+```
+
+The [TableEntity](/java/api/com.azure.data.tables.models.tableentity) class defined in the `com.azure.data.tables.models` package has properties for the partition key and row key values in the table. Together, these two values form a unique key for the row in the table. In this example application, the name of the weather station (city) is stored in the partition key and the date/time of the observation is stored in the row key. All other properties (temperature, humidity, wind speed) are stored in a dictionary in the `TableEntity` object.
+
+It's common practice to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to an object of your own definition. The sample application defines a class `WeatherDataModel` in the *Models* directory for this purpose. This class has properties for the station name and observation date that the partition key and row key will map to, providing more meaningful property names for these values. It then uses a dictionary to store all the other properties on the object. This is a common pattern when working with Table storage since a row can have any number of arbitrary properties and we want our model objects to be able to capture all of them. This class also contains methods to list the properties on the class.
+
+```java
+public class WeatherDataModel {
+
+ public WeatherDataModel(String stationName, String observationDate, OffsetDateTime timestamp, String etag) {
+ this.stationName = stationName;
+ this.observationDate = observationDate;
+ this.timestamp = timestamp;
+ this.etag = etag;
+ }
+
+ private String stationName;
+
+ private String observationDate;
+
+ private OffsetDateTime timestamp;
+
+ private String etag;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
+
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
+
+ public String getObservationDate() {
+ return observationDate;
+ }
+
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
+
+ public OffsetDateTime getTimestamp() {
+ return timestamp;
+ }
+
+ public void setTimestamp(OffsetDateTime timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ public String getEtag() {
+ return etag;
+ }
+
+ public void setEtag(String etag) {
+ this.etag = etag;
+ }
+
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
+
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+}
+```
+
+The `mapTableEntityToWeatherDataModel` method is used to map a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object to a `WeatherDataModel` object. It directly maps the `PartitionKey`, `RowKey`, `Timestamp`, and `Etag` properties, and then uses `properties.keySet()` to iterate over the remaining properties in the `TableEntity` object and map them to the `WeatherDataModel` object, excluding the properties that have already been directly mapped.
+
+Edit the code in the `mapTableEntityToWeatherDataModel` method to match the following code block.
+
+```java
+public static WeatherDataModel mapTableEntityToWeatherDataModel(TableEntity entity) {
+ WeatherDataModel observation = new WeatherDataModel(
+ entity.getPartitionKey(), entity.getRowKey(),
+ entity.getTimestamp(), entity.getETag());
+ rearrangeEntityProperties(observation.getPropertyMap(), entity.getProperties());
+ return observation;
+}
+
+private static void rearrangeEntityProperties(Map<String, Object> target, Map<String, Object> source) {
+ Constants.DEFAULT_LIST_OF_KEYS.forEach(key -> {
+ if (source.containsKey(key)) {
+ target.put(key, source.get(key));
+ }
+ });
+ source.keySet().forEach(key -> {
+ if (Constants.DEFAULT_LIST_OF_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))
+ && Constants.EXCLUDE_TABLE_ENTITY_KEYS.parallelStream().noneMatch(defaultKey -> defaultKey.equals(key))) {
+ target.put(key, source.get(key));
}
+ });
+}
+```
+
+### Filter rows returned from a table
+To filter the rows returned from a table, you can pass an OData-style filter string to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive), you would pass in the following filter string.
+
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
+```
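+
+As a sketch of how that filter might be applied from Java (using the same injected `tableClient` and the `ListEntitiesOptions` class imported earlier), you could write something like the following:
+
+```java
+// Select only Chicago rows whose row keys fall inside the date range.
+ListEntitiesOptions options = new ListEntitiesOptions()
+    .setFilter("PartitionKey eq 'Chicago' "
+        + "and RowKey ge '2021-07-01 12:00 AM' "
+        + "and RowKey le '2021-07-02 12:00 AM'");
+
+// Only rows matching the filter are returned by the service.
+tableClient.listEntities(options, null, null)
+    .forEach(entity -> System.out.println(entity.getRowKey()));
+```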
+
+You can view all OData filter operators on the OData website in the section [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/).
+
+In the example application, the `FilterResultsInputModel` object is designed to capture any filter criteria provided by the user.
+
+```java
+public class FilterResultsInputModel implements Serializable {
+
+ private String partitionKey;
+
+ private String rowKeyDateStart;
+
+ private String rowKeyTimeStart;
+
+ private String rowKeyDateEnd;
+
+ private String rowKeyTimeEnd;
+
+ private Double minTemperature;
+
+ private Double maxTemperature;
+
+ private Double minPrecipitation;
+
+ private Double maxPrecipitation;
+
+ public String getPartitionKey() {
+ return partitionKey;
+ }
+
+ public void setPartitionKey(String partitionKey) {
+ this.partitionKey = partitionKey;
+ }
+
+ public String getRowKeyDateStart() {
+ return rowKeyDateStart;
+ }
+
+ public void setRowKeyDateStart(String rowKeyDateStart) {
+ this.rowKeyDateStart = rowKeyDateStart;
+ }
+
+ public String getRowKeyTimeStart() {
+ return rowKeyTimeStart;
+ }
+
+ public void setRowKeyTimeStart(String rowKeyTimeStart) {
+ this.rowKeyTimeStart = rowKeyTimeStart;
+ }
+
+ public String getRowKeyDateEnd() {
+ return rowKeyDateEnd;
+ }
+
+ public void setRowKeyDateEnd(String rowKeyDateEnd) {
+ this.rowKeyDateEnd = rowKeyDateEnd;
+ }
+
+ public String getRowKeyTimeEnd() {
+ return rowKeyTimeEnd;
+ }
+
+ public void setRowKeyTimeEnd(String rowKeyTimeEnd) {
+ this.rowKeyTimeEnd = rowKeyTimeEnd;
+ }
+
+ public Double getMinTemperature() {
+ return minTemperature;
+ }
+
+ public void setMinTemperature(Double minTemperature) {
+ this.minTemperature = minTemperature;
+ }
+
+ public Double getMaxTemperature() {
+ return maxTemperature;
+ }
+
+ public void setMaxTemperature(Double maxTemperature) {
+ this.maxTemperature = maxTemperature;
}
- catch (StorageException s) {
- if (s.getCause() instanceof java.net.ConnectException) {
- System.out.println("Caught connection exception from the client. If running with the default configuration please make sure you have started the storage emulator.");
+
+ public Double getMinPrecipitation() {
+ return minPrecipitation;
+ }
+
+ public void setMinPrecipitation(Double minPrecipitation) {
+ this.minPrecipitation = minPrecipitation;
+ }
+
+ public Double getMaxPrecipitation() {
+ return maxPrecipitation;
+ }
+
+ public void setMaxPrecipitation(Double maxPrecipitation) {
+ this.maxPrecipitation = maxPrecipitation;
+ }
+}
+```
+
+When this object is passed to the `retrieveEntitiesByFilter` method in the `TableServiceImpl` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the [listEntities](/java/api/com.azure.data.tables.tableclient.listentities) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
+
+```java
+public List<WeatherDataModel> retrieveEntitiesByFilter(FilterResultsInputModel model) {
+
+ List<String> filters = new ArrayList<>();
+
+ if (!StringUtils.isEmptyOrWhitespace(model.getPartitionKey())) {
+ filters.add(String.format("PartitionKey eq '%s'", model.getPartitionKey()));
+ }
+ if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateStart())
+ && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeStart())) {
+ filters.add(String.format("RowKey ge '%s %s'", model.getRowKeyDateStart(), model.getRowKeyTimeStart()));
+ }
+ if (!StringUtils.isEmptyOrWhitespace(model.getRowKeyDateEnd())
+ && !StringUtils.isEmptyOrWhitespace(model.getRowKeyTimeEnd())) {
+ filters.add(String.format("RowKey le '%s %s'", model.getRowKeyDateEnd(), model.getRowKeyTimeEnd()));
+ }
+ if (model.getMinTemperature() != null) {
+ filters.add(String.format("Temperature ge %f", model.getMinTemperature()));
+ }
+ if (model.getMaxTemperature() != null) {
+ filters.add(String.format("Temperature le %f", model.getMaxTemperature()));
+ }
+ if (model.getMinPrecipitation() != null) {
+ filters.add(String.format("Precipitation ge %f", model.getMinPrecipitation()));
+ }
+ if (model.getMaxPrecipitation() != null) {
+ filters.add(String.format("Precipitation le %f", model.getMaxPrecipitation()));
+ }
+
+ List<WeatherDataModel> modelList = tableClient.listEntities(new ListEntitiesOptions()
+ .setFilter(String.join(" and ", filters)), null, null).stream()
+ .map(WeatherDataUtils::mapTableEntityToWeatherDataModel)
+ .collect(Collectors.toList());
+ return Collections.unmodifiableList(WeatherDataUtils.filledValue(modelList));
+}
+```
+
+### Insert data using a TableEntity object
+
+The simplest way to add data to a table is by using a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. In this example, data is mapped from an input model object to a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object. Finally, the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object is used to insert data into the table.
+
+Modify the `insertEntity` method in the example application to contain the following code.
+
+```java
+public void insertEntity(WeatherInputModel model) {
+ tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
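+
+The `WeatherDataUtils.createTableEntity` helper isn't shown in this article. A minimal sketch of such a mapping follows; the measurement getters (`getTemperature`, `getPrecipitation`) are assumptions for illustration and may differ from the sample's actual model:
+
+```java
+// Sketch only: map an input model to a TableEntity.
+public static TableEntity createTableEntity(WeatherInputModel model) {
+    // Station name becomes the PartitionKey; the formatted observation
+    // date/time becomes the RowKey.
+    TableEntity entity = new TableEntity(
+        model.getStationName(),
+        formatRowKey(model.getObservationDate(), model.getObservationTime()));
+
+    // Remaining measurements become schemaless properties. (Assumed getters.)
+    entity.addProperty("Temperature", model.getTemperature());
+    entity.addProperty("Precipitation", model.getPrecipitation());
+    return entity;
+}
+```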
+
+### Upsert data using a TableEntity object
+
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you'll receive an error. For this reason, it's often preferable to use the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method instead of the `insertEntity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) method updates the existing row. Otherwise, the row is added to the table.
+
+```java
+public void upsertEntity(WeatherInputModel model) {
+ tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
+
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Cosmos DB Tables API is that if an object being loaded to a table contains any new properties then those properties are automatically added to the table and the values stored in Cosmos DB. There's no need to run DDL statements like `ALTER TABLE` to add columns as in a traditional database.
+
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
+
+In the sample application, the `ExpandableWeatherObject` class is built around an internal dictionary to support any set of properties on the object. This class represents a typical pattern for when an object needs to contain an arbitrary set of properties.
+
+```java
+public class ExpandableWeatherObject {
+
+ private String stationName;
+
+ private String observationDate;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
+
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
+
+ public String getObservationDate() {
+ return observationDate;
+ }
+
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
+
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
+
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+
+ public boolean containsProperty(String key) {
+ return this.propertyMap.containsKey(key);
+ }
+
+ public Object getPropertyValue(String key) {
+ return containsProperty(key) ? this.propertyMap.get(key) : null;
+ }
+
+ public void putProperty(String key, Object value) {
+ this.propertyMap.put(key, value);
+ }
+
+ public List<String> getPropertyKeys() {
+ List<String> list = Collections.synchronizedList(new ArrayList<String>());
+ Iterator<String> iterators = this.propertyMap.keySet().iterator();
+ while (iterators.hasNext()) {
+ list.add(iterators.next());
}
- throw s;
+ return Collections.unmodifiableList(list);
}
- return table;
- }
- ```
+ public Integer getPropertyCount() {
+ return this.propertyMap.size();
+ }
+}
+```
+
+To insert or upsert such an object using the Tables API, map the properties of the expandable object into a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object and use the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) or [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/java/api/com.azure.data.tables.tableclient) object as appropriate.
+
+```java
+public void insertExpandableEntity(ExpandableWeatherObject model) {
+ tableClient.createEntity(WeatherDataUtils.createTableEntity(model));
+}
+
+public void upsertExpandableEntity(ExpandableWeatherObject model) {
+ tableClient.upsertEntity(WeatherDataUtils.createTableEntity(model));
+}
+```
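+
+For example, a caller might exercise these methods as in the following sketch; the station, date, and property values are hypothetical:
+
+```java
+// Build an expandable object that includes a property the table hasn't seen before.
+ExpandableWeatherObject reading = new ExpandableWeatherObject();
+reading.setStationName("Chicago");
+reading.setObservationDate("2021-07-01 12:00 AM");
+reading.putProperty("Temperature", 23.5);
+reading.putProperty("WindGust", 34.2); // new property; its column is added automatically
+
+// Upsert so an existing partition key/row key combination is updated in place.
+upsertExpandableEntity(reading);
+```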
-* The following code shows how to insert data into the table:
+### Update an entity
- ```javascript
- private static void batchInsertOfCustomerEntities(CloudTable table) throws StorageException {
-
- // Create the batch operation
- TableBatchOperation batchOperation1 = new TableBatchOperation();
- for (int i = 1; i <= 50; i++) {
- CustomerEntity entity = new CustomerEntity("Smith", String.format("%04d", i));
- entity.setEmail(String.format("smith%04d@contoso.com", i));
- entity.setHomePhoneNumber(String.format("425-555-%04d", i));
- entity.setWorkPhoneNumber(String.format("425-556-%04d", i));
- batchOperation1.insertOrMerge(entity);
- }
-
- // Execute the batch operation
- table.execute(batchOperation1);
- }
- ```
+Entities can be updated by calling the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object. Because an entity (row) stored using the Tables API could contain any arbitrary set of properties, it's often useful to create an update object based around a dictionary object, similar to the `ExpandableWeatherObject` discussed earlier. In this case, the only difference is the addition of an `etag` property, which is used for concurrency control during updates.
-* The following code shows how to query data from the table:
+```java
+public class UpdateWeatherObject {
- ```java
- private static void partitionScan(CloudTable table, String partitionKey) throws StorageException {
-
- // Create the partition scan query
- TableQuery<CustomerEntity> partitionScanQuery = TableQuery.from(CustomerEntity.class).where(
- (TableQuery.generateFilterCondition("PartitionKey", QueryComparisons.EQUAL, partitionKey)));
-
- // Iterate through the results
- for (CustomerEntity entity : table.execute(partitionScanQuery)) {
- System.out.println(String.format("\tCustomer: %s,%s\t%s\t%s\t%s", entity.getPartitionKey(), entity.getRowKey(), entity.getEmail(), entity.getHomePhoneNumber(), entity. getWorkPhoneNumber()));
- }
- }
- ```
+ private String stationName;
-* The following code shows how to delete data from the table:
+ private String observationDate;
- ```java
-
- System.out.print("\nDelete any tables that were created.");
-
- if (table1 != null && table1.deleteIfExists() == true) {
- System.out.println(String.format("\tSuccessfully deleted the table: %s", table1.getName()));
- }
-
- if (table2 != null && table2.deleteIfExists() == true) {
- System.out.println(String.format("\tSuccessfully deleted the table: %s", table2.getName()));
- }
- ```
+ private String etag;
+
+ private Map<String, Object> propertyMap = new HashMap<String, Object>();
+
+ public String getStationName() {
+ return stationName;
+ }
-## Update your connection string
+ public void setStationName(String stationName) {
+ this.stationName = stationName;
+ }
-Now go back to the Azure portal to get your connection string information and copy it into the app. This enables your app to communicate with your hosted database.
+ public String getObservationDate() {
+ return observationDate;
+ }
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+ public void setObservationDate(String observationDate) {
+ this.observationDate = observationDate;
+ }
- :::image type="content" source="./media/create-table-java/cosmos-db-quickstart-connection-string.png" alt-text="View the connection string information in the Connection String pane":::
+ public String getEtag() {
+ return etag;
+ }
-2. Copy the PRIMARY CONNECTION STRING using the copy button on the right.
+ public void setEtag(String etag) {
+ this.etag = etag;
+ }
-3. Open *config.properties* from the *C:\git-samples\storage-table-java-getting-started\src\main\resources* folder.
+ public Map<String, Object> getPropertyMap() {
+ return propertyMap;
+ }
-5. Comment out line one and uncomment line two. The first two lines should now look like this.
+ public void setPropertyMap(Map<String, Object> propertyMap) {
+ this.propertyMap = propertyMap;
+ }
+}
+```
- ```xml
- #StorageConnectionString = UseDevelopmentStorage=true
- StorageConnectionString = DefaultEndpointsProtocol=https;AccountName=[ACCOUNTNAME];AccountKey=[ACCOUNTKEY]
- ```
+In the sample app, this object is passed to the `updateEntity` method in the `TableServiceImpl` class. This method first loads the existing entity from the Tables API using the [getEntity](/java/api/com.azure.data.tables.tableclient.getentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object. It then updates that entity object and uses the `updateEntity` method to save the updates to the database. Note how the [updateEntity](/java/api/com.azure.data.tables.tableclient.updateentity) method takes the current Etag of the object to ensure the object hasn't changed since it was initially loaded. If you want to update the entity regardless, you may pass a value of `etag` to the `updateEntity` method.
-6. Paste your PRIMARY CONNECTION STRING from the portal into the StorageConnectionString value in line 2.
+```java
+public void updateEntity(UpdateWeatherObject model) {
+ TableEntity tableEntity = tableClient.getEntity(model.getStationName(), model.getObservationDate());
+ Map<String, Object> propertiesMap = model.getPropertyMap();
+ propertiesMap.keySet().forEach(key -> tableEntity.getProperties().put(key, propertiesMap.get(key)));
+ tableClient.updateEntity(tableEntity);
+}
+```
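+
+For example, updating a single property on an existing row might look like the following sketch; the station name, observation date, and value are hypothetical:
+
+```java
+// Update one property on the row keyed by station name and observation date.
+UpdateWeatherObject update = new UpdateWeatherObject();
+update.setStationName("Chicago");
+update.setObservationDate("2021-07-01 12:00 AM");
+update.getPropertyMap().put("Temperature", 24.1);
+
+updateEntity(update);
+```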
- > [!IMPORTANT]
- > If your Endpoint uses documents.azure.com, that means you have a preview account, and you need to create a [new Table API account](#create-a-database-account) to work with the generally available Table API SDK.
- >
+### Remove an entity
-7. Save the *config.properties* file.
+To remove an entity from a table, call the [deleteEntity](/java/api/com.azure.data.tables.tableclient.deleteentity) method on the [TableClient](/java/api/com.azure.data.tables.tableclient) object with the partition key and row key of the object.
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+```java
+public void deleteEntity(WeatherInputModel model) {
+ tableClient.deleteEntity(model.getStationName(),
+ WeatherDataUtils.formatRowKey(model.getObservationDate(), model.getObservationTime()));
+}
+```
-## Run the app
+## 7 - Run the code
-1. In the git terminal window, `cd` to the storage-table-java-getting-started folder.
+Run the sample application to interact with the Cosmos DB Tables API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
- ```git
- cd "C:\git-samples\storage-table-java-getting-started"
- ```
-2. In the git terminal window, run the following commands to run the Java application.
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
- ```git
- mvn compile exec:java
- ```
- The console window displays the table data being added to the new table database in Azure Cosmos DB.
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Tables API automatically adds properties (columns) to the table when needed. Use the **Add Custom Field** button to add one or more new properties and demonstrate this capability.
- You can now go back to Data Explorer and see, query, modify, and work with this new data.
-## Review SLAs in the Azure portal
+Use the **Insert Sample Data** button to load some sample data into your Cosmos DB table.
+
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Tables API.
+ ## Clean up resources
+When you're finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following steps.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-java/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-java/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/create-table-jav)] | :::image type="content" source="./media/create-table-java/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-java/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az_group_delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
+```
++ ## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Java app to add table data. Now you can query your data using the Table API.
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Tables API.
> [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
+> [Import table data to the Tables API](table-import.md)
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-setup-ci-cd.md
# Set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. The emulator allows you to develop and test your application locally, without creating an Azure subscription or incurring any costs.
-
-The Azure Cosmos DB Emulator build task for Azure DevOps allows you to do the same in a CI environment. With the build task, you can run tests against the emulator as part of your build and release workflows. The task spins up a Docker container with the emulator already running and provides an endpoint that can be used by the rest of the build definition. You can create and start as many instances of the emulator as you need, each running in a separate container.
-
-This article demonstrates how to set up a CI pipeline in Azure DevOps for an ASP.NET application that uses the Cosmos DB emulator build task to run tests. You can use a similar approach to set up a CI pipeline for a Node.js or a Python application.
-
-## Install the emulator build task
-
-To use the build task, we first need to install it onto our Azure DevOps organization. Find the extension **Azure Cosmos DB Emulator** in the [Marketplace](https://marketplace.visualstudio.com/items?itemName=azure-cosmosdb.emulator-public-preview) and click **Get it free.**
--
-Next, choose the organization in which to install the extension.
- > [!NOTE]
-> To install an extension to an Azure DevOps organization, you must be an account owner or project collection administrator. If you do not have permissions, but you are an account member, you can request extensions instead. [Learn more.](/azure/devops/marketplace/faq-extensions)
-
+> Due to the full removal of Windows 2016 hosted runners on April 1st, 2022, this method of using the Cosmos DB emulator with the build task in Azure DevOps is no longer supported. We're actively working on alternative solutions. Meanwhile, you can follow the instructions below to use the Azure Cosmos DB emulator, which comes preinstalled with the "windows-2019" agent type.
-## Create a build definition
+The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. The emulator allows you to develop and test your application locally, without creating an Azure subscription or incurring any costs.
-Now that the extension is installed, sign in to your Azure DevOps organization and find your project from the projects dashboard. You can add a [build pipeline](/azure/devops/pipelines/get-started-designer?preserve-view=true&tabs=new-nav) to your project or modify an existing build pipeline. If you already have a build pipeline, you can skip ahead to [Add the Emulator build task to a build definition](#addEmulatorBuildTaskToBuildDefinition).
+## PowerShell Task for Emulator
+A typical PowerShell-based task that starts the Cosmos DB emulator can be scripted as follows:
-1. To create a new build definition, navigate to the **Builds** tab in Azure DevOps. Select **+New.** \> **New build pipeline**
+Example of a job configuration, selecting the "windows-2019" agent type.
- :::image type="content" source="./media/tutorial-setup-ci-cd/CreateNewBuildDef_1.png" alt-text="Create a new build pipeline":::
+Example of a task executing the PowerShell script needed to start the emulator.
-2. Select the desired **source**, **Team project**, **Repository**, and the **Default branch for manual and scheduled builds**. After choosing the required options, select **Continue**
- :::image type="content" source="./media/tutorial-setup-ci-cd/CreateNewBuildDef_2.png" alt-text="Select the team project, repository, and branch for the build pipeline":::
-3. Finally, select the desired template for the build pipeline. We'll select the **ASP.NET** template in this tutorial. Now you have a build pipeline that you can set up to use the Azure Cosmos DB Emulator build task.
+```powershell
-> [!NOTE]
-> The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See [Microsoft hosted agents](/azure/devops/pipelines/agents/hosted?tabs=yaml) article for a selection of agent pools; we recommend to start with `Hosted VS2017`.
+# Write your PowerShell commands here.
-Azure Cosmos DB Emulator currently doesn't support the hosted VS2019 agent pool. However, the emulator already comes with VS2019 installed and you use it by starting the emulator with the following PowerShell cmdlets. If you run into any issues when using VS2019, reach out to the [Azure DevOps](https://developercommunity.visualstudio.com/spaces/21/index.html) team for help:
+dir "C:\Program Files\Azure Cosmos DB Emulator\"
-```powershell
Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
-Start-CosmosDbEmulator
-```
-
-## <a name="addEmulatorBuildTaskToBuildDefinition"></a>Add the task to a build pipeline
-
-1. Before adding a task to the build pipeline, you should add an agent job. Navigate to your build pipeline, select the **...** and choose **Add an agent job**.
-1. Next select the **+** symbol next to the agent job to add the emulator build task. Search for **cosmos** in the search box, select **Azure Cosmos DB Emulator** and add it to the agent job. The build task will start up a container with an instance of the Cosmos DB emulator already running on it. The Azure Cosmos DB Emulator task should be placed before any other tasks that expect the emulator to be in running state.
+$startEmulatorCmd = "Start-CosmosDbEmulator -NoFirewall -NoUI"
+Write-Host $startEmulatorCmd
+Invoke-Expression -Command $startEmulatorCmd
- :::image type="content" source="./media/tutorial-setup-ci-cd/addExtension_3.png" alt-text="Add the Emulator build task to the build definition":::
+# Pipe an emulator info object to the output stream
-In this tutorial, you'll add the task to the beginning to ensure the emulator is available before our tests execute.
+$Emulator = Get-Item "$env:ProgramFiles\Azure Cosmos DB Emulator\Microsoft.Azure.Cosmos.Emulator.exe"
+$IPAddress = Get-NetIPAddress -AddressFamily IPV4 -AddressState Preferred -PrefixOrigin Manual | Select-Object IPAddress
-### Add the task using YAML
-
-This step is optional and it's only required if you are setting up the CI/CD pipeline by using a YAML task. In such cases, you can define the YAML task as shown in the following code:
-
-```yml
-- task: azure-cosmosdb.emulator-public-preview.run-cosmosdbemulatorcontainer.CosmosDbEmulator@2
- displayName: 'Run Azure Cosmos DB Emulator'
--- script: yarn test
- displayName: 'Run API tests (Cosmos DB)'
- env:
- HOST: $(CosmosDbEmulator.Endpoint)
- # Hardcoded key for emulator, not a secret
- AUTH_KEY: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
- # The emulator uses a self-signed cert, disable TLS auth errors
- NODE_TLS_REJECT_UNAUTHORIZED: '0'
-```
-
-## Configure tests to use the emulator
-
-Now, we'll configure our tests to use the emulator. The emulator build task exports an environment variable, 'CosmosDbEmulator.Endpoint', that any tasks further in the build pipeline can issue requests against.
-
-In this tutorial, we'll use the [Visual Studio Test task](https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/VsTestV2/README.md) to run unit tests configured via a **.runsettings** file. To learn more about unit test setup, visit the [documentation](/visualstudio/test/configure-unit-tests-by-using-a-dot-runsettings-file?preserve-view=true&view=vs-2017). The complete Todo application code sample that you use in this document is available on [GitHub](https://github.com/Azure-Samples/cosmos-dotnet-core-todo-app)
-
-Below is an example of a **.runsettings** file that defines parameters to be passed into an application's unit tests. Note the `authKey` variable used is the [well-known key](./local-emulator.md#authenticate-requests) for the emulator. This `authKey` is the key expected by the emulator build task and should be defined in your **.runsettings** file.
-
-```csharp
-<RunSettings>
- <TestRunParameters>
- <Parameter name="endpoint" value="https://localhost:8081" />
- <Parameter name="authKey" value="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" />
- <Parameter name="database" value="ToDoListTest" />
- <Parameter name="collection" value="ItemsTest" />
- </TestRunParameters>
-</RunSettings>
-```
-
-If you are setting up a CI/CD pipeline for an application that uses the Azure Cosmos DB's API for MongoDB, the connection string by default includes the port number 10255. However, this port isn't currently open; as an alternative, you should use port 10250 to establish the connection. The Azure Cosmos DB's API for MongoDB connection string remains the same except the supported port number is 10250 instead of 10255.
-
-These parameters `TestRunParameters` are referenced via a `TestContext` property in the application's test project. Here is an example of a test that runs against Cosmos DB.
-
-```csharp
-namespace todo.Tests
-{
- [TestClass]
- public class TodoUnitTests
- {
- public TestContext TestContext { get; set; }
-
- [TestInitialize()]
- public void Initialize()
- {
- string endpoint = TestContext.Properties["endpoint"].ToString();
- string authKey = TestContext.Properties["authKey"].ToString();
- Console.WriteLine("Using endpoint: ", endpoint);
- DocumentDBRepository<Item>.Initialize(endpoint, authKey);
- }
- [TestMethod]
- public async Task TestCreateItemsAsync()
- {
- var item = new Item
- {
- Id = "1",
- Name = "testName",
- Description = "testDescription",
- Completed = false,
- Category = "testCategory"
- };
-
- // Create the item
- await DocumentDBRepository<Item>.CreateItemAsync(item);
- // Query for the item
- var returnedItem = await DocumentDBRepository<Item>.GetItemAsync(item.Id, item.Category);
- // Verify the item returned is correct.
- Assert.AreEqual(item.Id, returnedItem.Id);
- Assert.AreEqual(item.Category, returnedItem.Category);
- }
-
- [TestCleanup()]
- public void Cleanup()
- {
- DocumentDBRepository<Item>.Teardown();
- }
- }
+New-Object PSObject @{
+    Emulator = $Emulator.BaseName
+    Version = $Emulator.VersionInfo.ProductVersion
+    Endpoint = @($(hostname), $IPAddress.IPAddress) | ForEach-Object { "https://${_}:8081/" }
+    MongoDBEndpoint = @($(hostname), $IPAddress.IPAddress) | ForEach-Object { "mongodb://${_}:10255/" }
+    CassandraEndpoint = @($(hostname), $IPAddress.IPAddress) | ForEach-Object { "tcp://${_}:10350/" }
+    GremlinEndpoint = @($(hostname), $IPAddress.IPAddress) | ForEach-Object { "http://${_}:8901/" }
+    TableEndpoint = @($(hostname), $IPAddress.IPAddress) | ForEach-Object { "https://${_}:8902/" }
+    IPAddress = $IPAddress.IPAddress
+}
+```
-Navigate to the Execution Options in the Visual Studio Test task. In the **Settings file** option, specify that the tests are configured using the **.runsettings** file. In the **Override test run parameters** option, add in `-endpoint $(CosmosDbEmulator.Endpoint)`. Doing so will configure the Test task to refer to the endpoint of the emulator build task, instead of the one defined in the **.runsettings** file.
--
-## Run the build
-
-Now, **Save and queue** the build.
--
-Once the build has started, observe the Cosmos DB emulator task has begun pulling down the Docker image with the emulator installed.
--
-After the build completes, observe that your tests pass, all running against the Cosmos DB emulator from the build task!
-
+For agents that don't come with the Azure Cosmos DB emulator preinstalled, you can instead download the latest emulator MSI package from https://aka.ms/cosmosdb-emulator using `curl` or `wget`, and then use [msiexec](/windows-server/administration/windows-commands/msiexec) to install it quietly. After the installation, you can run a PowerShell script similar to the one above to start the emulator.
## Next steps To learn more about using the emulator for local development and testing, see [Use the Azure Cosmos DB Emulator for local development and testing](./local-emulator.md).
-To export emulator TLS/SSL certificates, see [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js](./local-emulator-export-ssl-certificates.md)
+To export emulator TLS/SSL certificates, see [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js](./local-emulator-export-ssl-certificates.md)
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
This article provides an overview of several common use cases for Azure Cosmos DB. The recommendations in this article serve as a starting point as you develop your application with Cosmos DB. >
-> [!VIDEO https://aka.ms/docs.modeling-data]
+> [!VIDEO https://aka.ms/docs.essential-use-cases]
After reading this article, you'll be able to answer the following questions:
JSON, a format supported by Cosmos DB, is an effective format to represent UI la
* To get started with Azure Cosmos DB, follow our [quick starts](create-sql-api-dotnet.md), which walk you through creating an account and getting started with Cosmos DB.
-* If you'd like to read more about customers using Azure Cosmos DB, see the [customer case studies](https://azure.microsoft.com/case-studies/?service=cosmos-db) page.
+* If you'd like to read more about customers using Azure Cosmos DB, see the [customer case studies](https://azure.microsoft.com/case-studies/?service=cosmos-db) page.
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 04/15/2022 Last updated : 04/26/2022
You can download your invoice in the [Azure portal](https://portal.azure.com/) o
## When invoices are generated
-An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP), Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts. However, invoices for EA billing accounts aren't shown in the Azure portal.
+An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP), also called pay-as-you-go, Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts. However, invoices for EA billing accounts aren't shown in the Azure portal.
To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](../manage/view-all-accounts.md).
For more information about your invoice, see [Understand your bill for Microsoft
## Download your MOSP support plan invoice
-An invoice is only generated for a support plan subscription that belongs to an MOSP billing account. [Check your access to an MOSP account](../manage/view-all-accounts.md#check-the-type-of-your-account).
+A PDF invoice is only generated for a support plan subscription that belongs to an MOSP billing account. [Check your access to an MOSP account](../manage/view-all-accounts.md#check-the-type-of-your-account).
You must have an account admin role on the support plan subscription to download its invoice.
To download an invoice:
## Get MOSP subscription invoice in email
-You must have an account admin role on a subscription or a support plan to opt in to receive its invoice by email. Once you've opted-in you can add additional recipients, who receive the invoice by email as well.
+You must have an account admin role on a subscription or a support plan to opt in to receive its PDF invoice by email. When you opt in, you can optionally add additional recipients who will also receive the invoice by email. The following steps apply to subscription and support plan invoices.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Search for **Cost Management + Billing**.
-3. Select **Invoices** from the left-hand side.
-4. Select your Azure subscription or support plan subscription and then select **Receive invoice by email**.
- [![Screenshot that shows the Receive invoice by email option](./media/download-azure-invoice/cmb-email-invoice.png)](./media/download-azure-invoice/cmb-email-invoice-zoomed-in.png#lightbox)
-5. Click **Email invoice** and accept the terms.
- ![Screenshot that shows the opt-in flow step 2](./media/download-azure-invoice/invoicearticlestep02.png)
-6. The invoice is sent to your preferred communication email. Select **Update profile** to update the email.
- ![Screenshot that shows the opt-in flow step 3](./media/download-azure-invoice/invoicearticlestep03-verifyemail.png)
-
-## Share subscription and support plan invoice
-
-You may want to share the invoice for your subscription and support plan every month with your accounting team or send them to one of your other email addresses.
-
-1. Follow the steps in [Get your subscription's and support plan's invoices in email](#get-mosp-subscription-invoice-in-email) and select **Configure recipients**.
- [![Screenshot that shows a user selecting configure recipients](./media/download-azure-invoice/invoice-article-step03.png)](./media/download-azure-invoice/invoice-article-step03-zoomed.png#lightbox)
-1. Enter an email address, and then select **Add recipient**. You can add multiple email addresses.
- ![Screenshot that shows a user adding additional recipients](./media/download-azure-invoice/invoice-article-step04.png)
-1. Once you've added all the email addresses, select **Done** from the bottom of the screen.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Cost Management + Billing**.
+1. Select a billing scope, if needed.
+1. Select **Invoices** on the left side.
+1. At the top of the page, select **Receive invoice by email**.
+ :::image type="content" source="./media/download-azure-invoice/select-receive-invoice-by-email.png" alt-text="Screenshot showing navigation to Receive invoice by email." lightbox="./media/download-azure-invoice/select-receive-invoice-by-email.png" :::
+1. In the Receive invoice by email window, select the subscription where invoices are created.
+1. In the **Status** area, select **Yes** for **Receive email invoices for Azure services**. You can optionally select **Yes** for **Email invoices for Azure marketplace and reservation purchases**.
+1. In the **Preferred email** area, enter the email address where invoices will be sent.
+1. Optionally, in the **Additional recipients** area, enter one or more email addresses.
+ :::image type="content" source="./media/download-azure-invoice/receive-invoice-by-email-page.png" alt-text="Screenshot showing the Receive invoice by email page." lightbox="./media/download-azure-invoice/receive-invoice-by-email-page.png" :::
+1. Select **Save**.
## Invoices for MCA and MPA billing accounts
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 03/22/2022 Last updated : 04/27/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
Settings specific to Azure SQL Database are available in the **Source Options**
:::image type="content" source="media/data-flow/isolationlevel.png" alt-text="Isolation Level":::
-**Enable incremental extract (preview)**: If your table has a timestamp column, you can enable incremental extract. ADF will prompt you to choose a timestamp field that will be used to query for changed rows from the last time the pipeline ran. ADF will handle storing the watermark and querying changed rows for you. This feature is currently in public preview.
- ### Sink transformation Settings specific to Azure SQL Database are available in the **Settings** tab of the sink transformation.
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md
CleanData
split( year < 1960, year > 1980,
- disjoint: true
+ disjoint: false
) ~> SplitByYear@(moviesBefore1960, moviesAfter1980, AllOtherMovies) ```
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-create.md
-+ Last updated 07/05/2021
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
This page explains how to use the built-in vulnerability scanner to scan the con
When **Defender for Containers** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
-When the scanner reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+When the scanner, powered by Qualys, reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
> [!TIP] > You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-container-registries-cicd.md).
To enable vulnerability scans of images stored in your Azure Resource Manager-ba
## View and remediate findings
-1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
![Recommendation to remediate issues .](media/monitor-container-security/acr-finding.png)
To enable vulnerability scans of images stored in your Azure Resource Manager-ba
1. Push the updated image to trigger a scan.
- 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
You can use any of the following criteria:
To create a rule:
-1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
1. Select the relevant scope. 1. Define your criteria. 1. Select **Apply rule**.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table describes what's included in each plan at a high level.
| Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Log-analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Security Policy & Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| Vulnerability Assessment using Qualys | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | Adaptive application controls | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md)
defender-for-iot Resources Agent Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
If the agent stops communicating or fails to send security messages, a **Device
## Can I create my own alerts?
-Yes, you can create custom alerts based on multiple parameters including IP/MAC address, protocol type, class, service, function, command, etc. as well as values of custom tags contained in the payloads. See [Create custom alerts](quickstart-create-custom-alerts.md) to learn more about custom alerts and how to create them.
+Yes, you can create custom alerts based on multiple parameters including IP/MAC address, protocol type, class, service, function, command, and so on, as well as values of custom tags contained in the payloads. See [Create custom alerts](quickstart-create-custom-alerts.md) to learn more about custom alerts and how to create them.
## Next steps
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
+
+ Title: Dell Edge 5200 (SMB) - Microsoft Defender for IoT
+description: Learn about the Dell Edge 5200 appliance for OT monitoring with Microsoft Defender for IoT.
Last updated : 04/24/2022+++
+# Dell Edge 5200
+
+This article describes the Dell Edge 5200 appliance for OT sensors.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | SMB|
+|**Performance** | Max bandwidth: 60 Mbps<br>Max devices: 1,000 |
+|**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 |
+|**Status** | Supported, Not available pre-configured|
+
+## Specifications
+
+|Component | Technical specifications|
+|:-|:-|
+|Chassis| Desktop / Wall mount server|
+|Dimensions| 211 mm (W) x 240 mm (D) x 86 mm (H)|
+|Weight| 4.7 kg|
+|Processor| Intel® Core™ i7-9700TE|
+|Chipset|Intel C246|
+|Memory |32 GB = Two 16 GB DDR4 ECC UDIMM|
+|Storage| 1x 512GB SSD |
+|Network controller|3x Intel GbE: 2x i210 + i219LM PHY|
+|Management|Intel AMT supported on i5 and i7 CPUs|
+|Device access| 6x USB 3.0|
+|Power| DC Input 12–24 V (±10% tolerance) <br>AC Input Optional: 180 W / 220 W, 60 W (for PoE) external AC/DC adapter|
+
+## Appliance BOM
+
+|Quantity|PN|Description|
+|:-|:-|:-|
+|1|210-BCNV|Dell EMC Edge Gateway 5200,Core i7-9700TE.32G.512G, Win 10 IoT.TPM,OEM|
+|1|631-ADIJ|User Documentation EMEA 2|
+|1|683-1187|No Installation Service Selected (Contact Sales Rep for more details)|
+|1|709-BDGW|Parts Only Warranty 15 Months|
+|1|199-BIRV|ProSupport and Next Business Day Onsite Service Initial, 15 Month(s)|
+|1|199-BIRW|ProSupport and Next Business Day Onsite Service Extension, 21 Month(s)|
+|1|990-10090|EX-Works|
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
+
+ Title: Dell PowerEdge R340 XL for OT monitoring (legacy) - Microsoft Defender for IoT
+description: Learn about the Dell PowerEdge R340 XL appliance in its legacy configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 04/24/2022+++
+# Dell PowerEdge R340 XL
+
+This article describes the Dell PowerEdge R340 XL appliance, supported for OT sensors and on-premises management consoles.
+
+Legacy appliances are certified but aren't currently offered as pre-configured appliances.
++
+|Appliance characteristic | Description|
+|||
+|**Hardware profile** | Enterprise|
+|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 |
+|**Physical Specifications** | Mounting: 1U<br>Ports: 8x RJ45 or 6x SFP (OPT)|
+|**Status** | Supported, not available as a preconfigured appliance|
+
+The following image shows a view of the Dell PowerEdge R340 front panel:
++
+In this image, numbers refer to the following components:
+
+ 1. Left control panel
+ 1. Optical drive (optional)
+ 1. Right control panel
+ 1. Information tag
+ 1. Drives
+
+The following image shows a view of the Dell PowerEdge R340 back panel:
++
+In this image, numbers refer to the following components:
+
+1. Serial port
+1. NIC port (Gb 1)
+1. NIC port (Gb 1)
+1. Half-height PCIe
+1. Full-height PCIe expansion card slot
+1. Power supply unit 1
+1. Power supply unit 2
+1. System identification button
+1. System status indicator cable port (CMA)
+1. USB 3.0 port (2)
+1. iDRAC9 dedicated network port
+1. VGA port
+
+## Specifications
+
+|Component| Technical specifications|
+|:-|:-|
+|Chassis| 1U rack server|
+|Dimensions| 42.8 x 434.0 x 596 mm / 1.67" x 17.09" x 23.5" in|
+|Weight| Max 29.98 lb/13.6 Kg|
+|Processor| Intel Xeon E-2144G 3.6 GHz<br>8 MB cache<br>4C/8T<br>Turbo, 71 W|
+|Chipset|Intel C246|
+|Memory |32 GB = Two 16 GB 2666MT/s DDR4 ECC UDIMM|
+|Storage| Three 2 TB 7.2 K RPM SATA 6 Gbps 512n 3.5in Hot-plug Hard Drive - RAID 5|
+|Network controller|On-board: Two 1 Gb Broadcom BCM5720 <br>On-board LOM: iDRAC Port Card 1 Gb Broadcom BCM5720 <br>External: One Intel Ethernet i350 QP 1 Gb Server Adapter Low Profile|
+|Management|iDRAC9 Enterprise|
+|Device access| Two rear USB 3.0<br>One front USB 3.0|
+|Power| Dual Hot Plug Power Supplies 350 W|
+|Rack support| ReadyRails™ II sliding rails for tool-less mounting in 4-post racks with square or unthreaded round holes, or tooled mounting in 4-post threaded-hole racks, with support for an optional tool-less cable management arm.|
+
+## Dell PowerEdge R340 XL installation
+
+This section describes how to install Defender for IoT software on the Dell PowerEdge R340 XL appliance.
+
+Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Prerequisites
+
+To install the Dell PowerEdge R340 XL appliance, you need:
+
+- An Enterprise license for Dell Remote Access Controller (iDRAC)
+
+- A BIOS configuration XML
+
+- One of the following server firmware versions:
+
+ - BIOS version 2.1.6
+ - iDRAC version 3.23.23.23
+
+### Configure the Dell BIOS
+
+The Dell appliance is managed by an integrated iDRAC with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
+
+To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
+
+When the connection is established, the BIOS is configurable.
+
+**To configure the iDRAC IP address**:
+
+1. Power up the sensor.
+
+1. If the OS is already installed, press **F2** to enter the BIOS configuration.
+
+1. Select **iDRAC Settings**.
+
+1. Select **Network**.
+
+ > [!NOTE]
+ > During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you can change these definitions.
+
+1. Change the static IPv4 address to **10.100.100.250**.
+
+1. Change the static subnet mask to **255.255.255.0**.
+
+ :::image type="content" source="../media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot that shows the static subnet mask.":::
+
+1. Select **Back** > **Finish**.
+
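+If you manage the appliance through iDRAC from a remote workstation, you may also be able to set the same network values from a command line with Dell's `racadm` utility instead of the BIOS screens. The following is a minimal sketch only; it assumes `racadm` is installed, and the property names shown are assumptions based on common iDRAC versions, so verify them against your iDRAC documentation before use.
+
+```powershell
+# Sketch only: set the iDRAC static addressing used during installation.
+# Property names are assumptions; confirm them for your iDRAC version.
+racadm set iDRAC.IPv4.DHCPEnable 0            # disable DHCP
+racadm set iDRAC.IPv4.Address 10.100.100.250  # static IP from the procedure above
+racadm set iDRAC.IPv4.Netmask 255.255.255.0   # subnet mask from the procedure above
+```
+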
+**To configure the Dell BIOS**:
+
+This procedure describes how to update the Dell PowerEdge R340 XL configuration for your OT deployment.
+
+Configure the appliance BIOS only if you didn't purchase your appliance from Arrow, or if you have an appliance but don't have access to the XML configuration file.
+
+1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
+
+ - If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
+
+ - If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
+
+1. After you access the BIOS, go to **Device Settings**.
+
+1. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H330 Adapter\> Configuration Utility**.
+
+1. Select **Configuration Management**.
+
+1. Select **Create Virtual Disk**.
+
+1. In the **Select RAID Level** field, select **RAID5**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
+
+1. Select **Check All**, and then select **Apply Changes**.
+
+1. Select **OK**.
+
+1. Scroll down and select **Create Virtual Disk**.
+
+1. Select the **Confirm** check box and select **Yes**.
+
+1. Select **OK**.
+
+1. Return to the main screen and select **System BIOS**.
+
+1. Select **Boot Settings**.
+
+1. For the **Boot Mode** option, select **BIOS**.
+
+1. Select **Back**, and then select **Finish** to exit the BIOS settings.
+
+### Install Defender for IoT software on the Dell R340
+
+This procedure describes how to install Defender for IoT software on the Dell R340.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install the software**:
+
+1. Verify that the version media is mounted to the appliance in one of the following ways:
+
+ - Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+ - Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
+
+1. In the **Map CD/DVD** section, select **Choose File**.
+
+1. Choose the ISO image file for this version from the dialog box that opens.
+
+1. Select the **Map Device** button.
+
+ :::image type="content" source="../media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot that shows a mapped device.":::
+
+1. The media is mounted. Select **Close**.
+
+1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Console Control** button. Then, under **Keyboard Macros**, select the **Apply** button to start the Ctrl+Alt+Delete sequence.
+
+1. Continue by installing OT sensor or on-premises management software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
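+If you're installing through iDRAC, you may also be able to mount the ISO as virtual media from a command line with `racadm`, rather than through the browser console. A sketch only, assuming a CIFS share; the share path and credentials below are placeholders, and the exact flags can vary by iDRAC version.
+
+```powershell
+# Sketch only: connect a remote ISO as iDRAC virtual media.
+# //192.168.1.10/share/sensor.iso and the credentials are placeholders.
+racadm remoteimage -c -u shareuser -p sharepassword -l //192.168.1.10/share/sensor.iso
+racadm remoteimage -s   # check that the image is attached
+```
+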
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
+
+ Title: HPE Edgeline EL300 (SMB) - Microsoft Defender for IoT
+description: Learn about the HPE Edgeline EL300 appliance for IoT in SMB rugged deployments.
Last updated : 04/24/2022+++
+# HPE Edgeline EL300
+
+This article describes the HPE Edgeline EL300 appliance for OT sensors or on-premises management consoles.
+
+Legacy appliances are certified but aren't currently offered as pre-configured appliances.
++
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | SMB|
+|**Performance** | Max bandwidth: 100 Mbps<br>Max devices: 800 |
+|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
+|**Status** | Supported, Not available pre-configured|
+
+The following image shows a view of the back panel of the HPE Edgeline EL300.
++
+## Specifications
++
+|Component|Technical specifications|
+|:-|:-|
+|Construction|Aluminum, fanless and dust-proof design|
+|Dimensions (height x width x depth)|200.5 mm (7.9") x 232 mm (9.14") x 100 mm (3.9")|
+|Weight|4.91 kg (10.83 lb)|
+|CPU|Intel Core i7-8650U (1.9GHz/4-core/15W)|
+|Chipset|Intel® Q170 Platform Controller Hub|
+|Memory|8 GB DDR4 2133 MHz Wide Temperature SODIMM|
+|Storage|128 GB 3ME3 Wide Temperature mSATA SSD|
+|Network controller|6x Gigabit Ethernet ports by Intel® I219|
+|Device access|USB: two front, two rear, one internal|
+|Power Adapter|250V/10A|
+|Mounting|Mounting kit, Din Rail|
+|Operating Temperature|0°C to +70°C|
+|Humidity|10% to 90%, non-condensing|
+|Vibration|0.3 Grms, 10 Hz to 300 Hz, 15 minutes per axis - DIN rail|
+|Shock|10 G, 10 ms, half-sine, three per axis (both positive and negative pulses) - DIN rail|
+
+### Appliance BOM
+
+|Product|Description|
+|:-|:-|
+|P25828-B21|HPE Edgeline EL300 v2 Converged Edge System|
+|P25828-B21 B19|HPE EL300 v2 Converged Edge System|
+|P25833-B21|Intel Core i7-8650U (1.9GHz/4-core/15W) FIO Basic Processor Kit for HPE Edgeline EL300|
+|P09176-B21|HPE Edgeline 8 GB (1x8 GB) Dual Rank x8 DDR4-2666 SODIMM WT CAS-19-19-19 Registered Memory FIO Kit|
+|P09188-B21|HPE Edgeline 256-GB SATA 6G Read Intensive M.2 2242 3 year warranty wide temperature SSD|
+|P04054-B21|HPE Edgeline EL300 SFF to M.2 Enablement Kit|
+|P08120-B21|HPE Edgeline EL300 12VDC FIO Transfer Board|
+|P08641-B21|HPE Edgeline EL300 80W 12VDC Power Supply|
+|AF564A|HPE C13 - SI-32 IL 250 V 10 Amp 1.83 m Power Cord|
+|P25835-B21|HPE EL300 v2 FIO Carrier Board|
+|R1P49AAE|HPE EL300 iSM Adv 3 yr 24x7 Sup_Upd E-LTU|
+|P08018-B21 optional|HPE Edgeline EL300 Low Profile Bracket Kit|
+|P08019-B21 optional|HPE Edgeline EL300 DIN Rail Mount Kit|
+|P08020-B21 optional|HPE Edgeline EL300 Wall Mount Kit|
+|P03456-B21 optional|HPE Edgeline 1-GbE 4-port TSN FIO Daughter Card|
+
+## HPE EdgeLine 300 installation
+
+This section describes how to install Defender for IoT software on the HPE EdgeLine 300 appliance.
+
+Installation includes:
+
+- Enabling remote access
+- Configuring BIOS settings
+- Installing Defender for IoT software
+
+A default administrative user is provided. We recommend that you change the password during the network configuration.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
++
+### Enable remote access
+
+1. Enter the iSM IP Address into your web browser.
+
+1. Sign in using the default username and password found on your appliance.
+
+1. Navigate to **Wired and Wireless Network** > **IPV4**.
+
+ :::image type="content" source="../media/tutorial-install-components/wired-and-wireless.png" alt-text="Screenshot of the Wired and Wireless Network screen.":::
+
+1. Toggle off the **DHCP** option.
+
+1. Configure the IPv4 addresses as follows:
+ - **IPV4 Address**: `192.168.1.125`
+ - **IPV4 Subnet Mask**: `255.255.255.0`
+ - **IPV4 Gateway**: `192.168.1.1`
+
+1. Select **Apply**.
+
+1. Sign out and reboot the appliance.
+
+### Configure the BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the BIOS**:
+
+1. Turn on the appliance, and push **F9** to enter the BIOS.
+
+1. Select **Advanced**, and scroll down to **CSM Support**.
+
+ :::image type="content" source="../media/tutorial-install-components/csm-support.png" alt-text="Screenshot showing the CSM Support menu.":::
+
+1. Push **Enter** to enable CSM Support.
+
+1. Go to **Storage**, and press **+/-** to change it to **Legacy**.
+
+1. Go to **Video**, and press **+/-** to change it to **Legacy**.
+
+ :::image type="content" source="../media/tutorial-install-components/storage-and-video.png" alt-text="Screenshot showing the Storage and Video options":::
+
+1. Go to **Boot** > **Boot mode select**.
+
+1. Press **+/-** to change it to **Legacy**.
+
+ :::image type="content" source="../media/tutorial-install-components/boot-mode.png" alt-text="Screenshot of the Boot mode.":::
+
+1. Go to **Save & Exit**.
+
+1. Select **Save Changes and Exit**.
+
+ :::image type="content" source="../media/tutorial-install-components/save-and-exit.png" alt-text="Screenshot of the Save Changes and Exit option.":::
+
+1. Select **Yes**, and the appliance will reboot.
+
+1. Press **F11** to enter the **Boot Menu**.
+
+1. Select the device with the sensor image, either **DVD** or **USB**.
+
+1. Continue by installing your Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
+
+ Title: HPE ProLiant DL20/DL20 Plus for OT monitoring in enterprise deployments- Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 04/24/2022++++
+# HPE ProLiant DL20/DL20 Plus
+
+This article describes the **HPE ProLiant DL20** or **HPE ProLiant DL20 Plus** appliance for OT sensors in an enterprise deployment.
+
+The HPE ProLiant DL20 Plus is also available for the on-premises management console.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | Enterprise|
+|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
+|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
+|**Status** | Supported, Available preconfigured |
+
+The following image shows a sample of the HPE ProLiant DL20 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 back panel:
++
+### Specifications
+
+|Component |Specifications|
+|||
+|Chassis |1U rack server |
+|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in |
+|Weight | Max 7.9 kg / 17.41 lb |
+
+**DL20 BOM**
+
+| Quantity | PN| Description: high end |
+|--|--|--|
+|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
+|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
+|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
+|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
+|1| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit |
+|1| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter |
+|1| 782961-B21 | HPE 12-W Smart Storage Battery |
+|1| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller |
+|2| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |
+|1| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |
+|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit |
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit |
+
+**DL20 Plus BOM**:
+
+|Quantity|PN|Description|
+|-||-|
+|1| P44111-B21| HPE DL20 Gen10+ 4SFF CTO Server|
+|1| P45252-B21| Intel Xeon E-2334 FIO CPU for HPE|
+|1| 869081-B21| HPE Smart Array P408i-a SR G10 LH Controller|
+|1| 782961-B21| HPE 12W Smart Storage Battery|
+|1| P45948-B21| HPE DL20 Gen10+ RPS FIO Enable Kit|
+|2| 865408-B21| HPE 500W FS Plat Hot Plug LH Power Supply Kit|
+|1| 775612-B21| HPE 1U Short Friction Rail Kit|
+|1| 512485-B21| HPE iLO Adv 1 Server License 1 year support|
+|1| P46114-B21| HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
+|1| P21106-B21| INT I350 1GbE 4p BASE-T Adapter|
+|3| P28610-B21| HPE 1TB SATA 7.2K SFF BC HDD|
+|2| P43019-B21| HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+
+## Port expansion
+
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|-- | --| |
+| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
+| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
+| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL20 / HPE ProLiant DL20 Plus installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable, and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
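+As an alternative to the BIOS screens, iLO 5 also exposes its network settings through the Redfish REST API. The following is a minimal sketch only, assuming the standard iLO 5 Redfish schema; the resource path, property names, and IP addresses shown are placeholders that can differ by firmware version, so verify them against your iLO's Redfish documentation first. (`-SkipCertificateCheck` requires PowerShell 7 or later.)
+
+```powershell
+# Sketch only: set a static iLO IPv4 address via Redfish.
+# The URI path, addresses, and schema below are assumptions.
+$ilo  = "https://<ilo-ip>"
+$cred = Get-Credential    # iLO administrator account
+$body = @{
+    DHCPv4 = @{ DHCPEnabled = $false }
+    IPv4StaticAddresses = @(
+        @{ Address = "10.100.100.20"; SubnetMask = "255.255.255.0"; Gateway = "10.100.100.1" }
+    )
+} | ConvertTo-Json -Depth 4
+
+Invoke-RestMethod -Method Patch -Uri "$ilo/redfish/v1/Managers/1/EthernetInterfaces/1" `
+    -Credential $cred -ContentType "application/json" -Body $body -SkipCertificateCheck
+```
+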
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
+
+### Install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue with the generic procedure for installing Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
+
+ Title: HPE ProLiant DL20/DL20 Plus for OT monitoring in SMB deployments- Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated : 04/24/2022+++
+# HPE ProLiant DL20/DL20 Plus for SMB deployments
+
+This article describes the **HPE ProLiant DL20** or **HPE ProLiant DL20 Plus** appliance for OT sensors in an SMB deployment.
+
+The HPE ProLiant DL20 Plus is also available for the on-premises management console.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | SMB|
+|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
+|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
+|**Status** | Supported, Available preconfigured |
+
+The following image shows a sample of the HPE ProLiant DL20 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 back panel:
++
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Chassis|1U rack server|
+|Dimensions |4.32 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in|
+|Weight|7.88 kg / 17.37 lb|
+|Processor| Intel Xeon E-2224 <br> 3.4 GHz 4C 71 W|
+|Chipset|Intel C242|
+|Memory|One 8-GB Dual Rank x8 DDR4-2666|
+|Storage|Two 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) - RAID 1 with Smart Array P208i-a|
+|Network controller|On-board: Two 1 Gb<br>On-board: iLO Port Card 1 Gb<br>External: One HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|Management|HPE iLO Advanced|
+|Device access|Front: One USB 3.0 and one USB iLO Service Port<br>Rear: Two USB 3.0<br>Internal: One USB 3.0|
+|Power|Hot Plug Power Supply 290 W|
+|Rack support|HPE 1U Short Friction Rail Kit|
+
+## Appliance BOM
+
+|PN|Description|Quantity|
+|:-|:-|:-|
+|P06961-B21|HPE DL20 Gen10 NHP 2LFF CTO Server|1|
+|P17102-L21|HPE DL20 Gen10 E-2224 FIO Kit|1|
+|879505-B21|HPE 8-GB 1Rx8 PC4-2666V-E Standard Kit|1|
+|801882-B21|HPE 1-TB SATA 7.2 K LFF RW HDD|2|
+|P06667-B21|HPE DL20 Gen10 x8x16 FLOM Riser Kit|1|
+|665240-B21|HPE Ethernet 1-Gb 4-port 366FLR Adapter|1|
+|869079-B21|HPE Smart Array E208i-a SR G10 LH Controller|1|
+|P21649-B21|HPE DL20 Gen10 Plat 290 W FIO PSU Kit|1|
+|P06683-B21|HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit|1|
+|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1|
+|775612-B21|HPE 1U Short Friction Rail Kit|1|
+
+## HPE ProLiant DL20/HPE ProLiant DL20 Plus installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
++
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Logical Drive Label** form, enter **Logical Drive 1**.
+
+1. Select **Submit Changes**.
+
+1. In the **Submit** form, select **Back to Main Menu**.
+
+1. Select **F10: Save** and then press **Esc** twice.
+
+1. In the **System Utilities** window, select **One-Time Boot Menu**.
+
+1. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
+
+1. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window.":::
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
+
+### Install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 or HPE ProLiant DL20 Plus.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue by installing your Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
+
+ Title: HPE ProLiant DL360 OT monitoring - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL360 appliance when used for OT monitoring with Microsoft Defender for IoT.
Last updated : 04/24/2022+++
+# HPE ProLiant DL360
+
+This article describes the **HPE ProLiant DL360** appliance for OT sensors.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | Corporate |
+|**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
+|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)|
+|**Status** | Supported, Available preconfigured|
++
+The following image shows a view of the HPE ProLiant DL360 front panel:
++
+The following image shows a view of the HPE ProLiant DL360 back panel:
++
+## Specifications
+
+|Component |Specifications|
+|||
+|Chassis |1U rack server |
+|Dimensions |4.29 x 43.46 x 70.7 cm / 1.69" x 17.11" x 27.83" in |
+|Weight | Max 16.27 kg / 35.86 lb |
+|Processor | Intel Xeon Silver 4215 R 3.2 GHz 11M cache 8c/16T 130 W|
+|Chipset | Intel C621|
+|Memory | 32 GB = Two 16-GB 2666MT/s DDR4 ECC UDIMM|
+|Storage| Six 1.2-TB SAS 12G Enterprise 10K SFF (2.5 in) in hot-plug hard drive - RAID 5|
+|Network controller| On-board: Two 1 Gb <br> On-board: iLO Port Card 1 Gb <br>External: One HPE Ethernet 1-Gb 4-port 366FLR adapter|
+|Management |HPE iLO Advanced |
+|Device access | Two rear USB 3.0<br>One front USB 2.0<br>One internal USB 3.0 |
+|Power |Two HPE 500-W flex slot platinum hot plug low halogen power supply kit|
+|Rack support | HPE 1U Gen10 SFF easy install rail kit |
+
+### Port expansion
+
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|-- | --| |
+| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
+| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
+| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL360 installation
+
+This section describes how to install OT sensor software on the HPE ProLiant DL360 appliance, and includes adjusting the appliance's BIOS configuration.
+
+During this procedure, you'll configure the iLO port. We recommend that you also change the default password provided for the administrative user.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT sensor deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. In the **Create Array** form, select all the options.
++
+### Install iLO remotely from a virtual drive
+
+This procedure describes how to install iLO software remotely from a virtual drive.
+
+**To install iLO software**:
+
+1. Sign in to the iLO console, and then right-click the server's screen.
+
+1. Select **HTML5 Console**.
+
+1. In the console, select the CD icon, and choose the CD/DVD option.
+
+1. Select **Local ISO file**.
+
+1. In the dialog box, choose the relevant ISO file.
+
+1. Go to the left icon, select **Power**, and then select **Reset**.
+
+1. The appliance restarts and runs the sensor installation process.
+
+### Install OT sensor software on the HPE DL360
+
+This procedure describes how to install OT sensor software on the HPE DL360.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install OT sensor software**:
+
+1. Connect a screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+1. Continue with the generic procedure for installing OT sensor software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
+
+ Title: Neousys Nuvo-5006LP (SMB) - Microsoft Defender for IoT
+description: Learn about the Neousys Nuvo-5006LP appliance for OT monitoring with Microsoft Defender for IoT.
Last updated : 04/24/2022+++
+# Neousys Nuvo-5006LP
+
+This article describes the Neousys Nuvo-5006LP appliance for OT sensors.
+
+Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | Office |
+|**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 |
+|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
+|**Status** | Supported, Not available pre-configured|
++
+The following image shows a view of the Nuvo 5006LP front panel:
++
+In this image, numbers indicate the following components:
+
+1. Power button, Power indicator
+1. DVI video connectors
+1. HDMI video connectors
+1. VGA video connectors
+1. Remote on/off Control, and status LED output
+1. Reset button
+1. Management network adapter
+1. Ports to receive mirrored data
+
+The following image shows a view of the Nuvo 5006LP back panel:
++
+In this image, numbers indicate the following components:
+
+1. SIM card slot
+1. Microphone and speakers
+1. COM ports
+1. USB connectors
+1. DC power port (DC IN)
+
+## Specifications
+
+|Component|Technical Specifications|
+|:-|:-|
+|Construction|Aluminum, fanless and dust-proof design|
+|Dimensions|240 mm (W) x 225 mm (D) x 77 mm (H)|
+|Weight|3.1 kg (including CPU, memory, and HDD)|
+|CPU|Intel Core i5-6500TE (6M Cache, up to 3.30 GHz) S1151|
+|Chipset|Intel® Q170 Platform Controller Hub|
+|Memory|8 GB DDR4 2133 MHz Wide Temperature SODIMM|
+|Storage|128 GB 3ME3 Wide Temperature mSATA SSD|
+|Network controller|Six Gigabit Ethernet ports by Intel® I219|
+|Device access|USB: two front, two rear, one internal|
+|Power Adapter|120/240VAC-20VDC/6A|
+|Mounting|Mounting kit, Din Rail|
+|Operating Temperature|-25°C to 70°C|
+|Storage Temperature|-40°C to 85°C|
+|Humidity|10% to 90%, non-condensing|
+|Vibration|Operating, 5 Grms, 5-500 Hz, 3 axes <br>(w/ SSD, according to IEC60068-2-64)|
+|Shock|Operating, 50 Grms, Half-sine 11 ms Duration <br>(w/ SSD, according to IEC60068-2-27)|
+|EMC|CE/FCC Class A, according to EN 55022, EN 55024 & EN 55032|
++
+## Nuvo 5006LP sensor installation
+
+This section describes how to install OT sensor software on the Nuvo 5006LP appliance. Before you install the OT sensor software, you must adjust the appliance's BIOS configuration.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Prerequisites
+
+Before you start installing OT sensor software, or updating the BIOS configuration, make sure that the operating system is installed on the appliance.
+
+### Configure the Nuvo 5006LP BIOS
+
+This procedure describes how to update the Nuvo 5006LP BIOS configuration for your OT sensor deployment.
+
+**To configure the Nuvo 5006LP BIOS**:
+
+1. Power on the appliance.
+
+1. Press **F2** to enter the BIOS configuration.
+
+1. Go to **Power** and change the **Power On after Power Failure** setting to **S0-Power On**.
+
+ :::image type="content" source="../media/tutorial-install-components/nuvo-power-on.png" alt-text="Screenshot of setting your Nuvo 5006 to power on after a power failure.":::
+
+1. Go to **Boot** and ensure that **PXE Boot to LAN** is set to **Disabled**.
+
+1. Press **F10** to save, and then select **Exit**.
+
+### Install OT sensor software on the Nuvo 5006LP
+
+This procedure describes how to install OT sensor software on the Nuvo 5006LP. The installation takes approximately 20 minutes. After the installation is complete, the system restarts several times.
+
+**To install OT sensor software**:
+
+1. Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+1. Boot the appliance.
+
+1. Select **English**.
+
+1. Select **XSENSE-RELEASE-\<version> Office...**.
+
+1. Define the appliance architecture, and network properties as follows:
+
+ | Parameter | Configuration |
+ | -| - |
+ | **Hardware profile** | Select **office**. |
+ | **Management interface** | **eth0** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
+ | **DNS** | **IP address provided by the customer** |
+ | **Default gateway IP address** | **0.0.0.0** |
+ | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
+ | **Bridge interface** | - |
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Screenshot of the Nuvo's architecture and network properties.":::
++
+1. Accept the settings and continue by entering `Y`.
+
+After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
+
+ Title: On-premises management console (Microsoft Hyper-V) - Microsoft Defender for IoT
+description: Learn about deploying a Microsoft Defender for IoT on-premises management console as a virtual appliance using Microsoft Hyper-V.
Last updated : 04/24/2022+++
+# On-premises management console (Microsoft Hyper-V hypervisor)
+
+This article describes an on-premises management console deployment on a virtual appliance using the Microsoft Hyper-V hypervisor (Windows 10 Pro or Enterprise).
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Physical specifications** | Virtual Machine |
+|**Status** | Supported |
+
+## Prerequisites
+
+Before you begin the installation, make sure you have the following items:
+
+- Microsoft Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
+
+- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal)
+
+Make sure the hypervisor is running.
+
+## Create the virtual machine
+
+This procedure describes how to create a virtual machine for your on-premises management console using Microsoft Hyper-V.
+
+**To create a virtual machine**:
+
+1. Create a virtual disk in Hyper-V Manager.
+
+1. Select the format **VHDX** > **Next**.
+
+1. Select the type **Dynamic expanding** > **Next**.
+
+1. Enter the name and location for the VHD and then select **Next**.
+
+1. Enter the [required size for your organization's needs](../ot-appliance-sizing.md), and then select **Next**.
+
+1. Review the summary and select **Finish**.
+
+1. On the **Actions** menu, create a new virtual machine and select **Next**.
+
+1. Enter a name for the virtual machine and select **Next**.
+
+1. Select **Generation** and set it to **Generation 1**, and then select **Next**.
+
+1. Specify the [memory allocation for your organization's needs](../ot-appliance-sizing.md), and then select **Next**.
+
+1. Configure the network adapter according to your server network topology and then select **Next**.
+
+1. Connect the VHDX created previously to the virtual machine, and then select **Next**.
+
+1. Review the summary and select **Finish**.
+
+1. Right-click the new virtual machine, and then select **Settings**.
+
+1. Select **Add Hardware** and add a new adapter for **Network Adapter**.
+
+1. For **Virtual Switch**, select the switch that will connect to the sensor management network.
+
+1. Allocate [CPU resources for your organization's needs](../ot-appliance-sizing.md), and then select **Next**.
+
+1. Connect the management console's ISO image to a virtual DVD drive and start the virtual machine.
+
+1. On the **Actions** menu, select **Connect** to continue the software installation.
+
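+The same steps can be scripted with the built-in Hyper-V PowerShell module, as in the following sketch. All names, paths, and sizes here are placeholders (assumptions); size the disk, memory, and CPU count according to [your organization's needs](../ot-appliance-sizing.md).
+
+```powershell
+# Sketch only: create the on-premises management console VM with Hyper-V cmdlets.
+# VM name, paths, switch names, and sizes are placeholders.
+New-VHD -Path "C:\VHDs\management-console.vhdx" -SizeBytes 500GB -Dynamic
+
+New-VM -Name "DefenderIoT-MgmtConsole" -Generation 1 -MemoryStartupBytes 32GB `
+    -VHDPath "C:\VHDs\management-console.vhdx" -SwitchName "ManagementSwitch"
+
+Set-VMProcessor -VMName "DefenderIoT-MgmtConsole" -Count 8
+
+# Second adapter, connected to the sensor management network
+Add-VMNetworkAdapter -VMName "DefenderIoT-MgmtConsole" -SwitchName "SensorMgmtSwitch"
+
+# Attach the installation ISO to a DVD drive and start the VM
+Add-VMDvdDrive -VMName "DefenderIoT-MgmtConsole" -Path "C:\ISOs\management-console.iso"
+Start-VM -Name "DefenderIoT-MgmtConsole"
+```
+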
+## Software installation
+
+1. To start installing the on-premises management console software, open the virtual machine console.
+
+ The VM will start from the ISO image, and the language selection screen will appear.
+
+1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-on-premises-management-console-software).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) and [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Virtual Management Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-vmware.md
+
+ Title: On-premises management console (VMware ESXi) - Microsoft Defender for IoT
+description: Learn about deploying a Microsoft Defender for IoT on-premises management console as a virtual appliance using VMware ESXi.
Last updated : 04/24/2022+++
+# On-premises management console (VMware ESXi)
+
+This article describes an on-premises management console deployment on a virtual appliance using VMware ESXi.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Physical specifications** | Virtual Machine |
+|**Status** | Supported |
+
+## Prerequisites
+
+The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
+
+- VMware (ESXi 5.5 or later) installed and operational
+
+- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+- The on-premises management console software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal)
+
+Make sure the hypervisor is running.
+
+## Create the virtual machine
+
+This procedure describes how to create a virtual machine for your on-premises management console using VMware ESXi.
+
+**To create the virtual machine**:
+
+1. Sign in to the ESXi host, choose the relevant **datastore**, and select **Datastore Browser**.
+
+1. Upload the image and select **Close**.
+
+1. Go to **Virtual Machines** > **Create/Register VM** > **Create new virtual machine** > **Next**.
+
+1. Enter a name for the virtual machine, and select:
+
+ - **Compatibility**: \<latest ESXi version>
+
+ - **Guest OS family**: Linux
+
+ - **Guest OS version**: Ubuntu Linux (64-bit)
+
+ When you're done, select **Next**.
+
+1. Select the relevant datastore > **Next**.
+
+1. Change the virtual hardware parameters [according to your organization's needs](../ot-appliance-sizing.md).
+
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and then select the ISO file that you uploaded earlier.
+
+1. Select **Next** > **Finish**.
+
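+If you prefer a scripted deployment, VMware's PowerCLI module can create an equivalent VM, as in the following sketch. The server, datastore, VM name, and sizes are placeholders (assumptions); adjust them to [your organization's needs](../ot-appliance-sizing.md).
+
+```powershell
+# Sketch only: create the VM with VMware PowerCLI (assumes the VMware.PowerCLI module).
+Connect-VIServer -Server "esxi-host.example.com"
+
+New-VM -Name "DefenderIoT-MgmtConsole" -Datastore "datastore1" `
+    -DiskGB 500 -MemoryGB 32 -NumCpu 8 -GuestId "ubuntu64Guest" -CD
+
+# Attach the uploaded ISO to the CD drive, then power on
+Get-VM "DefenderIoT-MgmtConsole" | Get-CDDrive |
+    Set-CDDrive -IsoPath "[datastore1] isos/management-console.iso" -StartConnected $true -Confirm:$false
+Start-VM -VM "DefenderIoT-MgmtConsole"
+```
+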
+## Software installation
+
+1. To start installing the on-premises management console software, open the virtual machine console.
+
+ The VM will start from the ISO image, and the language selection screen will appear.
+
+1. Continue with the [generic procedure for installing on-premises management console software](../how-to-install-software.md#install-on-premises-management-console-software).
++
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) and [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
+
+ Title: OT sensor VM (Microsoft Hyper-V) - Microsoft Defender for IoT
+description: Learn about deploying a Microsoft Defender for IoT OT sensor as a virtual appliance using Microsoft Hyper-V.
Last updated : 04/24/2022+++
+# OT network sensor VM (Microsoft Hyper-V)
+
+This article describes an OT sensor deployment on a virtual appliance using Microsoft Hyper-V.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Physical specifications** | Virtual Machine |
+|**Status** | Supported |
++
+## Prerequisites
+
+The OT sensor supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
+
+- Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
+
+- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal).
+
+Make sure the hypervisor is running.
+
+## Create the virtual machine
+
+This procedure describes how to create a virtual machine by using Hyper-V.
+
+**To create the virtual machine using Hyper-V**:
+
+1. Create a virtual disk in Hyper-V Manager.
+
+1. Select **format = VHDX**.
+
+1. Select **type = Dynamic Expanding**.
+
+1. Enter the name and location for the VHD.
+
+1. Enter the required size [according to your organization's needs](../ot-appliance-sizing.md).
+
+1. Review the summary, and select **Finish**.
+
+1. On the **Actions** menu, create a new virtual machine.
+
+1. Enter a name for the virtual machine.
+
+1. Select **Specify Generation** > **Generation 1**.
+
+1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), and select the check box for dynamic memory.
+
+1. Configure the network adapter according to your server network topology.
+
+1. Connect the VHDX created previously to the virtual machine.
+
+1. Review the summary, and select **Finish**.
+
+1. Right-click on the new virtual machine, and select **Settings**.
+
+1. Select **Add Hardware**, and add a new network adapter.
+
+1. Select the virtual switch that will connect to the sensor management network.
+
+1. Allocate CPU resources [according to your organization's needs](../ot-appliance-sizing.md).
+
+1. Connect the sensor's ISO image to a virtual DVD drive.
+
+1. Start the virtual machine.
+
+1. On the **Actions** menu, select **Connect** to continue the software installation.
+
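+As with the steps above, the sensor VM can also be created with the Hyper-V PowerShell module. A minimal sketch follows; all names, paths, and sizes are placeholders (assumptions) to be sized per [your organization's needs](../ot-appliance-sizing.md).
+
+```powershell
+# Sketch only: create the OT sensor VM with Hyper-V cmdlets.
+New-VHD -Path "C:\VHDs\ot-sensor.vhdx" -SizeBytes 500GB -Dynamic
+New-VM -Name "DefenderIoT-Sensor" -Generation 1 -MemoryStartupBytes 16GB `
+    -VHDPath "C:\VHDs\ot-sensor.vhdx" -SwitchName "SensorMgmtSwitch"
+Set-VM -Name "DefenderIoT-Sensor" -DynamicMemory      # matches the dynamic memory check box
+Set-VMProcessor -VMName "DefenderIoT-Sensor" -Count 4
+Add-VMDvdDrive -VMName "DefenderIoT-Sensor" -Path "C:\ISOs\sensor.iso"
+Start-VM -Name "DefenderIoT-Sensor"
+```
+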
+## Software installation
+
+1. To start installing the OT sensor software, open the virtual machine console.
+
+ The VM will start from the ISO image, and the language selection screen will appear.
+
+1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-sensor-software).
++
+## Configure a monitoring interface (SPAN)
+
+While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a SPAN port.
+
+*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces that are in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
+
+For more information, see [Purdue reference model and Defender for IoT](../plan-network-monitoring.md#purdue-reference-model-and-defender-for-iot).
+
+### Prerequisites
+
+Before you start:
+
+- Ensure that there is no instance of a virtual appliance running.
+
+- Ensure that SPAN is enabled on the data port, and not on the management port.
+
+- Ensure that the data port SPAN configuration is not configured with an IP address.
+
+### Configure a SPAN port with Hyper-V
+
+1. Open the Virtual Switch Manager.
+
+1. In the Virtual Switches list, select **New virtual network switch** > **External** as the dedicated spanned network adapter type.
+
+ :::image type="content" source="../media/tutorial-install-components/new-virtual-network.png" alt-text="Screenshot of selecting new virtual network and external before creating the virtual switch.":::
+
+1. Select **Create Virtual Switch**.
+
+1. Under connection type, select **External Network**.
+
+1. Ensure that the **Allow management operating system to share this network adapter** checkbox is selected.
+
+ :::image type="content" source="../media/tutorial-install-components/external-network.png" alt-text="Select external network, and allow the management operating system to share the network adapter.":::
+
+1. Select **OK**.
+
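+The equivalent switch can also be created with PowerShell, as in the following sketch. The physical adapter name `Ethernet 2` is a placeholder (an assumption) for the NIC that receives the SPAN traffic.
+
+```powershell
+# Sketch only: create the external switch used for SPAN traffic.
+New-VMSwitch -Name "vSwitch_Span" -NetAdapterName "Ethernet 2" -AllowManagementOS $true
+```
+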
+#### Attach a SPAN Virtual Interface to the virtual switch
+
+You can attach a SPAN virtual interface to the virtual switch by using Windows PowerShell or Hyper-V Manager.
+
+**To attach a SPAN Virtual Interface to the virtual switch with PowerShell**:
+
+1. Select the newly added SPAN virtual switch, and add a new network adapter with the following command:
+
+ ```powershell
+ Add-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
+ ```
+
+1. Enable port mirroring for the selected interface as the span destination with the following command:
+
+ ```powershell
+ Get-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 | ? Name -eq Monitor | Set-VMNetworkAdapter -PortMirroring Destination
+ ```
+
+ | Parameter | Description |
+ |--|--|
+ | VK-C1000V-LongRunning-650 | Virtual machine name |
+ |vSwitch_Span |Newly added SPAN virtual switch name |
+ |Monitor |Newly added adapter name |
+
+1. Select **OK**.
+
+These commands set the name of the newly added adapter hardware to be `Monitor`. If you are using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
+
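+If you add the adapter through Hyper-V Manager instead, you can optionally rename it afterward so that commands filtering on the `Monitor` name still match. A sketch, reusing the VM name from the example above:
+
+```powershell
+# Sketch only: rename the default adapter created by Hyper-V Manager.
+Rename-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name "Network Adapter" -NewName Monitor
+```
+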
+**To attach a SPAN Virtual Interface to the virtual switch with Hyper-V Manager**:
+
+1. Under the Hardware list, select **Network Adapter**.
+
+1. In the Virtual Switch field, select **vSwitch_Span**.
+
+ :::image type="content" source="../media/tutorial-install-components/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
+
+1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**.
+
+1. In the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
+
+ :::image type="content" source="../media/tutorial-install-components/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode.":::
+
+1. Select **OK**.
+
+#### Enable Microsoft NDIS capture extensions for the virtual switch
+
+The Microsoft NDIS Capture extension must be enabled for the new virtual switch.
+
+**To enable Microsoft NDIS capture extensions for the newly added virtual switch**:
+
+1. Open the Virtual Switch Manager on the Hyper-V host.
+
+1. In the Virtual Switches list, expand the virtual switch name `vSwitch_Span` and select **Extensions**.
+
+1. In the Switch Extensions field, select **Microsoft NDIS Capture**.
+
+ :::image type="content" source="../media/tutorial-install-components/microsoft-ndis.png" alt-text="Screenshot of enabling the Microsoft NDIS by selecting it from the switch extensions menu.":::
+
+1. Select **OK**.
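+
+Alternatively, you can enable the extension from PowerShell. This is a minimal sketch, assuming the switch is named `vSwitch_Span` as in the earlier steps:
+
+```powershell
+# Enable the NDIS capture extension on the SPAN switch.
+Enable-VMSwitchExtension -VMSwitchName vSwitch_Span -Name "Microsoft NDIS Capture"
+
+# Verify that the extension is now listed as enabled.
+Get-VMSwitchExtension -VMSwitchName vSwitch_Span -Name "Microsoft NDIS Capture"
+```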
+
+#### Set the Mirroring Mode on the external port
+
+The external port of the new virtual switch must be set as the mirroring source.
+
+Configure the Hyper-V virtual switch (vSwitch_Span) to forward any traffic that arrives at the external source port to the virtual network adapter that you configured as the destination.
+
+Use the following PowerShell commands to set the external virtual switch port to source mirror mode:
+
+```powershell
+$ExtPortFeature = Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
+$ExtPortFeature.SettingData.MonitorMode = 2
+Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $ExtPortFeature
+```
+
+| Parameter | Description |
+|--|--|
+| vSwitch_Span | Newly added SPAN virtual switch name. |
+| MonitorMode=2 | Source |
+| MonitorMode=1 | Destination |
+| MonitorMode=0 | None |
+
+Use the following PowerShell command to verify the monitoring mode status:
+
+```powershell
+Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | Select-Object -ExpandProperty SettingData
+```
+
+| Parameter | Description |
+|--|--|
+| vSwitch_Span | Newly added SPAN virtual switch name |
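+
+In the command output, a `MonitorMode` value of `2` confirms that the external port is configured as the mirroring source.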
++
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) and [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Virtual Sensor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-vmware.md
+
+ Title: OT sensor VM (VMware ESXi) - Microsoft Defender for IoT
+description: Learn about deploying a Microsoft Defender for IoT OT sensor as a virtual appliance using VMware ESXi.
Last updated : 04/24/2022+++
+# OT network sensor VM (VMware ESXi)
+
+This article describes an OT sensor deployment on a virtual appliance using VMware ESXi.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Performance** | As required for your organization. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) |
+|**Physical specifications** | Virtual Machine |
+|**Status** | Supported |
+
+## Prerequisites
+
+Before you begin the installation, make sure you have the following items:
+
+- VMware (ESXi 5.5 or later) installed and operational
+
+- Available hardware resources for the virtual machine. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+- The OT sensor software [downloaded from Defender for IoT in the Azure portal](../how-to-install-software.md#download-software-files-from-the-azure-portal).
+
+Make sure the hypervisor is running.
+
+## Create the virtual machine
+
+This procedure describes how to create a virtual machine by using ESXi.
+
+**To create the virtual machine using ESXi**:
+
+1. Sign in to ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
+
+1. Select **Upload** to upload the image, and then select **Close**.
+
+1. Navigate to VM, and then select **Create/Register VM**.
+
+1. Select **Create new virtual machine**, and then select **Next**.
+
+1. Add a sensor name, and select the following options:
+
+ - Compatibility: **&lt;latest ESXi version&gt;**
+
+ - Guest OS family: **Linux**
+
+ - Guest OS version: **Ubuntu Linux (64-bit)**
+
+1. Select **Next**.
+
+1. Choose the relevant datastore and select **Next**.
+
+1. Change the virtual hardware parameters according to the required architecture.
+
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+
+1. Select **Next** > **Finish**.
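+
+If you manage ESXi with the VMware PowerCLI module, you can script the same steps from PowerShell. This is a sketch only; the host address, datastore name, VM name, and sizing values are placeholders, so match them to your environment and to the requirements in [OT monitoring with virtual appliances](../ot-virtual-appliances.md):
+
+```powershell
+# Connect to the ESXi host (you're prompted for credentials).
+Connect-VIServer -Server 192.168.1.10
+
+# Create the sensor VM with example sizing for an enterprise-scale deployment.
+New-VM -Name OT-Sensor -Datastore datastore1 -NumCpu 8 -MemoryGB 32 -DiskGB 1800 -GuestId ubuntu64Guest
+
+# Attach the sensor ISO that you uploaded to the datastore, and power on the VM.
+New-CDDrive -VM OT-Sensor -IsoPath "[datastore1] sensor.iso" -StartConnected
+Start-VM -VM OT-Sensor
+```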
+
+## Software installation
+
+1. To start installing the OT sensor software, open the virtual machine console.
+
+ The VM will start from the ISO image, and the language selection screen will appear.
+
+1. Continue with the [generic procedure for installing sensor software](../how-to-install-software.md#install-ot-sensor-software).
++
+## Configure a monitoring interface (SPAN)
+
+Although a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a SPAN port.
+
+*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces in the same portgroup can see all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
+
+For more information, see [Purdue reference model and Defender for IoT](../plan-network-monitoring.md#purdue-reference-model-and-defender-for-iot).
+
+**To configure a SPAN port with ESXi**:
+
+1. Open vSwitch properties.
+
+1. Select **Add**.
+
+1. Select **Virtual Machine** > **Next**.
+
+1. Enter **SPAN Network** as the network label, select **VLAN ID** > **All**, and then select **Next**.
+
+1. Select **Finish**.
+
+1. Select **SPAN Network** > **Edit**.
+
+1. Select **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
+
+1. Select **OK**, and then select **Close** to close the vSwitch properties.
+
+1. Open the **XSense VM** properties.
+
+1. For **Network Adapter 2**, select the **SPAN** network.
+
+1. Select **OK**.
+
+1. Connect to the sensor, and verify that mirroring works.
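+
+You can also script the equivalent configuration with the VMware PowerCLI module. This is a sketch, assuming a standard vSwitch named `vSwitch1`; a VLAN ID of 4095 corresponds to the **All** setting in the UI:
+
+```powershell
+# Create the SPAN portgroup on the standard switch, trunking all VLANs.
+$vswitch = Get-VirtualSwitch -Name vSwitch1
+New-VirtualPortGroup -VirtualSwitch $vswitch -Name "SPAN Network" -VLanId 4095
+
+# Set the portgroup's security policy to accept promiscuous mode.
+Get-VirtualPortGroup -Name "SPAN Network" | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true
+```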
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md) and [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
+
+ Title: YS-techsystems YS-FIT2 for OT monitoring - Microsoft Defender for IoT
+description: Learn about the YS-techsystems YS-FIT2 appliance when used for OT monitoring with Microsoft Defender for IoT.
Last updated : 04/24/2022+++
+# YS-techsystems YS-FIT2
+
+This article describes the **YS-techsystems YS-FIT2** appliance deployment and installation for OT sensors.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | Office|
+|**Performance** | Max bandwidth: 10 Mbps<br>Max devices: 100|
+|**Physical specifications** | Mounting: DIN/VESA<br>Ports: 2x RJ45|
+|**Status** | Supported; Available as pre-configured |
+
+The following image shows a view of the YS-FIT2 front panel:
++
+The following image shows a view of the YS-FIT2 back panel:
++
+## Specifications
+
+|Components|Technical specifications|
+|:-|--|
+|Construction |Aluminum or zinc die-cast parts, fanless and dust-proof design |
+| Dimensions |112 mm (W) x 112 mm (D) x 25 mm (H) / 4.41 in (W) x 4.41 in (D) x 0.98 in (H)|
+|Weight |0.35 kg |
+| CPU |Intel Atom® x7-E3950 Processor |
+| Memory |8 GB SODIMM 1 x 204-pin DDR3L non-ECC 1866 (1.35 V) |
+| Storage |128 GB M.2 M-key 2260* or 2242 (SATA 3 6 Gbps) PLP|
+|Network controller |Two 1 GbE LAN Ports |
+| Device access |Two USB 2.0, Two USB 3.0 |
+| Power Adapter |7V-20V (Optional 9V-36V) DC / 5W-15W Power Adapter<br>Vehicle DC cable for YS-FIT2 (Optional)|
+|UPS|Fit-uptime Miniature 12 V UPS for miniPCs (Optional)|
+|Mounting |VESA / wall or Din Rail mounting kit |
+| Temperature |0°C ~ 70°C |
+| Humidity |5% ~ 95%, non-condensing |
+| Vibration |IEC TR 60721-4-7:2001+A1:03, Class 7M1, test method IEC 60068-2-64 (up to 2 KHz, 3 axis)|
+|Shock|IEC TR 60721-4-7:2001+A1:03, Class 7M1, test method IEC 60068-2-27 (15 g, 6 directions)|
+|EMC |CE/FCC Class B|
+
+## YS-FIT2 installation
+
+This section describes how to install OT sensor software on the YS-FIT2 appliance. Before you install the OT sensor software, you must adjust the appliance's BIOS configuration.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Configure the YS-FIT2 BIOS
+
+This procedure describes how to update the YS-FIT2 BIOS configuration for your OT sensor deployment.
+
+**To configure the YS-FIT2 BIOS**:
+
+1. Power on the appliance and go to **Main** > **OS Selection**.
+
+1. Press **+/-** to select **Linux**.
+
+ :::image type="content" source="../media/tutorial-install-components/fitlet-linux.png" alt-text="Screenshot of setting the OS to Linux on your YS-FIT2.":::
+
+1. Verify that the system date and time are updated with the installation date and time.
+
+1. Go to **Advanced**, and select **ACPI Settings**.
+
+1. Select **Enable Hibernation**, and press **+/-** to select **Disabled**.
+
+ :::image type="content" source="../media/tutorial-install-components/disable-hibernation.png" alt-text="Screenshot of turning off the hibernation mode on your YS-FIT2.":::
+
+1. Press **Esc**.
+
+1. Go to **Advanced** > **TPM Configuration**.
+
+1. Select **fTPM**, and press **+/-** to select **Disabled**.
+
+1. Press **Esc**.
+
+1. Go to **CPU Configuration** > **VT-d**.
+
+1. Press **+/-** to select **Enabled**.
+
+1. Go to **CSM Configuration** > **CSM Support**.
+
+1. Press **+/-** to select **Enabled**.
+
+1. Go to **Advanced** > **Boot option filter [Legacy only]** and change the setting in the following fields to **Legacy**:
+
+ - Network
+ - Storage
+ - Video
+ - Other PCI
+
+ :::image type="content" source="../media/tutorial-install-components/legacy-only.png" alt-text="Screenshot of setting all fields to Legacy.":::
+
+1. Press **Esc**.
+
+1. Go to **Security** > **Secure Boot Customization**.
+
+1. Press **+/-** to select **Disabled**.
+
+1. Press **Esc**.
+
+1. Go to **Boot** > **Boot mode select**, and select **Legacy**.
+
+1. Select **Boot Option #1 – [USB CD/DVD]**.
+
+1. Select **Save & Exit**.
+
+### Install OT sensor software on the YS-FIT2
+
+This procedure describes how to install OT sensor software on the YS-FIT2.
+
+The installation takes approximately 20 minutes. After the installation is complete, the system restarts several times.
+
+**To install OT sensor software**:
+
+1. Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+1. Boot the appliance.
+
+1. Select **English**.
+
+1. Select **XSENSE-RELEASE-\<version> Office...**.
+
+1. Define the appliance architecture, and network properties:
+
+ | Parameter | Configuration |
+ | -| - |
+ | **Hardware profile** | Select **office**. |
+ | **Management interface** | **em1** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
+ | **DNS** | **IP address provided by the customer** |
+ | **Default gateway IP address** | **0.0.0.0** |
+ | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, enter all the items presented in the list, separated by commas. |
+ | **Bridge interface** | - |
+
+ For example:
+
+ :::image type="content" source="../media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Define the Nuvo's architecture and network properties.":::
+
+1. Accept the settings and continue by entering `Y`.
+
+After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and passwords; you'll need these credentials to access the platform the first time you use it.
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
For more information, see:
- [Frequently asked questions](resources-frequently-asked-questions.md) - [Sensor connection methods](architecture-connections.md)-- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
If you're setting up network monitoring for enterprise IoT systems, you can skip
Calculate the approximate number of devices you'll be monitoring. Devices can be added in intervals of **1,000**, such as **1,000**, **2,000**, or **3,000**. The monitored devices are called *committed devices*.
-Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified appliances with software pre-installed, or download software to install yourself.
+Microsoft Defender for IoT supports both physical and virtual deployments. For physical deployments, you'll be able to purchase certified, preconfigured appliances, or download software to install yourself.
For more information, see:
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-identify-required-appliances.md
- Title: OT monitoring appliance catalog - Microsoft Defender for IoT
-description: Learn about hardware and virtual appliances for certified Microsoft Defender for IoT sensors and the on-premises management console.
Previously updated : 04/10/2022-----
-# OT monitoring appliance catalog
-
-This article provides information on certified Defender for IoT sensor appliances. Defender for IoT can be deployed on physical and virtual appliances.
-
-This includes certified *pre-configured* appliances, on which software is already installed, and non-configured certified appliances, on which you can download and install required software.
-
-The article also provides specifications for an on-premises management console appliance. The on-premises management console is not available as a pre-configured appliance.
--- If you want to purchase a pre-configured sensor, review the models available in the [Sensor appliances](#sensor-appliances) section and then proceed with the purchase.--- If you want to purchase your own appliance, review the models available in the [Sensor appliances](#sensor-appliances) section and in the [Additional certified appliances](#additional-certified-appliances) section. After you acquire the appliance, you can download and install the software.--- If you want to purchase the on-premises management console, review the information in the [On-premises management console appliance](#on-premises-management-console-appliance) section. After you acquire the device, you can download and install the software.-
-After you've completed the tasks here, you can install the software and set up your network.
-
-## Sensor appliances
-
-Defender for IoT supports both physical and virtual deployments.
-
-### Physical sensors
-
-This section provides an overview of physical sensor models that are available. You can purchase sensors with pre-configured software or purchase sensors that are not pre-configured.
--- **Pre-configured sensors**: Microsoft has partnered with Arrow to provide preconfigured sensors. To purchase a preconfigured sensor, contact Arrow at the following address: <hardware.sales@arrow.com>--- **About bringing your own appliance**: Review the supported models described below. After you've acquired your appliance, go to **Defender for IoT** > **Getting started** > **Sensor**. Under **Purchase an appliance and install software**, select **Download**.-
- :::image type="content" source="media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png" alt-text="Network sensors ISO.":::
-
- > [!NOTE]
- > <a name="anchortext"></a>For each model, bandwidth capacity can vary, depending on the distribution of protocols.
-
-For more information about each model, see [Appliance specifications](#appliance-specifications).
-
-#### Corporate sensors
--
-|Element |Description |
-|||
-|**Model** | HPE ProLiant DL360 |
-|**Monitoring ports** | Up to 15 RJ45 or 8 OPT |
-|**Maximum bandwidth**<sup>[1](#anchortext)</sup> | 3 Gb/sec |
-|**Maximum protected devices** | 12,000 |
-
-#### Enterprise sensors
--
-|Element |Description |
-|||
-|**Model** | HPE ProLiant DL20 |
-|**Monitoring ports** | Up to 8 RJ45 or 6 OPT |
-|**Maximum bandwidth**<sup>[1](#anchortext)</sup> | 1 Gb/sec |
-|**Maximum protected devices** | 10,000 |
-
-#### SMB rack mount
--
-|Element |Description |
-|||
-|**Model** | HPE ProLiant DL20 |
-|**Monitoring ports** | Up to 4 RJ45 |
-|**Maximum bandwidth**<sup>[1](#anchortext)</sup> | 200 Mb/Sec |
-|**Maximum protected devices** | 1,000 |
-
-#### SMB ruggedized
--
-|Element |Description |
-|||
-|**Model** | HPE EL300 |
-|**Monitoring ports** | Up to 5 RJ45 |
-|**Maximum bandwidth**<sup>[1](#anchortext)</sup> | 100 Mb/sec |
-|**Maximum protected devices** | 800 |
-
-#### Office Ruggedized
--
-|Element |Description |
-|||
-|**Model** | YS-techsystems YS-FIT2 |
-|**Monitoring ports** | Up to 2 RJ45 |
-|**Maximum bandwidth**<sup>[1](#anchortext)</sup> | 10 Mb/sec |
-|**Maximum protected devices** | 100 |
-
-### Virtual sensors
-
-This section describes virtual sensors that are available.
-
-| Deployment type | Corporate | Enterprise | SMB |
-|--|--|--|--|
-| Maximum bandwidth | 2.5 Gb/sec | 800 Mb/sec | 160 Mb/sec |
-| Maximum protected devices | 12,000 | 10,000 | 800 |
-
-## On-premises management console appliance
-
-The management console is available as a virtual deployment.
-
-| Deployment type | Enterprise |
-|--|--|
-| Appliance type | HPE DL20, VM |
-| Number of managed sensors | Up to 300 |
-
-After you acquire an on-premises management console, go to **Defender for IoT** > **On-premises management console** > **ISO Installation** to download the ISO.
--
-## Appliance specifications
-
-This section describes hardware specifications for supported models.
-
-### Corporate deployment: HPE ProLiant DL360
-
-| Component | Technical specifications |
-|--|--|
-| Chassis | 1U rack server |
-| Dimensions | 4.29 x 43.46 x 70.7 (cm)/1.69" x 17.11" x 27.83" (in) |
-| Weight | Max 16.27 kg (35.86 lb) |
-| Processor | Intel Xeon Silver 4215 R 3.2 GHz, 11M cache, 8c/16T, 130 W |
-| Chipset | Intel C621 |
-| Memory | 32 GB = 2 x 16-GB 2666MT/s DDR4 ECC UDIMM |
-| Storage | 6 x 1.2-TB SAS 12G Enterprise 10K SFF (2.5 in) in Hot-Plug Hard Drive - RAID 5 |
-| Network controller | On-board: 2 x 1 Gb <br>On-board: iLO Port Card 1 Gb <br>External: 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter |
-| Management | HPE iLO Advanced |
-| Device access | Two rear USB 3.0<br>One front USB 2.0<br>One internal USB 3.0 |
-| Power | 2 x HPE 500 W Flex Slot Platinum Hot Plug Low Halogen Power Supply Kit |
-| Rack support | HPE 1U Gen10 SFF Easy Install Rail Kit |
-
-#### Appliance BOM
-
-| PN | Description | Quantity |
-|--|--|--|
-| P19766-B21 | HPE DL360 Gen10 8SFF NC CTO Server | 1 |
-| P19766-B21 | Europe - Multilingual Localization | 1 |
-| P24479-L21 | Intel Xeon-S 4215 R FIO Kit for DL360 G10 | 1 |
-| P24479-B21 | Intel Xeon-S 4215 R Kit for DL360 Gen10 | 1 |
-| P00922-B21 | HPE 16-GB 2Rx8 PC4-2933Y-R Smart Kit | 2 |
-| 872479-B21 | HPE 1.2-TB SAS 10K SFF SC DS HDD | 6 |
-| 811546-B21 | HPE 1-GbE 4-p BASE-T I350 Adapter | 1 |
-| P02377-B21 | HPE Smart Hybrid Capacitor w\_ 145 mm Cable | 1 |
-| 804331-B21 | HPE Smart Array P408i-a SR Gen10 Controller | 1 |
-| 665240-B21 | HPE 1-GbE 4-p FLR-T I350 Adapter | 1 |
-| 871244-B21 | HPE DL360 Gen10 High Performance Fan Kit | 1 |
-| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit | 2 |
-| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support | 1 |
-| 874543-B21 | HPE 1U Gen10 SFF Easy Install Rail Kit | 1 |
-
-### Enterprise deployment: HPE ProLiant DL20
-
-| Component | Technical specifications |
-|--|--|
-| Chassis | 1U rack server |
-| Dimensions (height x width x depth) | 4.32 x 43.46 x 38.22 cm/1.70 x 17.11 x 15.05 inch |
-| Weight | 7.9 kg/17.41 lb |
-| Processor | Intel Xeon E-2234, 3.6 GHz, 4C/8T, 71 W |
-| Chipset | Intel C242 |
-| Memory | 2 x 16-GB Dual Rank x8 DDR4-2666 |
-| Storage | 3 x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 5 with Smart Array P408i-a SR Controller |
-| Network controller | On-board: 2 x 1 Gb <br>On-board: iLO Port Card 1 Gb <br>External: 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter |
-| Management | HPE iLO Advanced |
-| Device access | Front: 1 x USB 3.0, 1 x USB iLO Service Port <br>Rear: 2 x USB 3.0 <br>Internal: 1 x USB 3.0 |
-| Power | Dual Hot Plug Power Supplies 500 W |
-| Rack support | HPE 1U Short Friction Rail Kit |
-
-#### Appliance BOM
-
-| PN | Description: high end | Quantity |
-|--|--|--|
-| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server | 1 |
-| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server | 1 |
-| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit | 1 |
-| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit | 2 |
-| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD | 3 |
-| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit | 1 |
-| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter | 1 |
-| 782961-B21 | HPE 12-W Smart Storage Battery | 1 |
-| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller | 1 |
-| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit | 2 |
-| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support | 1 |
-| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit | 1 |
-| 775612-B21 | HPE 1U Short Friction Rail Kit | 1 |
-
-### SMB deployment: HPE ProLiant DL20
-
-| Component | Technical specifications |
-|--|--|
-| Chassis | 1U rack server |
-| Dimensions (height x width x depth) | 4.32 x 43.46 x 38.22 cm/1.70 x 17.11 x 15.05 inch |
-| Weight | 7.88 kg/17.37 lb |
-| Processor | Intel Xeon E-2224, 3.4 GHz, 4C, 71 W |
-| Chipset | Intel C242 |
-| Memory | 1 x 8-GB Dual Rank x8 DDR4-2666 |
-| Storage | 2 x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 with Smart Array P208i-a |
-| Network controller | On-board: 2 x 1 Gb <br>On-board: iLO Port Card 1 Gb <br>External: 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter |
-| Management | HPE iLO Advanced |
-| Device access | Front: 1 x USB 3.0, 1 x USB iLO Service Port <br>Rear: 2 x USB 3.0 <br>Internal: 1 x USB 3.0 |
-| Power | Hot Plug Power Supply 290 W |
-| Rack support | HPE 1U Short Friction Rail Kit |
-
-#### Appliance BOM
-
-| PN | Description | Quantity |
-|--|--|--|
-| P06961-B21 | HPE DL20 Gen10 NHP 2LFF CTO Server | 1 |
-| P06961-B21 | HPE DL20 Gen10 NHP 2LFF CTO Server | 1 |
-| P17102-L21 | HPE DL20 Gen10 E-2224 FIO Kit | 1 |
-| 879505-B21 | HPE 8-GB 1Rx8 PC4-2666V-E STND Kit | 1 |
-| 801882-B21 | HPE 1-TB SATA 7.2 K LFF RW HDD | 2 |
-| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit | 1 |
-| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter | 1 |
-| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Controller | 1 |
-| P21649-B21 | HPE DL20 Gen10 Plat 290 W FIO PSU Kit | 1 |
-| P06683-B21 | HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit | 1 |
-| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support | 1 |
-| 775612-B21 | HPE 1U Short Friction Rail Kit | 1 |
-
-### SMB Rugged: HPE Edgeline EL300
-
-| Component | Technical specifications |
-|--|--|
-| Construction | Aluminum, Fanless & Dust-proof Design |
-| Dimensions (height x width x depth) | 200.5 mm (7.9") tall, 232 mm (9.14") wide by 100 mm (3.9") deep |
-| Weight | 4.91 KG (10.83 lbs.) |
-| CPU | Intel Core i7-8650U (1.9GHz/4-core/15W) |
-| Chipset | Intel® Q170 Platform Controller Hub |
-| Memory | 8 GB DDR4 2133 MHz Wide Temperature SODIMM |
-| Storage | 128 GB 3ME3 Wide Temperature mSATA SSD |
-| Network controller | 6x Gigabit Ethernet ports by Intel® I219 |
-| Device access | 4 USBs: 2 fronts; 2 rears; 1 internal |
-| Power Adapter | 250V/10A |
-| Mounting | Mounting kit, Din Rail |
-| Operating Temperature | 0C to +70C |
-| Humidity | 10%~90%, non-condensing |
-| Vibration | 0.3 gram 10 Hz to 300 Hz, 15 minutes per axis - Din rail |
-| Shock | 10G 10 ms, half-sine, three for each axis. (Both positive & negative pulse) – Din Rail |
-
-#### Appliance BOM
-| Product | Description |
-|--|--|
-| P25828-B21 | HPE Edgeline EL300 v2 Converged Edge System |
-| P25828-B21 B19 | HPE EL300 v2 Converged Edge System |
-| P25833-B21 | Intel Core i7-8650U (1.9GHz/4-core/15W) FIO Basic Processor Kit for HPE Edgeline EL300 |
-| P09176-B21 | HPE Edgeline 8GB (1x8GB) Dual Rank x8 DDR4-2666 SODIMM WT CAS-19-19-19 Registered Memory FIO Kit |
-| P09188-B21 | HPE Edgeline 256GB SATA 6G Read Intensive M.2 2242 3yr Wty Wide Temp SSD |
-| P04054-B21 | HPE Edgeline EL300 SFF to M.2 Enablement Kit |
-| P08120-B21 | HPE Edgeline EL300 12VDC FIO Transfer Board |
-| P08641-B21 | HPE Edgeline EL300 80W 12VDC Power Supply |
-| AF564A | HPE C13 - SI-32 IL 250V 10Amp 1.83m Power Cord |
-| P25835-B21 | HPE EL300 v2 FIO Carrier Board |
-| R1P49AAE | HPE EL300 iSM Adv 3yr 24x7 Sup_Upd E-LTU |
-| P08018-B21 optional | HPE Edgeline EL300 Low Profile Bracket Kit |
-| P08019-B21 optional | HPE Edgeline EL300 DIN Rail Mount Kit |
-| P08020-B21 optional | HPE Edgeline EL300 Wall Mount Kit |
-| P03456-B21 optional | HPE Edgeline 1GbE 4-port TSN FIO Daughter Card |
-
-### Office Rugged: YS-techsystems YS-FIT2
-
-| Component | Technical specifications |
-|--|--|
-| Construction | Aluminum, zinc die cast parts, Fanless & Dust-proof Design |
-| Dimensions (height x width x depth) | 112mm (W) x 112mm (D) x 25mm (H) / 4.41in (W) x 4.41in (D) x 0.98 in (H) |
-| Weight | 0.35kg (0.77 lbs) |
-| CPU | Intel Atom® x7-E3950 Processor |
-| Chipset | Intel® Q170 Platform Controller Hub |
-| Memory | 8GB SODIMM 1 x 204-pin DDR3L non-ECC 1866 (1.35V) |
-| Storage | 128GB M.2 M-key 2260* or 2242 (SATA 3 6 Gbps) PLP |
-| Network controller | 2x 1GbE LAN Ports |
-| Device access | 2 x USB 2.0, 2 X USB 3.0 |
-| Power Adapter | 7V-20V (Optional 9V-36V) DC / 5W-15W Power Adapter / Vehicle DC cable for fitlet2 (Optional) / UPS fit-uptime Miniature 12V UPS for miniPCs (Optional) |
-| Mounting | VESA / wall or Din Rail mounting kit |
-| Operating Temperature | 0°C ~ 70°C |
-| Humidity | 5%~95%, non-condensing |
-| Vibration | IEC TR 60721-4-7:2001+A1:03, Class 7M1, test method IEC 60068-2-27 (15 g, 6 directions) |
-| Shock | 10G 10 ms, half-sine, three for each axis. (Both positive & negative pulse) – Din Rail |
-| EMC | CE/FCC Class B |
-
-## Virtual appliance specifications
-
-### Sensors
-
-| Type | Corporate | Enterprise | SMB |
-|--|--|--|--|
-| vCPU | 32 | 8 | 4 |
-| Memory | 32 GB | 32 GB | 8 GB |
-| Storage | 5.6 TB | 1.8 TB | 500 GB |
-
-### On-premises management console appliance
-
-| Type | Enterprise |
-|--|--|
-| Description | Virtual appliance for enterprise deployment types |
-| vCPU | 8 |
-| Memory | 32 GB |
-| Storage | 1.8 TB |
-
-Supported hypervisors: VMware ESXi version 5.0 and later, Hyper-V
-
-## Additional certified appliances
-
-This section details additional appliances that were certified by Microsoft but are not offered as preconfigured appliances.
-
-| Deployment type | Enterprise |
-|--|--|
-| Image | :::image type="content" source="media/how-to-prepare-your-network/deployment-type-enterprise-for-azure-defender-for-iot-v2.png" alt-text="Enterprise deployment type."::: |
-| Model | Dell PowerEdge R340 XL |
-| Monitoring ports | Up to nine RJ45 or six OPT |
-| Max bandwidth [1](#anchortext2)| 1 Gb/sec |
-| Max protected devices | 10,000 |
-
-<a id="anchortext2">One</a> Bandwidth capacity can vary, depending on protocols distribution.
-
-After you purchase the appliance, go to **Defender for IoT** > **Network Sensors ISO** > **Installation** to download the software.
--
-## Next steps
-
-[About Microsoft Defender for IoT installation](how-to-install-software.md)
-
-[About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Last updated 01/06/2022
-# Defender for IoT installation
+# Defender for IoT software installation
-This article describes how to install the following Microsoft Defender for IoT components:
+This article describes how to install software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're re-installing software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
-- **Sensor**: Defender for IoT sensors collects ICS network traffic by using passive (agentless) monitoring. Passive and nonintrusive, the sensors have zero impact on OT and IoT networks and devices. The sensor connects to a SPAN port or network TAP and immediately begins monitoring your network. Detections appear in the sensor console. There, you can view, investigate, and analyze them in a network map, device inventory, and an extensive range of reports. Examples include risk assessment reports, data mining queries, and attack vectors. -- **On-premises management console**: The on-premises management console lets you carry out device management, risk management, and vulnerability management. You can also use it to carry out threat monitoring and incident response across your enterprise. It provides a unified view of all network devices, key IoT, and OT risk indicators and alerts detected in facilities where sensors are deployed. Use the on-premises management console to view and manage sensors in air-gapped networks.
+## Pre-installation configuration
-This article covers the following installation information:
+Each appliance type comes with its own set of instructions that you must complete before installing Defender for IoT software.
-- **Hardware:** Dell and HPE physical appliance details.
+Make sure that you've completed the procedures as instructed in the **Reference > OT monitoring appliance** section of our documentation before installing Defender for IoT software.
-- **Software:** Sensor and on-premises management console software installation.
+For more information, see:
-- **Virtual Appliances:** Virtual machine details and software installation.
+- [Which appliances do I need?](ot-appliance-sizing.md)
+- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), including the catalog of available appliances
+- [OT monitoring with virtual appliances](ot-virtual-appliances.md)
-After the software is installed, connect your sensor to your network.
+## Download software files from the Azure portal
-## About Defender for IoT appliances
+Make sure that you've downloaded the relevant software file for the sensor or on-premises management console.
-The following sections provide information about Defender for IoT sensor appliances and the appliance for the Defender for IoT on-premises management console.
+You can obtain the latest versions of our OT sensor and on-premises management console software from the Azure portal, on the **Defender for IoT** > **Getting started** page. Select the **Sensor**, **On-premises management console**, or **Updates** tab and locate the software you need.
-### Physical appliances
+Mount the ISO file using one of the following options:
-The Defender for IoT appliance sensor connects to a SPAN port, or network TAP. Once connected, the sensor immediately collects ICS network traffic by using passive (agentless) monitoring. This process has zero impact on OT networks, and devices because it isn't placed in the data path, and doesn't actively scan OT devices.
+- **Physical media** – burn the ISO file to a DVD or USB, and boot from the media.
-The following rack mount appliances are available:
+- **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances, to boot the ISO file.
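+
+If you want to inspect the downloaded ISO on a Windows machine before writing it to media, you can mount it locally. This is a minimal sketch; the file path is a placeholder:
+
+```powershell
+# Mount the ISO and find the drive letter it was assigned.
+Mount-DiskImage -ImagePath "C:\Downloads\sensor.iso"
+Get-Volume | Where-Object DriveType -eq 'CD-ROM'
+```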
-| **Deployment type** | **Corporate** | **Enterprise** | **SMB** |**SMB Ruggedized** |
-|--|--|--|--|--|
-| **Model** | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
-| **Monitoring ports** | up to 15 RJ45 or 8 OPT | up to 8 RJ45 or 6 OPT | up to 4 RJ45 | Up to 5 RJ45 |
-| **Max Bandwidth\*** | 3 Gb/Sec | 1 Gb/Sec | 200 Mb/Sec | 100 Mb/Sec |
-| **Max Protected Devices** | 12,000 | 10,000 | 1,000 | 800 |
+## Install OT sensor software
-*Maximum bandwidth capacity might vary depending on protocol distribution.
-
-### Virtual appliances
-
-The following virtual appliances are available:
-
-| **Deployment type** | **Corporate** | **Enterprise** | **SMB** |
-|--|--|--|--|
-| **Description** | Virtual appliance for corporate deployments | Virtual appliance for enterprise deployments | Virtual appliance for SMB deployments |
-| **Max Bandwidth\*** | 2.5 Gb/Sec | 800 Mb/sec | 160 Mb/sec |
-| **Max protected devices** | 12,000 | 10,000 | 800 |
-| **Deployment Type** | Corporate | Enterprise | SMB |
-
-*Maximum bandwidth capacity might vary depending on protocol distribution.
-
-### Hardware specifications for the on-premises management console
-
- | Item | Description |
- |-|--|
- **Description** | In a multi-tier architecture, the on-premises management console delivers visibility and control across geographically distributed sites. It integrates with SOC security stacks, including SIEMs, ticketing systems, next-generation firewalls, secure remote access platforms, and the Defender for IoT ICS malware sandbox. |
- **Deployment type** | Enterprise |
- **Appliance type** | Dell R340, VM |
- **Number of managed sensors** | Unlimited |
-
-## Prepare for the installation
-
-### Access the ISO installation image
-
-The installation image is accessible from Defender for IoT, in the [Azure portal](https://ms.portal.azure.com).
-
-**To access the file**:
-
-1. Navigate to the [Azure portal](https://ms.portal.azure.com).
-
-1. Search for, and select **Microsoft Defender for IoT**.
-
-1. Select the **Sensor**, or **On-premises management console** tab.
-
- :::image type="content" source="media/tutorial-install-components/sensor-tab.png" alt-text="Screeshot of the sensor tab under Defender for IoT.":::
-
-1. Select a version from the drop-down menu.
-
-1. Select the **Download** button.
-
-### Install from DVD
-
-Before the installation, ensure you have:
--- A portable DVD drive with the USB connector.--- An ISO installer image.-
-**To Burn the image to a DVD**:
-
-1. Connect a portable DVD drive to your computer.
-
-1. Insert a blank DVD into the portable DVD drive.
-
-1. Right-click the ISO image, and select **Burn to disk**.
-
-1. Connect the DVD drive to the device, and configure the appliance to boot from DVD.
-
-### Install from disk on a key
-
-Before the installation, ensure you have:
--- Rufus installed.
-
-- A disk on key with USB version 3.0 and later. The minimum size is 4 GB.--- An ISO installer image file.-
-This process will format the disk on a key and any data stored on the disk on key will be erased.
-
-**To prepare a disk on a key**:
-
-1. Run Rufus, and select **SENSOR ISO**.
-
-1. Connect the disk on a key to the front panel.
-
-1. Set the BIOS of the server to boot from the USB.
-
-## Dell PowerEdgeR340XL installation
-
-Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration:
--- [Dell PowerEdge R340 Front Panel](#dell-poweredge-r340-front-panel) and [Dell PowerEdge R340 Back Panel](#dell-poweredge-r340-back-panel) contains the description of front and back panels, along with information required for installation, such as drivers and ports.--- [Dell BIOS Configuration](#dell-bios-configuration) provides information about how to connect to the Dell appliance management interface and configure the BIOS.--- [Software Installation (Dell R340)](#software-installation-dell-r340) describes the procedure required to install the Defender for IoT sensor software.-
-### Dell PowerEdge R340XL requirements
-
-To install the Dell PowerEdge R340XL appliance, you need:
--- Enterprise license for Dell Remote Access Controller (iDrac)--- BIOS configuration XML--- Server firmware versions:-
- - BIOS version 2.1.6
-
- - iDrac version 3.23.23.23
-
-### Dell PowerEdge R340 front panel
--
- 1. Left control panel
- 1. Optical drive (optional)
- 1. Right control panel
- 1. Information tag
- 1. Drives
-
-### Dell PowerEdge R340 back panel
--
-1. Serial port
-1. NIC port (Gb 1)
-1. NIC port (Gb 1)
-1. Half-height PCIe
-1. Full-height PCIe expansion card slot
-1. Power supply unit 1
-1. Power supply unit 2
-1. System identification
-1. System status indicator cable port (CMA) button
-1. USB 3.0 port (2)
-1. iDRAC9 dedicated network port
-1. VGA port
-
-### Dell BIOS configuration
-
-Dell BIOS configuration is required to adjust the Dell appliance to work with the software.
-
-The Dell appliance is managed by an integrated iDRAC with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
-
-To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
-
-When the connection is established, the BIOS is configurable.
-
-**To configure Dell BIOS**:
-
-1. [Configure the iDRAC IP address](#configure-idrac-ip-address)
-
-1. [Configuring the BIOS](#configuring-the-bios)
-
-#### Configure iDRAC IP address
-
-1. Power up the sensor.
-
-1. If the OS is already installed, select the F2 key to enter the BIOS configuration.
-
-1. Select **iDRAC Settings**.
-
-1. Select **Network**.
-
- > [!NOTE]
- > During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you change these definitions.
-
-1. Change the static IPv4 address to **10.100.100.250**.
-
-1. Change the static subnet mask to **255.255.255.0**.
-
- :::image type="content" source="media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot that shows the static subnet mask.":::
-
-1. Select **Back** > **Finish**.
-
-#### Configuring the BIOS
-
-Configure the appliance BIOS if:
--- You did not purchase your appliance from Arrow.--- You have an appliance, but do not have access to the XML configuration file.-
-After you access the BIOS, go to **Device Settings**.
-
-**To configure the BIOS**:
-
-1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
-
- - If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
-
- - If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
-
-1. After you access the BIOS, go to **Device Settings**.
-
-1. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H330 Adapter\> Configuration Utility**.
-
-1. Select **Configuration Management**.
-
-1. Select **Create Virtual Disk**.
-
-1. In the **Select RAID Level** field, select **RAID5**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
-
-1. Select **Check All** and then select **Apply Changes**
-
-1. Select **Ok**.
-
-1. Scroll down and select **Create Virtual Disk**.
-
-1. Select the **Confirm** check box and select **Yes**.
-
-1. Select **OK**.
-
-1. Return to the main screen and select **System BIOS**.
-
-1. Select **Boot Settings**.
-
-1. For the **Boot Mode** option, select **BIOS**.
-
-1. Select **Back**, and then select **Finish** to exit the BIOS settings.
-
-### Software installation (Dell R340)
-
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-
-**To install the software**:
-
-1. Verify that the version media is mounted to the appliance in one of the following ways:
-
- - Connect the external CD, or disk on a key with the release.
-
- - Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
-
-1. In the **Map CD/DVD** section, select **Choose File**.
-
-1. Choose the version ISO image file for this version from the dialog box that opens.
-
-1. Select the **Map Device** button.
-
- :::image type="content" source="media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot that shows a mapped device.":::
-
-1. The media is mounted. Select **Close**.
-
-1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Consul Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which will start the Ctrl+Alt+Delete sequence.
-
-1. Follow the software installation instructions located [here](#install-the-software).
-
-## HPE ProLiant DL20 installation
-
-This section describes the HPE ProLiant DL20 installation process, which includes the following steps:
--- Enable remote access and update the default administrator password.-- Configure BIOS and RAID settings.-- Install the software.-
-### About the installation
--- Enterprise and SMB appliances can be installed. The installation process is identical for both appliance types, except for the array configuration.-- A default administrative user is provided. We recommend that you change the password during the network configuration process.-- During the network configuration process, you'll configure the iLO port on network port 1.-- The installation process takes about 20 minutes. After the installation, the system is restarted several times.-
-### HPE ProLiant DL20 front panel
--
-### HPE ProLiant DL20 back panel
--
-### Enable remote access and update the password
-
-Use the following procedure to set up network options and update the default password.
-
-**To enable, and update the password**:
-
-1. Connect a screen, and a keyboard to the HP appliance, turn on the appliance, and press **F9**.
-
- :::image type="content" source="media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
-
-1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
-
- :::image type="content" source="media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
-
- 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
-
- 1. Disable DHCP.
-
- 1. Enter the IP address, subnet mask, and gateway IP address.
-
-1. Select **F10: Save**.
-
-1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
-
-1. Select **Edit/Remove User**. The administrator is the only default user defined.
-
-1. Change the default password and select **F10: Save**.
-
-### Configure the HPE BIOS
-
-The following procedure describes how to configure the HPE BIOS for the enterprise, and SMB appliances.
-
-**To configure the HPE BIOS**:
-
-1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
-
-1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
-
-1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
-
-1. Select **Esc** twice to close the **System Configuration** form.
-
-#### For the enterprise appliance
-
-1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-
-1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
-
-#### For the SMB appliance
-
-1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-
-1. Select **Proceed to Next Form**.
-
-1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
-
-1. Select **Proceed to Next Form**.
-
-1. In the **Logical Drive Label** form, enter **Logical Drive 1**.
-
-1. Select **Submit Changes**.
-
-1. In the **Submit** form, select **Back to Main Menu**.
-
-1. Select **F10: Save** and then press **Esc** twice.
-
-1. In the **System Utilities** window, select **One-Time Boot Menu**.
-
-1. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
-
-1. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
-
- :::image type="content" source="media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window.":::
-
- :::image type="content" source="media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
-
-### Software installation (HPE ProLiant DL20 appliance)
-
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-
-To install the software:
-
-1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
-
-1. Connect an external CD or disk on the key with the ISO image that you downloaded from the **Updates** page of Defender for IoT in the Azure portal.
-
-1. Start the appliance.
-
-1. Follow the software installation instructions located [here](#install-the-software).
-
-## HPE ProLiant DL360 installation
--- A default administrative user is provided. We recommend that you change the password during the network configuration.--- During the network configuration, you'll configure the iLO port.--- The installation process takes about 20 minutes. After the installation, the system is restarted several times.-
-### HPE ProLiant DL360 front panel
--
-### HPE ProLiant DL360 back panel
--
-### Enable remote access and update the password
-
-Refer to the preceding sections for HPE ProLiant DL20 installation:
--- "Enable remote access and update the password"--- "Configure the HPE BIOS"-
-The enterprise configuration is identical.
-
-> [!Note]
-> In the array form, verify that you select all the options.
-
-### iLO remote installation (from a virtual drive)
-
-This procedure describes the iLO installation from a virtual drive.
-
-**To perform the iLO installation from a virtual drive**:
-
-1. Sign in to the iLO console, and then right-click the servers' screen.
-
-1. Select **HTML5 Console**.
-
-1. In the console, select the CD icon, and choose the CD/DVD option.
-
-1. Select **Local ISO file**.
-
-1. In the dialog box, choose the relevant ISO file.
-
-1. Go to the left icon, select **Power**, and the select **Reset**.
-
-1. The appliance will restart, and run the sensor installation process.
-
-### Software installation (HPE DL360)
-
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-
-**To install the software**:
-
-1. Connect a screen, and keyboard to the appliance, and then connect to the CLI.
-
-1. Connect an external CD or disk on a key with the ISO image that you downloaded from the **Updates** page of Defender for IoT in the Azure portal.
-
-1. Follow the software installation instructions located [here](#install-the-software).
-
-## HP EdgeLine 300 installation
--- A default administrative user is provided. We recommend that you change the password during the network configuration.--- The installation process takes about 20 minutes. After the installation, the system is restarted several times.-
-### HP EdgeLine 300 back panel
--
-### Enable remote access
-
-1. Enter the iSM IP Address into your web browser.
-
-1. Sign in using the default username, and password found on your appliance.
-
-1. Navigate to **Wired and Wireless Network** > **IPV4**
-
- :::image type="content" source="media/tutorial-install-components/wired-and-wireless.png" alt-text="navigate to highlighted sections.":::
-
-1. Disable **DHCP toggle**.
-
-1. Configure the IPv4 addresses as such:
- - **IPV4 Address**: `192.168.1.125`
- - **IPV4 Subnet Mask**: `255.255.255.0`
- - **IPV4 Gateway**: `192.168.1.1`
-
-1. Select **Apply**.
-
-1. Sign out, and reboot the appliance.
-
-### Configure the BIOS
-
-The following procedure describes how to configure the BIOS for HP EL300 appliance.
-
-**To configure the BIOS**:
-
-1. Turn on the appliance, and push **F9** to enter the BIOS.
-
-1. Select **Advanced**, and scroll down to **CSM Support**.
-
- :::image type="content" source="media/tutorial-install-components/csm-support.png" alt-text="Enable CSM support to open the additional menu.":::
-
-1. Push **Enter** to enable CSM Support.
-
-1. Navigate to Storage, and push **+/-** to change it to Legacy.
-
-1. Navigate to Video, and push **+/-** to change it to Legacy.
-
- :::image type="content" source="media/tutorial-install-components/storage-and-video.png" alt-text="Navigate to storage and video and change them to Legacy.":::
-
-1. Navigate to **Boot** > **Boot mode select**.
-
-1. Push **+/-** to change it to Legacy.
-
- :::image type="content" source="media/tutorial-install-components/boot-mode.png" alt-text="Change Boot mode select to Legacy.":::
-
-1. Navigate to **Save & Exit**.
-
-1. Select **Save Changes and Exit**.
-
- :::image type="content" source="media/tutorial-install-components/save-and-exit.png" alt-text="Save your changes and exit the system.":::
-
-1. Select **Yes**, and the appliance will reboot.
-
-1. Push **F11** to enter the **Boot Menu**.
-
-1. Select the device with the sensor image. Either **DVD** or **USB**.
-
-1. Follow the software installation instructions located [here](#install-the-software).
-
-## Sensor installation for the virtual appliance
-
-You can deploy the virtual machine for the Defender for IoT sensor in the following architectures:
-
-| Architecture | Specifications | Usage | Comments |
-|||||
-| **Enterprise** | CPU: 8<br/>Memory: 32G RAM<br/>HDD: 1800 GB | Production environment | Default and most common |
-| **Small Business** | CPU: 4 <br/>Memory: 8G RAM<br/>HDD: 500 GB | Test or small production environments | - |
-| **Office** | CPU: 4<br/>Memory: 8G RAM<br/>HDD: 100 GB | Small test environments | - |
-
-### Prerequisites
-
-The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
--- VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational--- Available hardware resources for the virtual machine--- ISO installation file for the Microsoft Defender for IoT sensor-
-Make sure the hypervisor is running.
-
-### Create the virtual machine (ESXi)
-
-This procedure describes how to create a virtual machine by using ESXi.
-
-**To create the virtual machine using ESXi**:
-
-1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
-
-1. Select **Upload**, to upload the image, and select **Close**.
-
-1. Navigate to VM, and then select **Create/Register VM**.
-
-1. Select **Create new virtual machine**, and then select **Next**.
-
-1. Add a sensor name, and select the following options:
-
- - Compatibility: **&lt;latest ESXi version&gt;**
-
- - Guest OS family: **Linux**
-
- - Guest OS version: **Ubuntu Linux (64-bit)**
-
-1. Select **Next**.
-
-1. Choose the relevant datastore and select **Next**.
-
-1. Change the virtual hardware parameters according to the required architecture.
-
-1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
-
-1. Select **Next** > **Finish**.
-
-### Create the virtual machine (Hyper-V)
-
-This procedure describes how to create a virtual machine by using Hyper-V.
-
-**To create the virtual machine using Hyper-V**:
-
-1. Create a virtual disk in Hyper-V Manager.
-
-1. Select **format = VHDX**.
-
-1. Select **type = Dynamic Expanding**.
-
-1. Enter the name and location for the VHD.
-
-1. Enter the required size (according to the architecture).
-
-1. Review the summary, and select **Finish**.
-
-1. On the **Actions** menu, create a new virtual machine.
-
-1. Enter a name for the virtual machine.
-
-1. Select **Specify Generation** > **Generation 1**.
-
-1. Specify the memory allocation (according to the architecture), and select the check box for dynamic memory.
-
-1. Configure the network adaptor according to your server network topology.
-
-1. Connect the VHDX created previously to the virtual machine.
-
-1. Review the summary, and select **Finish**.
-
-1. Right-click on the new virtual machine, and select **Settings**.
-
-1. Select **Add Hardware**, and add a new network adapter.
-
-1. Select the virtual switch that will connect to the sensor management network.
-
-1. Allocate CPU resources (according to the architecture).
-
-1. Connect the management console's ISO image to a virtual DVD drive.
-
-1. Start the virtual machine.
-
-1. On the **Actions** menu, select **Connect** to continue the software installation.
-
-### Software installation (ESXi and Hyper-V)
-
-This section describes the ESXi and Hyper-V software installation.
-
-To install:
-
-1. Open the virtual machine console.
-
-1. The VM will start from the ISO image, and the language selection screen will appear.
-
-1. Follow the software installation instructions located [here](#install-the-software).
-
-## Install the software
-
-Ensure you followed the installation instruction for your device prior to starting the software installation, and have downloaded the containerized sensor version ISO file.
-
-Mount the ISO file using one of the following options;
--- Physical media ΓÇô burn the ISO file to a DVD, or USB, and boot from the media. --- Virtual mount ΓÇô use iLO for HPE, or iDRAC for Dell to boot the iso file.
+This procedure describes how to install OT sensor software on a physical or virtual appliance.
> [!Note]
> At the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down, as these passwords will not be presented again.
Mount the ISO file using one of the following options;
:::image type="content" source="media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's architecture select screen.":::
-1. The Sensor will reboot, and the Package configuration screen will appear. Press the up, or down arrows to navigate, and the Space bar to select an option. Press the Enter key to advance to the next screen.
+1. The sensor will reboot, and the **Package configuration** screen will appear. Press the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-1. Select the monitor interface, and press the **Enter** key.
+1. Select the monitor interface, and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
-1. If one of the monitoring ports is for ERSPAN, select it, and press the **Enter** key.
+1. If one of the monitoring ports is for ERSPAN, select it, and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
-1. Select the interface to be used as the management interface, and press the **Enter** key.
+1. Select the interface to be used as the management interface, and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
-1. Enter the sensor's IP address, and press the **Enter** key.
+1. Enter the sensor's IP address, and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
-1. Enter the path of the mounted logs folder. We recommend using the default path, and press the **Enter** key.
+1. Enter the path of the mounted logs folder, and press the **ENTER** key. We recommend using the default path.
:::image type="content" source="media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
-1. Enter the Subnet Mask IP address, and press the **Enter** key.
+1. Enter the subnet mask IP address, and press the **ENTER** key.
-1. Enter the default gateway IP address, and press the **Enter** key.
+1. Enter the default gateway IP address, and press the **ENTER** key.
-1. Enter the DNS Server IP address, and press the **Enter** key.
+1. Enter the DNS server IP address, and press the **ENTER** key.
-1. Enter the sensor hostname, and press the **Enter** key.
+1. Enter the sensor hostname, and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the screen where you enter a hostname for your sensor.":::
Mount the ISO file using one of the following options:
:::image type="content" source="media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames and passwords.":::
-## On-premises management console installation
-
-Before installing the software on the appliance, you need to adjust the appliance's BIOS configuration:
-
-### BIOS configuration
-
-**To configure the BIOS for your appliance**:
-
-1. [Enable remote access and update the password](#enable-remote-access-and-update-the-password).
+## Install on-premises management console software
-1. [Configure the BIOS](#configure-the-hpe-bios).
-
-### Software installation
+This procedure describes how to install on-premises management console software on a physical or virtual appliance.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
+During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic-optional) at a later time.
-To install the software:
+**To install the software**:
1. Select your preferred language for the installation process.
To install the software:
| Parameter | Configuration |
|--|--|
| **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> Or <br />**possible value** |
- | **configure management network IP address:** | **IP address provided by the customer** |
- | **configure subnet mask:** | **IP address provided by the customer** |
- | **configure DNS:** | **IP address provided by the customer** |
- | **configure default gateway IP address:** | **IP address provided by the customer** |
+ | **configure management network IP address:** | Enter an IP address |
+ | **configure subnet mask:** | Enter an IP address |
+ | **configure DNS:** | Enter an IP address |
+ | **configure default gateway IP address:** | Enter an IP address |
1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile and network properties:
To install the software:
| Parameter | Configuration |
|--|--|
- | **configure sensor monitoring interface (Optional):** | **eth1**, or **possible value** |
- | **configure an IP address for the sensor monitoring interface:** | **IP address provided by the customer** |
- | **configure a subnet mask for the sensor monitoring interface:** | **IP address provided by the customer** |
+ | **configure sensor monitoring interface** (Optional) | **eth1** or **possible value** |
+ | **configure an IP address for the sensor monitoring interface:** | Enter an IP address |
+ | **configure a subnet mask for the sensor monitoring interface:** | Enter an IP address |
1. Accept the settings and continue by typing `Y`.

1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
- :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they will not be presented again.":::
+ :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they will not be presented again.":::
Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
To install the software:
For information on how to find the physical port on your appliance, see [Find your port](#find-your-port).
-### Add a secondary NIC
+### Add a secondary NIC (optional)
-You can enhance security to your on-premises management console by adding a secondary NIC. By adding a secondary NIC you will have one dedicated for your users, and the other will support the configuration of a gateway for routed networks. The second NIC is dedicated to all attached sensors within an IP address range.
+You can enhance the security of your on-premises management console by adding a secondary NIC dedicated to attached sensors within an IP address range. When you add a secondary NIC, the first is dedicated to end users, and the secondary supports the configuration of a gateway for routed networks.
:::image type="content" source="media/tutorial-install-components/secondary-nic.png" alt-text="The overall architecture of the secondary NIC.":::

Both NICs will support the user interface (UI). If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
-If you have already configured your on-premises management console, and would like to add a secondary NIC to your on-premises management console, use the following steps:
+This procedure describes how to add a secondary NIC if you've already installed your on-premises management console.
+
+**To add a secondary NIC**:
1. Use the network reconfigure command:
If you have already configured your on-premises management console, and would li
1. Enter the following responses to the configuration questions:
- :::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Enter the following answers to configure your appliance.":::
+    :::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Screenshot of the required answers to configure your appliance.":::
| Parameters | Response to enter |
|--|--|
If you have already configured your on-premises management console, and would li
| **Subnet mask** | `N` |
| **DNS** | `N` |
| **Default gateway IP Address** | `N` |
- | **Sensor monitoring interface (Optional. Applicable when sensors are on a different network segment. For more information, see the Installation instructions)**| `Y`, **select a possible value** |
- | **An IP address for the sensor monitoring interface (accessible by the sensors)** | `Y`, **IP address provided by the customer**|
- | **A subnet mask for the sensor monitoring interface (accessible by the sensors)** | `Y`, **IP address provided by the customer** |
- | **Hostname** | **provided by the customer** |
+ | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y` and select a possible value |
+ | **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
+ | **A subnet mask for the sensor monitoring interface** | `Y`, and enter the subnet mask |
+ | **Hostname** | Enter the hostname |
1. Review all choices, and enter `Y` to accept the changes. The system reboots.
sudo ethtool -p <port value> <time-in-seconds>
This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
-## Virtual appliance: On-premises management console installation
-
-The on-premises management console VM supports the following architectures:
-
-| Architecture | Specifications | Usage |
-|--|--|--|
-| Enterprise <br/>(Default and most common) | CPU: 8 <br/>Memory: 32G RAM<br/> HDD: 1.8 TB | Large production environments |
-| Small | CPU: 4 <br/> Memory: 8G RAM<br/> HDD: 500 GB | Small production environments |
-| Office | CPU: 4 <br/>Memory: 8G RAM <br/> HDD: 100 GB | Small test environments |
-
-### Prerequisites
-
-The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, verify the following:
-
-- VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) is installed and operational.
-- The hardware resources are available for the virtual machine.
-- You have the ISO installation file for the on-premises management console.
-- The hypervisor is running.
-
-### Create the virtual machine (ESXi)
-
-To create a virtual machine (ESXi):
-
-1. Sign in to ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
-
-1. Upload the image and select **Close**.
-
-1. Go to **Virtual Machines**.
-
-1. Select **Create/Register VM**.
-
-1. Select **Create new virtual machine** and select **Next**.
-
-1. Add a sensor name and choose:
-
- - Compatibility: \<latest ESXi version>
-
- - Guest OS family: Linux
-
- - Guest OS version: Ubuntu Linux (64-bit)
-
-1. Select **Next**.
-
-1. Choose the relevant datastore and select **Next**.
-
-1. Change the virtual hardware parameters according to the required architecture.
-
-1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
-
-1. Select **Next** > **Finish**.
-
-### Create the virtual machine (Hyper-V)
-
-To create a virtual machine by using Hyper-V:
-
-1. Create a virtual disk in Hyper-V Manager.
-
-1. Select the format **VHDX**.
-
-1. Select **Next**.
-
-1. Select the type **Dynamic expanding**.
-
-1. Select **Next**.
-
-1. Enter the name and location for the VHD.
-
-1. Select **Next**.
-
-1. Enter the required size (according to the architecture).
-
-1. Select **Next**.
-
-1. Review the summary and select **Finish**.
-
-1. On the **Actions** menu, create a new virtual machine.
-
-1. Select **Next**.
-
-1. Enter a name for the virtual machine.
-
-1. Select **Next**.
-
-1. Select **Generation** and set it to **Generation 1**.
-
-1. Select **Next**.
-
-1. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
-
-1. Select **Next**.
-
-1. Configure the network adapter according to your server network topology.
-
-1. Select **Next**.
-
-1. Connect the VHDX created previously to the virtual machine.
-
-1. Select **Next**.
-
-1. Review the summary and select **Finish**.
-
-1. Right-click the new virtual machine, and then select **Settings**.
-
-1. Select **Add Hardware** and add a new **Network Adapter**.
-
-1. For **Virtual Switch**, select the switch that will connect to the sensor management network.
-
-1. Allocate CPU resources (according to the architecture).
-
-1. Connect the management console's ISO image to a virtual DVD drive.
-
-1. Start the virtual machine.
-
-1. On the **Actions** menu, select **Connect** to continue the software installation.
-
-### Software installation (ESXi and Hyper-V)
-
-Starting the virtual machine will start the installation process from the ISO image.
-
-To install the software:
-
-1. Select **English**.
-
-1. Select the required architecture for your deployment.
-
-1. Define the network interface for the sensor management network: interface, IP, subnet, DNS server, and default gateway.
-
-1. Sign-in credentials are automatically generated. Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
-
- The appliance will then reboot.
-
-1. Access the management console via the IP address previously configured: `https://<ip_address>`.
-
- :::image type="content" source="media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png" alt-text="Screenshot that shows the management console's sign-in screen.":::
-
-## Legacy appliances
-
-This section describes installation procedures for *legacy* appliances only. If you're buying a new appliance, see [Identify required appliances](how-to-identify-required-appliances.md).
-
-### Nuvo 5006LP installation
-
-This section provides the Nuvo 5006LP installation procedure. Before installing the software on the Nuvo 5006LP appliance, you need to adjust the appliance BIOS configuration.
-
-#### Nuvo 5006LP front panel
--
-1. Power button, Power indicator
-1. DVI video connectors
-1. HDMI video connectors
-1. VGA video connectors
-1. Remote on/off control and status LED output
-1. Reset button
-1. Management network adapter
-1. Ports to receive mirrored data
-
-#### Nuvo 5006LP back panel
--
-1. SIM card slot
-1. Microphone and speakers
-1. COM ports
-1. USB connectors
-1. DC power port (DC IN)
-
-#### Configure the Nuvo 5006LP BIOS
-
-The following procedure describes how to configure the Nuvo 5006LP BIOS. Make sure the operating system was previously installed on the appliance.
-
-**To configure the BIOS**:
-
-1. Power on the appliance.
-
-1. Press **F2** to enter the BIOS configuration.
-
-1. Navigate to **Power** and change **Power On after Power Failure** to **S0-Power On**.
-
- :::image type="content" source="media/tutorial-install-components/nuvo-power-on.png" alt-text="Change your Nuvo 5006 to power on after a power failure.":::
-
-1. Navigate to **Boot** and ensure that **PXE Boot to LAN** is set to **Disabled**.
-
-1. Press **F10** to save, and then select **Exit**.
-
-#### Software installation (Nuvo 5006LP)
-
-The installation process takes approximately 20 minutes. After installation, the system is restarted several times.
-
-**To install the software**:
-
-1. Connect the external CD or disk on key with the ISO image.
-
-1. Boot the appliance.
-
-1. Select **English**.
-
-1. Select **XSENSE-RELEASE-\<version> Office...**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
-
-1. Define the appliance architecture and network properties:
-
- :::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Define the Nuvo's architecture and network properties.":::
-
- | Parameter | Configuration |
- | -| - |
- | **Hardware profile** | Select **office**. |
- | **Management interface** | **eth0** |
- | **Management network IP address** | **IP address provided by the customer** |
- | **Management subnet mask** | **IP address provided by the customer** |
- | **DNS** | **IP address provided by the customer** |
- | **Default gateway IP address** | **0.0.0.0** |
- | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
- | **Bridge interface** | - |
-
-1. Accept the settings and continue by entering `Y`.
-
-After approximately 10 minutes, sign-in credentials are automatically generated. Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
-
-### Fitlet2 mini sensor installation
-
-This section provides the Fitlet2 installation procedure. Before installing the software on the Fitlet appliance, you need to adjust the appliance's BIOS configuration.
-
-#### Fitlet2 front panel
--
-#### Fitlet2 back panel
--
-#### Configure the Fitlet2 BIOS
-
-**To configure the Fitlet2 BIOS**:
-
-1. Power on the appliance.
-
-1. Navigate to **Main** > **OS Selection**.
-
-1. Press **+/-** to select **Linux**.
-
- :::image type="content" source="media/tutorial-install-components/fitlet-linux.png" alt-text="Set the OS to Linux on your Fitlet2.":::
-
-1. Verify that the system date and time are updated with the installation date and time.
-
-1. Navigate to **Advanced**, and select **ACPI Settings**.
-
-1. Select **Enable Hibernation**, and press **+/-** to select **Disabled**.
-
-    :::image type="content" source="media/tutorial-install-components/disable-hibernation.png" alt-text="Disable hibernation mode on your Fitlet2.":::
-
-1. Press **Esc**.
-
-1. Navigate to **Advanced** > **TPM Configuration**.
-
-1. Select **fTPM**, and press **+/-** to select **Disabled**.
-
-1. Press **Esc**.
-
-1. Navigate to **CPU Configuration** > **VT-d**.
-
-1. Press **+/-** to select **Enabled**.
-
-1. Navigate to **CSM Configuration** > **CSM Support**.
-
-1. Press **+/-** to select **Enabled**.
-
-1. Navigate to **Advanced** > **Boot option filter [Legacy only]** and change the setting in the following fields to **Legacy**:
-
- - Network
- - Storage
- - Video
- - Other PCI
-
- :::image type="content" source="media/tutorial-install-components/legacy-only.png" alt-text="Set all fields to Legacy.":::
-
-1. Press **Esc**.
-
-1. Navigate to **Security** > **Secure Boot Customization**.
-
-1. Press **+/-** to select **Disabled**.
-
-1. Press **Esc**.
-
-1. Navigate to **Boot** > **Boot mode select**, and select **Legacy**.
-
-1. Select **Boot Option #1 – [USB CD/DVD]**.
-
-1. Select **Save & Exit**.
-
-#### Software installation (Fitlet2)
-
-The installation process takes approximately 20 minutes. After installation, the system is restarted several times.
-
-1. Connect the external CD or disk on key with the ISO image.
-
-1. Boot the appliance.
-
-1. Select **English**.
-
-1. Select **XSENSE-RELEASE-\<version> Office...**.
-
- :::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
-
- > [!Note]
- > Do not select Ruggedized.
-
-1. Define the appliance architecture and network properties:
-
- :::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Define the Nuvo's architecture and network properties.":::
-
- | Parameter | Configuration |
- | -| - |
- | **Hardware profile** | Select **office**. |
- | **Management interface** | **em1** |
- | **Management network IP address** | **IP address provided by the customer** |
- | **Management subnet mask** | **IP address provided by the customer** |
- | **DNS** | **IP address provided by the customer** |
- | **Default gateway IP address** | **0.0.0.0** |
- | **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
- | **Bridge interface** | - |
-
-1. Accept the settings and continue by entering `Y`.
-
-After approximately 10 minutes, sign-in credentials are automatically generated. Save the usernames and passwords; you'll need these credentials to access the platform the first time you use it.
## Post-installation validation
Post-installation validation must include the following tests:
- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
-### Check system health by using the GUI
+### Check system health
+
+Check your system health from the sensor or on-premises management console. For example:
:::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot that shows the system health check.":::
Verify that you can access the console web GUI:
For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-## Configure a SPAN port
-
-A virtual switch doesn't have mirroring capabilities. However, you can use promiscuous mode in a virtual switch environment. Promiscuous mode is a mode of operation, and a security, monitoring, and administration technique, defined at the virtual switch or portgroup level. By default, promiscuous mode is disabled. When promiscuous mode is enabled, the virtual machine's network interfaces in the same portgroup can view all network traffic that goes through that virtual switch. You can implement a workaround with either ESXi or Hyper-V.
--
-### Configure a SPAN port with ESXi
-
-**To configure a SPAN port with ESXi** (a scripted PowerCLI sketch follows these steps):
-
-1. Open vSwitch properties.
-
-1. Select **Add**.
-
-1. Select **Virtual Machine** > **Next**.
-
-1. Insert a network label **SPAN Network**, select **VLAN ID** > **All**, and then select **Next**.
-
-1. Select **Finish**.
-
-1. Select **SPAN Network** > **Edit**.
-
-1. Select **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
-
-1. Select **OK**, and then select **Close** to close the vSwitch properties.
-
-1. Open the **XSense VM** properties.
-
-1. For **Network Adapter 2**, select the **SPAN** network.
-
-1. Select **OK**.
-
-1. Connect to the sensor, and verify that mirroring works.
-
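If you manage the host with VMware PowerCLI, the same change can be scripted. A minimal sketch, assuming the VMware.PowerCLI module and the **SPAN Network** port group created in the steps above (verify the cmdlets against your PowerCLI version):

```powershell
# Minimal PowerCLI sketch: allow promiscuous mode on the SPAN port group so
# the sensor VM sees all traffic crossing the vSwitch.
Connect-VIServer -Server 'esxi-host.example.com'   # illustrative host name

Get-VirtualPortGroup -Name 'SPAN Network' |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true
```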
-### Configure a SPAN port with Hyper-V
-
-Prior to starting, you will need to:
-
-- Ensure that there is no instance of a virtual appliance running.
-- Ensure that SPAN is enabled on the data port, and not the management port.
-- Ensure that the data port SPAN configuration is not configured with an IP address.
-
-**To configure a SPAN port with Hyper-V** (a scripted PowerShell sketch follows these steps):
-
-1. Open the Virtual Switch Manager.
-
-1. In the Virtual Switches list, select **New virtual network switch** > **External** as the dedicated spanned network adapter type.
-
- :::image type="content" source="media/tutorial-install-components/new-virtual-network.png" alt-text="Screenshot of selecting new virtual network and external before creating the virtual switch.":::
-
-1. Select **Create Virtual Switch**.
-
-1. Under connection type, select **External Network**.
-
-1. Ensure the checkbox for **Allow management operating system to share this network adapter** is checked.
-
- :::image type="content" source="media/tutorial-install-components/external-network.png" alt-text="Select external network, and allow the management operating system to share the network adapter.":::
-
-1. Select **OK**.
-
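The same switch can also be created with the Hyper-V PowerShell module. A minimal sketch, where 'Ethernet 2' is an illustrative name for the physical adapter that receives the SPAN traffic:

```powershell
# Create an external vSwitch bound to the physical adapter receiving the
# SPAN traffic, and allow the management OS to share the adapter.
New-VMSwitch -Name 'vSwitch_Span' -NetAdapterName 'Ethernet 2' -AllowManagementOS $true
```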
-#### Attach a SPAN Virtual Interface to the virtual switch
-
-You can attach a SPAN virtual interface to the virtual switch through Windows PowerShell or through Hyper-V Manager.
-
-**To attach a SPAN Virtual Interface to the virtual switch with PowerShell**:
-
-1. Select the newly added SPAN virtual switch, and add a new network adapter with the following command:
-
-    ```powershell
-    Add-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
-    ```
-
-1. Enable port mirroring for the selected interface as the span destination with the following command:
-
-    ```powershell
-    Get-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 | ? Name -eq Monitor | Set-VMNetworkAdapter -PortMirroring Destination
-    ```
-
- | Parameter | Description |
- |--|--|
- | VK-C1000V-LongRunning-650 | CPPM VA name |
- |vSwitch_Span |Newly added SPAN virtual switch name |
- |Monitor |Newly added adapter name |
-
-1. Select **OK**.
-
-These commands set the name of the newly added adapter hardware to be `Monitor`. If you are using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
-
-**To attach a SPAN Virtual Interface to the virtual switch with Hyper-V Manager**:
-
-1. Under the Hardware list, select **Network Adapter**.
-
-1. In the Virtual Switch field, select **vSwitch_Span**.
-
- :::image type="content" source="media/tutorial-install-components/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
-
-1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**.
-
-1. In the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
-
- :::image type="content" source="media/tutorial-install-components/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode.":::
-
-1. Select **OK**.
-
-#### Enable Microsoft NDIS capture extensions for the virtual switch
-
-Microsoft NDIS Capture Extensions will need to be enabled for the new virtual switch. You can enable them in Hyper-V Manager as follows, or script the change as shown after these steps.
-
-**To enable Microsoft NDIS capture extensions for the newly added virtual switch**:
-
-1. Open the Virtual Switch Manager on the Hyper-V host.
-
-1. In the Virtual Switches list, expand the virtual switch name `vSwitch_Span` and select **Extensions**.
-
-1. In the Switch Extensions field, select **Microsoft NDIS Capture**.
-
- :::image type="content" source="media/tutorial-install-components/microsoft-ndis.png" alt-text="Screenshot of enabling the Microsoft NDIS by selecting it from the switch extensions menu.":::
-
-1. Select **OK**.
-
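Alternatively, a minimal PowerShell sketch for the same change:

```powershell
# Enable the Microsoft NDIS Capture extension on the SPAN switch, then
# confirm that it reports as enabled.
Enable-VMSwitchExtension -VMSwitchName 'vSwitch_Span' -Name 'Microsoft NDIS Capture'
Get-VMSwitchExtension -VMSwitchName 'vSwitch_Span' -Name 'Microsoft NDIS Capture'
```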
-#### Set the Mirroring Mode on the external port
-
-The mirroring mode on the external port of the new virtual switch will need to be set to *source*.
-
-You will need to configure the Hyper-V virtual switch (vSwitch_Span) to forward any traffic that comes to the external source port, to the virtual network adapter that you configured as the destination.
-
-Use the following PowerShell commands to set the external virtual switch port to source mirror mode:
-
-```powershell
-$ExtPortFeature=Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
-$ExtPortFeature.SettingData.MonitorMode=2
-Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $ExtPortFeature
-```
-
-| Parameter | Description |
-|--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name. |
-| MonitorMode=2 | Source |
-| MonitorMode=1 | Destination |
-| MonitorMode=0 | None |
-
-Use the following PowerShell command to verify the monitoring mode status:
-
-```powershell
-Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | select -ExpandProperty SettingData
-```
-
-| Parameter | Description |
-|--|--|
-| vSwitch_Span | Newly added SPAN virtual switch name |
## Access sensors from the on-premises management console
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
This procedure describes how to use the Azure portal to contact vendors for pre-
1. Do one of the following:
- - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This open an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances.
+ - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
- To install software on your own appliances, do the following:
- 1. Make sure that you have a supported appliance available. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
+ 1. Make sure that you have a supported appliance available.
1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
This procedure describes how to use the Azure portal to download software for yo
1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **On-premises management console**.
-1. Make sure that you have a supported appliance available. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
+1. Make sure that you have a supported appliance available. For more information, see [Which appliances do I need?](ot-appliance-sizing.md).
-1. Under *Select version**, select the software version you want to install. We recommend that you always select the most recent version.
+1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
1. Select **Download**. Download the sensor software and save it in a location that you can access from your selected appliance.
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
+
+ Title: Which OT appliances do I need? - Microsoft Defender for IoT
+description: Learn about the deployment options for Microsoft Defender for IoT sensors and on-premises management consoles.
Last updated : 04/04/2022+++
+# Which appliances do I need?
+
+This article is designed to help you choose the right OT appliances for your sensors and on-premises management consoles. Use the tables below to understand which hardware profile best fits your organization's network monitoring needs.
+
+You can use either physical or virtual appliances.
+
+## Corporate IT/OT mixed environments
+
+Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks:
++
+|Hardware profile |Max throughput |Max monitored assets |Deployment |
+|||||
+|Corporate | 3 Gbps | 12K |Physical / Virtual |
+
+## Enterprise monitoring at the site level
+
+Use the following hardware profiles for enterprise monitoring at the site level:
+
+|Hardware profile |Max throughput |Max monitored assets |Deployment |
+|||||
+|Enterprise |1 Gbps |10K |Physical / Virtual |
+
+## Production line monitoring
+
+Use the following hardware profiles for production line monitoring:
+
+|Hardware profile |Max throughput |Max monitored assets |Deployment |
+|||||
+|SMB | 200 Mbps | 1,000 |Physical / Virtual |
+|Office | 60 Mbps | 800 | Physical / Virtual |
+|Rugged | 10 Mbps | 100 |Physical / Virtual|
+
+## On-premises management console systems
+
+On-premises management consoles allow you to manage and monitor large, multiple-sensor deployments. Use the following hardware profiles for deployment of an on-premises management console:
+
+|Hardware profile |Max monitored sensors |Deployment |
+||||
+|Enterprise |Up to 300 |Physical / Virtual |
+
+## Next steps
+
+Continue understanding system requirements, including options for ordering pre-configured appliances, or required specifications to install software on your own appliances:
+
+- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md)
+- [Resource requirements for virtual appliances](ot-virtual-appliances.md)
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](how-to-install-software.md)
+
+Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
+
+ Title: Preconfigured appliances for OT network monitoring - Microsoft Defender for IoT
+description: Learn about the appliances available for use with Microsoft Defender for IoT OT sensors and on-premises management consoles.
Last updated : 04/07/2022+++
+# Pre-configured physical appliances for OT monitoring
+
+This article provides a catalog of the pre-configured appliances available for Microsoft Defender for IoT OT sensors and on-premises management consoles.
+
+Use the links in the tables below to jump to articles with more details about each appliance.
+
+Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com).
+
+For more information, see [Purchase sensors or download software for sensors](how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors).
+
+> [!TIP]
+> Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
+>
+>- **Performance** over the total assets monitored
+>- **Compatibility** with new Defender for IoT releases, with validations for upgrades and driver support
+>- **Stability**, validated physical appliances undergo traffic monitoring and packet loss tests
+>- **In-lab experience**, Microsoft support teams train using validated physical appliances and have a working knowledge of the hardware
+>- **Availability**, components are selected to offer long-term worldwide availability
+>
+
+## Appliances for OT network sensors
+
+You can order any of the following preconfigured appliances for monitoring your OT networks:
+
+|Capacity / Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
+|||||
+|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12,000 | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|SMB | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (NHP 2LFF) | **Max bandwidth**: 60 Mbps<br>**Max devices**: 1,000 | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
++
+> [!NOTE]
+> Bandwidth performance may vary depending on protocol distribution.
+
+## Appliances for on-premises management consoles
+
+You can purchase any of the following appliances for your OT on-premises management consoles:
+
+|Capacity / Hardware profile |Appliance |Max sensors |Physical specifications |
+|||||
+|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+
+Use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](how-to-install-software.md)
+
+Our OT monitoring appliance reference articles also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
+
+ Title: OT monitoring with virtual appliances - Microsoft Defender for IoT
+description: Learn about system requirements for virtual appliances used for the Microsoft Defender for IoT OT sensors and on-premises management console.
Last updated : 04/04/2022+++
+# OT monitoring with virtual appliances
+
+This article lists the specifications required if you want to install Microsoft Defender for IoT OT sensor and on-premises management console software on your own virtual appliances.
+
+## About hypervisors
+
+The virtualized hardware used to run guest operating systems is supplied by virtual machine hosts, also known as *hypervisors*. Defender for IoT supports the following hypervisor software:
+
+- **VMware ESXi** (version 5.0 and later)
+- **Microsoft Hyper-V** (VM configuration version 8.0 and later)
+
+Learn more:
+
+- [OT sensor as a virtual appliance with VMware ESXi](appliance-catalog/virtual-sensor-vmware.md)
+- [OT sensor as a virtual appliance with Microsoft Hyper-V](appliance-catalog/virtual-sensor-hyper-v.md)
+- [On-premises management console as a virtual appliance with VMware ESXi](appliance-catalog/virtual-management-vmware.md)
+- [On-premises management console as a virtual appliance with Microsoft Hyper-V](appliance-catalog/virtual-management-hyper-v.md)
+
+> [!IMPORTANT]
+> Other types of hypervisors, such as hosted hypervisors, may also run Defender for IoT. However, due to their lack of exclusive hardware control and resource reservation, other types of hypervisors aren't supported for production environments. Examples include Parallels, Oracle VirtualBox, and VMware Workstation or Fusion.
+>
+
+## Virtual appliance design considerations
+
+This section outlines considerations for virtual appliance components, for both OT sensors and on-premises monitoring consoles.
+
+|Specification |Considerations |
+|||
+|**CPU** | Assign dedicated CPU cores (also known as pinning) with at least 2.4 GHz, which are not dynamically allocated. <br><br>CPU usage will be high since the appliance continuously records and analyzes network traffic.<br> CPU performance is critical to capturing and analyzing network traffic, and any slowdown could lead to packet drops and performance degradation. |
+|**Memory** | RAM should be allocated statically for the required capacity, not dynamically. <br><br>Expect high RAM utilization due to the sensor's constant network traffic recording and analytics. |
+|**Network interfaces** | Physical mapping provides the best performance, lowest latency, and efficient CPU usage. Our recommendation is to physically map NICs to the virtual machines with SR-IOV or a dedicated NIC. <br><br> As a result of high traffic monitoring levels, expect high network utilization. <br><br> Set the promiscuous mode on your vSwitch to **Accept**, which allows all traffic to reach the VM. Some vSwitch implementations may block certain protocols if promiscuous mode isn't configured correctly.|
+|**Storage** | Make sure to allocate enough read and write IOPS and throughput to match the performance of the appliances listed in this article. <br><br>You should expect high storage usage due to the large traffic monitoring volumes. |
++
+## OT network sensor VM requirements
+
+The following tables list system requirements for OT network sensors on virtual appliances.
+
+For all deployments, bandwidth results for virtual machines may vary, depending on the distribution of protocols and the actual hardware resources that are available, including the CPU model, memory bandwidth, and IOPS.
+
+# [Corporate](#tab/corporate)
++
+|Specification |Requirements |
+|||
+|**Maximum bandwidth** | 2.5 Gb/sec |
+|**Maximum monitored assets** | 12,000 |
+|**vCPU** | 32 |
+|**Memory** | 32 GB |
+|**Storage** | 5.6 TB (600 IOPS) |
+
+# [Enterprise](#tab/enterprise)
+
+|Specification |Requirements |
+|||
+|**Maximum bandwidth** | 800 Mb/sec |
+|**Maximum monitored assets** | 10,000 |
+|**vCPU** | 8 |
+|**Memory** | 32 GB |
+|**Storage** | 1.8 TB (300 IOPS) |
+
+# [SMB](#tab/smb)
+
+|Specification |Requirements |
+|||
+|**Maximum bandwidth** | 160 Mb/sec |
+|**Maximum monitored assets** | 1000 |
+|**vCPU** | 4 |
+|**Memory** | 8 GB |
+|**Storage** | 500 GB (150 IOPS) |
+
+# [Office](#tab/office)
+
+|Specification |Requirements |
+|||
+|**Maximum bandwidth** | 100 Mb/sec |
+|**Maximum monitored assets** | 800 |
+|**vCPU** | 4 |
+|**Memory** | 8 GB |
+|**Storage** | 100 GB (150 IOPS) |
+
+# [Rugged](#tab/rugged)
+
+|Specification |Requirements |
+|||
+|**Maximum bandwidth** | 10 Mb/sec |
+|**Maximum monitored assets** | 100 |
+|**vCPU** | 4 |
+|**Memory** | 8 GB |
+|**Storage** | 60 GB (150 IOPS) |
+++
+## On-premises management console VM requirements
+
+An on-premises management console on a virtual appliance is supported for enterprise deployments with the following requirements:
+
+| Specification | Requirements |
+| | - |
+| vCPU | 8 |
+| Memory | 32 GB |
+| Storage | 1.8 TB |
+| Monitored sensors | Up to 300 |
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see:
+
+- [Which appliances do I need?](ot-appliance-sizing.md)
+- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md)
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](how-to-manage-sensors-on-the-cloud.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](how-to-install-software.md)
+
+Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-shared-image-gallery.md
For more information, see [Shared Image Gallery documentation](../virtual-machin
If you have a large number of managed images that you need to maintain and would like to make them available throughout your company, you can use a shared image gallery as a repository that makes it easy to update and share your images. As a lab owner, you can attach an existing shared image gallery to your lab. Once this gallery is attached, lab users can create machines from these latest images. A key benefit of this feature is that DevTest Labs can now take the advantage of sharing images across labs, across subscriptions, and across regions. > [!NOTE]
-> To learn about costs associated with the Shared Image Gallery service, see [Billing for Shared Image Gallery](../virtual-machines/shared-image-galleries.md#billing).
+> To learn about costs associated with the Shared Image Gallery service, see [Billing for Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md#billing).
## Considerations

- You can only attach one shared image gallery to a lab at a time. If you would like to attach another gallery, you'll need to detach the existing one and attach another.
- DevTest Labs currently doesn't support uploading images to the gallery through the lab.
-- While creating a virtual machine using a shared image gallery image, DevTest Labs always uses the latest published version of this image. However if an image has multiple versions, user can chose to create a machine from an earlier version by going to the Advanced settings tab during virtual machine creation.
+- When you create a virtual machine using a shared image gallery image, DevTest Labs always uses the latest published version of this image. However if an image has multiple versions, users can choose to create a machine from an earlier version by going to the Advanced settings tab during virtual machine creation.
- Although DevTest Labs automatically makes a best attempt to ensure the shared image gallery replicates images to the region in which the lab exists, it's not always possible. To avoid users having issues creating VMs from these images, ensure the images are already replicated to the lab's region.

## Use Azure portal
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
Doing so will bring up the **Azure Digital Twins URL modal**, where you can ente
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/instance-url-2.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Azure Digital Twins URL modal displays an editable box containing https:// and a host name." lightbox="media/how-to-use-azure-digital-twins-explorer/instance-url-2.png"::: >[!NOTE]
->At this time, the ability to switch contexts within the app isn't available for personal Microsoft Accounts (MSA). MSA users will need to access the explorer from the chosen instance in the Azure portal, or may connect to a certain instance through a [direct link to the environment](#link-to-your-environment).
+>At this time, the ability to switch contexts within the app isn't available for personal Microsoft Accounts (MSA). MSA users will need to access the explorer from the chosen instance in the Azure portal, or may connect to a certain instance through a [direct link to the environment](#link-to-your-environment-and-specific-query).
## Query your digital twin graph
This action enables a **Download** link in the Twin Graph box. Select it to down
>[!TIP] >This file can be edited and/or re-uploaded to Azure Digital Twins through the [import](#import-graph) feature.
-## Link to your environment
+## Link to your environment and specific query
You can share your Azure Digital Twins Explorer environment with others to collaborate on work. This section describes how to send your Azure Digital Twins Explorer environment to someone else and verify they have the permissions to access it.
-To share your environment, you can send a link to the recipient that will open an Azure Digital Twins Explorer window connected to your instance. Use the link below and replace the placeholders for your *tenant ID* and the *host name* of your Azure Digital Twins instance.
+To share your environment in general, you can send a link to the recipient that will open an Azure Digital Twins Explorer window connected to your instance. Use the link below and replace the placeholders for your *tenant ID* and the *host name* of your Azure Digital Twins instance.
`https://explorer.digitaltwins.azure.net/?tid=<tenant-ID>&eid=<Azure-Digital-Twins-host-name>`
Here's an example of a URL with the placeholder values filled in:
For the recipient to view the instance in the resulting Azure Digital Twins Explorer window, they must log into their Azure account, and have **Azure Digital Twins Data Reader** access to the instance (you can read more about Azure Digital Twins roles in [Security](concepts-security.md)). For the recipient to make changes to the graph and the data, they must have the **Azure Digital Twins Data Owner** role on the instance.
-### Link with a query
+### Link to a specific query
You may want to share an environment and specify a query to execute upon landing, to highlight a subgraph or custom view for a teammate. To do so, start with the URL for the environment and add the query text to the URL as a querystring parameter:
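For example, assuming the querystring parameter is named `query` and that the query text is URL-encoded (both assumptions to verify against your version of Azure Digital Twins Explorer), a link that runs `SELECT * FROM digitaltwins` on landing might look like this:

`https://explorer.digitaltwins.azure.net/?tid=<tenant-ID>&eid=<Azure-Digital-Twins-host-name>&query=SELECT%20*%20FROM%20digitaltwins`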
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
The following sample scripts can be referenced to suit your migration scenario u
|Scripting language |Migration scenario |Azure Samples link |
||||
|PowerShell |SQL Server assessment |[Azure-Samples/data-migration-sql/PowerShell/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-assessment.md) |
+|PowerShell |Azure recommendation (SKU) for SQL Server |[Azure-Samples/data-migration-sql/PowerShell/sql-server-sku-recommendation](https://github.com/Azure-Samples/data-migration-sql/blob/main/PowerShell/sql-server-sku-recommendation.md) |
|PowerShell |SQL Server to **Azure SQL Managed Instance** (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-fileshare.md) | |PowerShell |SQL Server to **Azure SQL Managed Instance** (using Azure storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-mi-blob.md) | |PowerShell |SQL Server to **SQL Server on Azure Virtual Machines** (using file share) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-fileshare.md) | |PowerShell |SQL Server to **SQL Server on Azure Virtual Machines** (using Azure Storage) |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-vm-blob.md) |
-|PowerShell |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-db) |
+|PowerShell |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/PowerShell/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/sql-server-to-sql-db.md) |
|PowerShell |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/PowerShell/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/) |
|PowerShell |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/PowerShell/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/PowerShell/scripts/multiple%20databases/) |
|CLI |SQL Server assessment |[Azure-Samples/data-migration-sql/CLI/sql-server-assessment](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-assessment.md) |
+|CLI |Azure recommendation (SKU) for SQL Server |[Azure-Samples/data-migration-sql/CLI/sql-server-sku-recommendation](https://github.com/Azure-Samples/data-migration-sql/blob/main/CLI/sql-server-sku-recommendation.md) |
|CLI |SQL Server to **Azure SQL Managed Instance** (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-fileshare.md) | |CLI |SQL Server to **Azure SQL Managed Instance** (using Azure storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-mi-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-mi-blob.md) | |CLI |SQL Server to **SQL Server on Azure Virtual Machines** (using file share) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-fileshare](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-fileshare.md) | |CLI |SQL Server to **SQL Server on Azure Virtual Machines** (using Azure Storage) |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-vm-blob](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-vm-blob.md) |
-|CLI |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-db) |
+|CLI |SQL Server to **Azure SQL Database** |[Azure-Samples/data-migration-sql/CLI/sql-server-to-sql-db](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/sql-server-to-sql-db.md) |
|CLI |Sample: End-to-End migration automation |[Azure-Samples/data-migration-sql/CLI/scripts/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/) |
|CLI |Sample: End-to-End migration automation for multiple databases |[Azure-Samples/data-migration-sql/CLI/scripts/multiple%20databases/](https://github.com/Azure-Samples/data-migration-sql/tree/main/CLI/scripts/multiple%20databases/) |
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
The key benefits of using the Azure SQL migration extension for Azure Data Studi
1. Monitor all migrations started in Azure Data Studio from the Azure portal. To learn more, see [Monitor database migration progress from the Azure portal](#monitor-database-migration-progress-from-the-azure-portal).
1. Leverage the capabilities of the Azure SQL migration extension to assess and migrate databases at scale using automation with Azure PowerShell and Azure CLI. To learn more, see [Migrate databases at scale using automation](migration-dms-powershell-cli.md).
+The following 16-minute video explains recent updates and features added to the Azure SQL migration extension in Azure Data Studio, including the new workflow for SQL Server database assessments and Azure recommendations described in this article.
+
+<iframe src="https://aka.ms/docs/player?show=data-exposed&ep=assess-get-recommendations-migrate-sql-server-to-azure-using-azure-data-studio" width="800" height="450"></iframe>
+
## Architecture of Azure SQL migration extension for Azure Data Studio

Azure Database Migration Service (DMS) is one of the core components in the overall architecture. DMS provides a reliable migration orchestrator to enable database migrations to Azure SQL.
event-grid Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/compare-messaging-services.md
Title: Compare Azure messaging services description: Describes the three Azure messaging services - Azure Event Grid, Event Hubs, and Service Bus. Recommends which service to use for different scenarios. Previously updated : 07/22/2021 Last updated : 04/26/2022 # Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
Event Grid is an eventing backplane that enables event-driven, reactive programm
Event Grid is deeply integrated with Azure services and can be integrated with third-party services. It simplifies event consumption and lowers costs by eliminating the need for constant polling. Event Grid efficiently and reliably routes events from Azure and non-Azure resources. It distributes the events to registered subscriber endpoints. The event message has the information you need to react to changes in services and applications. Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated.
-It has the following characteristics:
+It has the following characteristics:
- Dynamically scalable - Low cost - Serverless - At least once delivery of an event
-For more information, see [Event Grid overview](overview.md).
+Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever it's deployed, on-premises or in the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
## Azure Event Hubs Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to various stream-processing infrastructures and analytics services. It's available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing, and repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
If you need to publish events to a specific partition within an event hub, set t
| Header name | Header type | | :-- | :-- |
-|`PartitionKey` | Static |
+|`PartitionKey` | Static or dynamic |
You can also specify custom properties when sending messages to an event hub. Don't use the `aeg-` prefix for the property name, as it's used by system properties in message headers. For a list of message header properties, see [Event Hubs as an event handler](handler-event-hubs.md#message-headers).
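For illustration only, the following Azure CLI sketch configures a static `PartitionKey` delivery property when creating an event subscription with an event hub endpoint; the same mechanism applies to custom properties. The resource IDs are placeholders, and the exact `--delivery-attribute-mapping` argument shape may vary by CLI version:

```azurecli
# Hedged sketch: deliver events to an event hub with a static PartitionKey
# delivery property. All resource IDs below are placeholders.
az eventgrid event-subscription create \
  --name es-to-eventhub \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --endpoint-type eventhub \
  --endpoint "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<ns>/eventhubs/<hub>" \
  --delivery-attribute-mapping PartitionKey static <partition-key-value>
```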
event-hubs Process Data Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/process-data-azure-stream-analytics.md
Here are the key benefits of Azure Event Hubs and Azure Stream Analytics integra
![Set output and start the job](./media/process-data-azure-stream-analytics/set-output-start-job.png)
-## Known limitations
-While testing your query, the test results take approximately 6 seconds to load. We're working on improving the performance of testing. However, when deployed in production, Azure Stream Analytics will have subsecond latency.
+## Access
+Issue: The user can't access preview data because they don't have the right permissions on the subscription.
+
+Option 1: The user who wants to preview incoming data needs to be added as a Contributor on the subscription.
+
+Option 2: The user needs to be assigned the Stream Analytics Query Tester role on the subscription. Navigate to Access control for the subscription, and then add a new role assignment for the user with the "Stream Analytics Query Tester" role.
+
+Option 3: The user can create an Azure Stream Analytics job. Set the input to this event hub, and then navigate to "Query" to preview incoming data from the event hub.
+
+Option 4: The admin can create a custom role on the subscription. Add the following permissions to the custom role, and then add the user to the new custom role (see the sketch after the following screenshot).
+![Add permissions to custom role](./media/process-data-azure-stream-analytics/custom-role.png)
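The following Azure CLI sketch shows the general shape of creating such a custom role. The role name is illustrative, and the `Actions` list is a placeholder; use the permissions shown in the screenshot above:

```azurecli
# Hedged sketch: create a custom role at subscription scope. Replace the
# placeholder action with the permissions shown in the preceding screenshot.
az role definition create --role-definition '{
  "Name": "Stream Analytics Data Previewer",
  "Description": "Allows previewing incoming event hub data in Stream Analytics.",
  "Actions": [
    "<permissions-from-screenshot>"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```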
+ ## Streaming units Your Azure Stream Analytics job defaults to three streaming units (SUs). To adjust this setting, select **Scale** on the left menu in the **Stream Analytics job** page in the Azure portal. To learn more about streaming units, see [Understand and adjust Streaming Units](../stream-analytics/stream-analytics-streaming-unit-consumption.md).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 08/17/2021 Last updated : 04/27/2022 ++ # Azure Policy assignment structure
initiatives. The policy assignment can determine the values of parameters for th
resources at assignment time, making it possible to reuse policy definitions that address the same resource properties with different needs for compliance.
-You use JSON to create a policy assignment. The policy assignment contains elements for:
+You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:
- display name - description
reducing the duplication and complexity of policy definitions while providing fl
## Identity For policy assignments with effect set to **deployIfNotExists** or **modify**, an identity property is required to remediate non-compliant resources. When using identity, the user must also specify a location for the assignment.
+> [!NOTE]
+> A single policy assignment can be associated with only one system- or user-assigned managed identity. However, that identity can be assigned more than one role if necessary.
+ ```json
-# System assigned identity
+# System-assigned identity
"identity": { "type": "SystemAssigned" }
-# User assigned identity
+# User-assigned identity
"identity": { "type": "UserAssigned", "userAssignedIdentities": {
For policy assignments with effect set to **deployIfNotExists** or **modify**, i
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
-
+
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
Title: Remediate non-compliant resources description: This guide walks you through the remediation of resources that are non-compliant to policies in Azure Policy. Previously updated : 12/1/2021 Last updated : 04/27/2022 ++ # Remediate non-compliant resources with Azure Policy
understand and accomplish remediation with Azure Policy.
When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment.
-Policy assignments can either use a system assigned managed identity that is created by the policy service or a user assigned identity provided by the user. The managed identity needs to be assigned the minimum role(s) required to remediate resources.
+Policy assignments use [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) for Azure resource authorization. You can use either a system-assigned managed identity that is created by the policy service or a user-assigned identity provided by the user. The managed identity needs to be assigned the minimum role-based access control (RBAC) role(s) required to remediate resources.
If the managed identity is missing roles, an error is displayed during the assignment of the policy or an initiative. When using the portal, Azure Policy
-automatically grants the managed identity the listed roles once assignment starts. When using SDK,
+automatically grants the managed identity the listed roles once assignment starts. When using an Azure software development kit (SDK),
the roles must manually be granted to the managed identity. The _location_ of the managed identity doesn't impact its operation with Azure Policy. > [!IMPORTANT] > In the following scenarios, the assignment's managed identity must be
example](../concepts/effects.md#deployifnotexists-example) or the
The **roleDefinitionIds** property uses the full resource identifier and doesn't take the short **roleName** of the role. To get the ID for the 'Contributor' role in your environment, use the
-following code:
+following Azure CLI code:
```azurecli-interactive az role definition list --name "Contributor"
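As a small follow-on sketch, you can narrow the output to just the ID value that **roleDefinitionIds** expects:

```azurecli
# Hedged sketch: return only the full role definition resource ID.
az role definition list --name "Contributor" --query "[].id" --output tsv
```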
az role definition list --name "Contributor"
## Manually configure the managed identity
-When creating an assignment using the portal, Azure Policy can both generate a managed identity and
-grant it the roles defined in **roleDefinitionIds**. In the following conditions, steps to create
+When creating an assignment using the portal, Azure Policy can generate a system-assigned managed identity and
+grant it the roles defined in **roleDefinitionIds**. Alternatively, you can specify a user-assigned managed identity that receives the same role assignment.
+
+ > [!NOTE]
+ > Each Azure Policy assignment can be associated with only one managed identity. However, the managed identity can be assigned multiple roles.
+
+In the following conditions, steps to create
the managed identity and assign it permissions must be done manually: - While using the SDK (such as Azure PowerShell)
the managed identity and assign it permissions must be done manually:
## Configure a managed identity through the Azure portal
-When creating an assignment using the portal, you can select either a system assigned managed identity or a user assigned managed identity.
+When creating an assignment using the portal, you can select either a system-assigned managed identity or a user-assigned managed identity.
-To set a system assigned managed identity in the portal:
+To set a system-assigned managed identity in the portal:
1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **System assigned managed identity** is selected. 1. Specify the location at which the managed identity is to be located.
-To set a user assigned managed identity in the portal:
+To set a user-assigned managed identity in the portal:
1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **User assigned managed identity** is selected.
$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL D
``` The `$assignment` variable now contains the principal ID of the managed identity along with the standard values returned when creating a policy assignment. It can be accessed through
-`$assignment.Identity.PrincipalId` for system assigned managed identities and `$assignment.Identity.UserAssignedIdentities[$userassignedidentityid].PrincipalId` for user assigned managed identities.
+`$assignment.Identity.PrincipalId` for system-assigned managed identities and `$assignment.Identity.UserAssignedIdentities[$userassignedidentityid].PrincipalId` for user-assigned managed identities.
### Grant a managed identity defined roles with PowerShell
governance Policy Devops Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-devops-pipelines.md
Title: "Tutorial: Implement Azure Policy with Azure DevOps" description: In this tutorial, you implement an Azure Policy with an Azure DevOps release pipeline. Previously updated : 03/24/2022 Last updated : 04/27/2022
and [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
1. In Azure DevOps, create a release pipeline that contains at least one stage, or open an existing release pipeline.
-1. Add a pre- or post-deployment condition that includes the **Security and compliance assessment** task as a gate.
+1. Add a pre- or post-deployment condition that includes the **Check Azure Policy compliance** task as a gate.
[More details](/azure/devops/pipelines/release/deploy-using-approvals#set-up-gates). ![Screenshot of Azure Policy Gate.](../media/devops-policy/azure-policy-gate.png)
hdinsight Hdinsight Apps Use Edge Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-use-edge-node.md
description: How to add an empty edge node to an HDInsight cluster. Used as a cl
Previously updated : 04/16/2020 Last updated : 04/27/2022 # Use empty edge nodes on Apache Hadoop clusters in HDInsight
hdinsight Hdinsight Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-capacity-planning.md
description: Identify key questions for capacity and performance planning of an
Previously updated : 05/07/2020 Last updated : 04/27/2022 # Capacity planning for HDInsight clusters
hdinsight Hdinsight Selecting Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-selecting-vm-size.md
keywords: vm sizes, cluster sizes, cluster configuration
Previously updated : 10/09/2019 Last updated : 04/27/2022 # Selecting the right VM size for your Azure HDInsight cluster
For more information on benchmarking for VM SKUs and cluster sizes, see [Cluster
## Next steps - [Azure HDInsight supported node configurations](hdinsight-supported-node-configuration.md)-- [Sizes for Linux virtual machines in Azure](../virtual-machines/sizes.md)
+- [Sizes for Linux virtual machines in Azure](../virtual-machines/sizes.md)
hdinsight Apache Kafka Log Analytics Operations Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-log-analytics-operations-management.md
description: Learn how to use Azure Monitor logs to analyze logs from Apache Kaf
Previously updated : 02/17/2020 Last updated : 04/27/2022 # Analyze logs for Apache Kafka on HDInsight
For more information on working with Apache Kafka, see the following documents:
* [Mirror Apache Kafka between HDInsight clusters](apache-kafka-mirroring.md) * [Increase the scale of Apache Kafka on HDInsight](apache-kafka-scalability.md) * [Use Apache Spark streaming (DStreams) with Apache Kafka](../hdinsight-apache-spark-with-kafka.md)
-* [Use Apache Spark structured streaming with Apache Kafka](../hdinsight-apache-kafka-spark-structured-streaming.md)
+* [Use Apache Spark structured streaming with Apache Kafka](../hdinsight-apache-kafka-spark-structured-streaming.md)
hdinsight Apache Spark Streaming Exactly Once https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-exactly-once.md
description: How to set up Apache Spark Streaming to process an event once and o
Previously updated : 11/15/2018 Last updated : 04/27/2022 # Create Apache Spark Streaming jobs with exactly-once event processing
iot-develop Quickstart Devkit Renesas Rx65n 2Mb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md
To install the tools:
1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
- *getting-started\tools\get-toolchain.bat*
+ *getting-started\tools\get-toolchain-rx.bat*
1. Add the RX compiler to the Windows Path:
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
IoT Hub only supports file upload APIs for device identities, not module identit
For more information on uploading files with IoT Hub, see [Upload files with IoT Hub](../iot-hub/iot-hub-devguide-file-upload.md).
+### Edge agent environment variables
+Changes made in `config.toml` to `edgeAgent` environment variables like the `hostname` aren't applied to `edgeAgent` if the container already exists. To apply these changes, remove the `edgeAgent` container using the command `sudo docker rm -f edgeAgent`. The IoT Edge daemon recreates the container and starts `edgeAgent` in about a minute.
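As a minimal sketch (assuming IoT Edge version 1.2 or later on a Linux device), the sequence looks like this:

```bash
# Hedged sketch: apply config.toml changes, then force edgeAgent to be recreated.
sudo iotedge config apply        # reapply settings from /etc/aziot/config.toml
sudo docker rm -f edgeAgent      # remove the existing edgeAgent container
# After about a minute, verify that the daemon recreated the container:
sudo docker ps --filter "name=edgeAgent"
```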
+ <!-- 1.1 --> :::moniker range="iotedge-2018-06" ### AMQP transport
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
If your devices are going to be deployed on a network that uses a proxy server,
* **Helpful** * Set up logs and diagnostics
- * Place limits on log size
+ * Set up default logging driver
* Consider tests and CI/CD pipelines ### Set up logs and diagnostics
Starting with version 1.2, IoT Edge relies on multiple daemons. While each daemo
When you're testing an IoT Edge deployment, you can usually access your devices to retrieve logs and troubleshoot. In a deployment scenario, you may not have that option. Consider how you're going to gather information about your devices in production. One option is to use a logging module that collects information from the other modules and sends it to the cloud. One example of a logging module is [logspout-loganalytics](https://github.com/veyalla/logspout-loganalytics), or you can design your own.
-### Place limits on log size
+### Set up default logging driver
-By default the Moby container engine does not set container log size limits. Over time this can lead to the device filling up with logs and running out of disk space. Consider the following options to prevent this:
+By default, the Moby container engine does not set container log size limits. Over time, this can lead to the device filling up with logs and running out of disk space. Configure your container engine to use the [`local` logging driver](https://docs.docker.com/config/containers/logging/local/) as your logging mechanism. The `local` logging driver offers a default log size limit, performs log rotation by default, and uses a more efficient file format, which helps prevent disk space exhaustion. You may also choose to use different [logging drivers](https://docs.docker.com/config/containers/logging/configure/) and set different size limits based on your needs.
-#### Option: Set global limits that apply to all container modules
+#### Option: Configure the default logging driver for all container modules
-You can limit the size of all container logfiles in the container engine log options. The following example sets the log driver to `json-file` (recommended) with limits on size and number of files:
+You can configure your container engine to use a specific logging driver by setting the value of `log-driver` to the name of the logging driver in the `daemon.json` file. The following example sets the default logging driver to the `local` driver (recommended).
```JSON {
- "log-driver": "json-file",
+ "log-driver": "local"
+}
+```
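Changes to `daemon.json` take effect only after the container engine restarts, and they apply only to containers created afterward. A minimal sketch to apply and verify the change (assuming a systemd-based Linux host running Docker or Moby):

```bash
# Hedged sketch: restart the container engine and confirm the default driver.
sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'   # expected output: local
```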
+You can also set the `log-opts` keys to appropriate values in the `daemon.json` file. The following example sets the log driver to `local` and sets the `max-size` and `max-file` options.
+
+```JSON
+{
+ "log-driver": "local",
"log-opts": { "max-size": "10m", "max-file": "3"
You can do so in the **createOptions** of each module. For example:
"createOptions": { "HostConfig": { "LogConfig": {
- "Type": "json-file",
+ "Type": "local",
"Config": { "max-size": "10m", "max-file": "3"
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
sudo iotedge check
The troubleshooting tool runs many checks that are sorted into these three categories:
-* *Configuration checks* examines details that could prevent IoT Edge devices from connecting to the cloud, including issues with the config file and the container engine.
+* *Configuration checks* examine details that could prevent IoT Edge devices from connecting to the cloud, including issues with the config file and the container engine.
* *Connection checks* verify that the IoT Edge runtime can access ports on the host device and that all the IoT Edge components can connect to the IoT Hub. This set of checks returns errors if the IoT Edge device is behind a proxy. * *Production readiness checks* look for recommended production best practices, such as the state of device certificate authority (CA) certificates and module log file configuration.
If you're still troubleshooting, wait until after you've inspected the container
docker rm --force <container name> ```
-For ongoing logs maintenance and production scenarios, [place limits on log size](production-checklist.md#place-limits-on-log-size).
+For ongoing logs maintenance and production scenarios, see [Set up default logging driver](production-checklist.md#set-up-default-logging-driver).
## View the messages going through the IoT Edge hub
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Previously updated : 01/31/2022 Last updated : 04/27/2022
The following example shows a device twin JSON document:
}, "version": 2, "tags": {
- "$etag": "123",
"deploymentLocation": { "building": "43", "floor": "1"
The following example shows a device twin JSON document:
} ```
-In the root object are the device identity properties, and container objects for `tags` and both `reported` and `desired` properties. The `properties` container contains some read-only elements (`$metadata`, `$etag`, and `$version`) described in the [Device twin metadata](iot-hub-devguide-device-twins.md#device-twin-metadata) and [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) sections.
+In the root object are the device identity properties, and container objects for `tags` and both `reported` and `desired` properties. The `properties` container contains some read-only elements (`$metadata` and `$version`) described in the [Device twin metadata](iot-hub-devguide-device-twins.md#device-twin-metadata) and [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) sections.
### Reported property example
Tags, desired properties, and reported properties are JSON objects with the foll
## Device twin size
-IoT Hub enforces an 8 KB size limit on the value of `tags`, and a 32 KB size limit each on the value of `properties/desired` and `properties/reported`. These totals are exclusive of read-only elements like `$etag`, `$version`, and `$metadata/$lastUpdated`.
+IoT Hub enforces an 8 KB size limit on the value of `tags`, and a 32 KB size limit each on the value of `properties/desired` and `properties/reported`. These totals are exclusive of read-only elements like `$version` and `$metadata/$lastUpdated`.
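As a rough, illustrative check (the serialized JSON length only approximates the service's size calculation, and this assumes the Azure CLI `azure-iot` extension), you can gauge the size of a twin's `tags`:

```azurecli
# Hedged sketch: roughly measure the serialized size of a device twin's tags.
az iot hub device-twin show --hub-name <hub-name> --device-id <device-id> \
  --query tags --output json | wc -c
```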
Twin size is computed as follows:
This information is kept at every level (not just the leaves of the JSON structu
## Optimistic concurrency Tags, desired, and reported properties all support optimistic concurrency.
-Tags have an ETag, as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the tag's JSON representation. You can use ETags in conditional update operations from the solution back end to ensure consistency.
-Device twin desired and reported properties do not have ETags, but have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a device app for a reported property or the solution back end for a desired property.
+Device twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
-Versions are also useful when an observing agent (such as the device app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The [Device reconnection flow section](iot-hub-devguide-device-twins.md#device-reconnection-flow) provides more information.
+Device twin desired and reported properties also have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a device app for a reported property or the solution back end for a desired property.
+
+Versions are also useful when an observing agent (such as the device app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The [Device reconnection flow section](#device-reconnection-flow) provides more information.
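As an illustrative sketch (assuming the Azure CLI `azure-iot` extension; the ETag value is a placeholder read from a previous `show`), a back end could make a conditional tag update that fails if the twin changed since it was last read:

```azurecli
# Hedged sketch: conditionally update tags, guarded by the twin's current ETag.
az iot hub device-twin update \
  --hub-name <hub-name> \
  --device-id <device-id> \
  --tags '{"deploymentLocation": {"building": "44"}}' \
  --etag "<etag-from-last-read>"
```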
## Device reconnection flow
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md
Previously updated : 09/29/2020 Last updated : 04/27/2022
The following example shows a module twin JSON document:
}, "version": 2, "tags": {
- "$etag": "123",
"deploymentLocation": { "building": "43", "floor": "1"
The following example shows a module twin JSON document:
} ```
-In the root object are the module identity properties, and container objects for `tags` and both `reported` and `desired` properties. The `properties` container contains some read-only elements (`$metadata`, `$etag`, and `$version`) described in the [Module twin metadata](iot-hub-devguide-module-twins.md#module-twin-metadata) and [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) sections.
+In the root object are the module identity properties, and container objects for `tags` and both `reported` and `desired` properties. The `properties` container contains some read-only elements (`$metadata` and `$version`) described in the [Module twin metadata](iot-hub-devguide-module-twins.md#module-twin-metadata) and [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) sections.
### Reported property example
In the previous example, the `telemetryConfig` module twin desired and reported
> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties). ## Back-end operations The solution back end operates on the module twin using the following atomic operations, exposed through HTTPS: * **Retrieve module twin by ID**. This operation returns the module twin document, including tags and desired and reported system properties.
Tags, desired properties, and reported properties are JSON objects with the foll
## Module twin size
-IoT Hub enforces an 8 KB size limit on the value of `tags`, and a 32 KB size limit each on the value of `properties/desired` and `properties/reported`. These totals are exclusive of read-only elements like `$etag`, `$version`, and `$metadata/$lastUpdated`.
+IoT Hub enforces an 8 KB size limit on the value of `tags`, and a 32 KB size limit each on the value of `properties/desired` and `properties/reported`. These totals are exclusive of read-only elements like `$version` and `$metadata/$lastUpdated`.
Twin size is computed as follows:
This information is kept at every level (not just the leaves of the JSON structu
## Optimistic concurrency Tags, desired, and reported properties all support optimistic concurrency.
-Tags have an ETag, as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the tag's JSON representation. You can use ETags in conditional update operations from the solution back end to ensure consistency.
-Module twin desired and reported properties do not have ETags, but have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a module app for a reported property or the solution back end for a desired property.
+Module twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
+
+Module twin desired and reported properties also have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a module app for a reported property or the solution back end for a desired property.
-Versions are also useful when an observing agent (such as the module app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The section [Device reconnection flow](iot-hub-devguide-device-twins.md#device-reconnection-flow) provides more information.
+Versions are also useful when an observing agent (such as the module app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The section [Module reconnection flow](#module-reconnection-flow) provides more information.
## Module reconnection flow
lab-services How To Attach Detach Shared Image Gallery 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery-1.md
Here are the couple of scenarios supported by this feature:
- A lab account admin attaches a shared image gallery to the lab account. The VM image is uploaded to the shared image gallery outside the context of a lab. The lab admin has to enable the use of the image on the lab account. Lab creators can use that image from the shared image gallery when creating labs. - A lab account admin attaches a shared image gallery to the lab account. A lab creator (educator) saves the customized image of their lab to the shared image gallery. Then, other lab creators can select this image from the shared image gallery to create a template for their labs.
- When an image is saved to a shared image gallery, Azure Lab Services replicates the saved image to other regions available in the same [geography](https://azure.microsoft.com/global-infrastructure/geographies/). It ensures that the image is available for labs created in other regions in the same geography. Saving images to a shared image gallery incurs an additional cost, which includes cost for all replicated images. This cost is separate from the Azure Lab Services usage cost. For more information about Shared Image Gallery pricing, see [Shared Image Gallery ΓÇô Billing](../virtual-machines/shared-image-galleries.md#billing).
+ When an image is saved to a shared image gallery, Azure Lab Services replicates the saved image to other regions available in the same [geography](https://azure.microsoft.com/global-infrastructure/geographies/). It ensures that the image is available for labs created in other regions in the same geography. Saving images to a shared image gallery incurs an additional cost, which includes cost for all replicated images. This cost is separate from the Azure Lab Services usage cost. For more information about Shared Image Gallery pricing, see [Azure Compute Gallery – Billing](../virtual-machines/azure-compute-gallery.md#billing).
> [!IMPORTANT] > While using a Shared Image Gallery, Azure Lab Services supports only images with less than 128 GB of OS Disk Space. Images with more than 128 GB of disk space or multiple disks will not be shown in the list of virtual machine images during lab creation.
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
This article shows you how to attach or detach an Azure Compute Gallery to a lab
> [!IMPORTANT] > Lab plan administrators must manually [replicate images](/azure/virtual-machines/shared-image-galleries) to other regions in the compute gallery. Replicate an Azure Compute Gallery image to the same region as the lab plan to be shown in the list of virtual machine images during lab creation.
-Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. For more information about Azure Compute Gallery pricing, see [Azure Compute Gallery ΓÇô Billing](../virtual-machines/shared-image-galleries.md#billing).
+Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. For more information about Azure Compute Gallery pricing, see [Azure Compute Gallery – Billing](../virtual-machines/azure-compute-gallery.md#billing).
## Scenarios
logic-apps Quickstart Create Deploy Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-azure-resource-manager-template.md
Title: Quickstart - Create and deploy logic app workflow by using Azure Resource Manager templates
-description: How to create and deploy a logic app using Azure Resource Manager templates.
+ Title: Quickstart - Create Consumption logic app workflow with ARM templates
+description: How to create and deploy a Consumption logic app workflow with Azure Resource Manager templates (ARM templates) in multi-tenant Azure Logic Apps.
ms.suite: integration Previously updated : 04/01/2021
-#Customer intent: As a developer, I want to automate creating and deploying a logic app workflow to whichever environment that I want by using Azure Resource Manager templates.
Last updated : 04/27/2022
+#Customer intent: As a developer, I want to create and deploy an automated workflow in multi-tenant Azure Logic Apps with Azure Resource Manager templates (ARM templates).
-# Quickstart: Create and deploy a logic app workflow by using an ARM template
+# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with an ARM template
-[Azure Logic Apps](../logic-apps/logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by selecting from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying an Azure Resource Manager template (ARM template) to create a basic logic app that checks the status for Azure on an hourly schedule.
+[Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying an Azure Resource Manager template (ARM template) to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences).
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites, and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.logic%2Flogic-app-create%2Fazuredeploy.json)
If you don't have an Azure subscription, create a [free Azure account](https://a
This quickstart uses the [**Create a logic app**](https://azure.microsoft.com/resources/templates/logic-app-create/) template, which you can find in the [Azure Quickstart Templates Gallery](https://azure.microsoft.com/resources/templates) but is too long to show here. Instead, you can review the quickstart template's ["azuredeploy.json file"](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json) in the templates gallery.
-The quickstart template creates a logic app workflow that uses the Recurrence trigger, which is set to run every hour, and an HTTP [*built-in* action](../connectors/built-in.md), which calls a URL that returns the status for Azure. A built-in action is native to the Azure Logic Apps platform.
+The quickstart template creates a Consumption logic app workflow that uses the [*built-in*](../connectors/built-in.md) Recurrence trigger, which is set to run every hour, and a built-in HTTP action, which calls a URL that returns the status for Azure. Built-in operations run natively on the Azure Logic Apps platform.
This template creates the following Azure resource:
-* [**Microsoft.Logic/workflows**](/azure/templates/microsoft.logic/workflows), which creates the workflow for a logic app.
+* [**Microsoft.Logic/workflows**](/azure/templates/microsoft.logic/workflows), which creates the workflow for a Consumption logic app resource.
To find more quickstart templates for Azure Logic Apps, review the [Microsoft.Logic](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Logic) templates in the gallery.
Follow the option that you want to use for deploying the quickstart template:
| Option | Description | |--|-|
-| [Azure portal](../logic-apps/quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-portal#deploy-template) | If your Azure environment meets the prerequisites, and you're familiar with using ARM templates, these steps help you sign in directly to Azure and open the quickstart template in the Azure portal. For more information, see [Deploy resources with ARM templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md). |
-| [Azure CLI](../logic-apps/quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-cli#deploy-template) | The Azure CLI provides a command-line experience for creating and managing Azure resources. To run these commands, you need Azure CLI version 2.6 or later. To check your CLI version, type `az --version`. For more information, see these topics: <p><p>- [What is Azure CLI](/cli/azure/what-is-azure-cli) <br>- [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli) |
-| [Azure PowerShell](../logic-apps/quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-powershell#deploy-template) | Azure PowerShell provides a set of cmdlets that use the Azure Resource Manager model for managing your Azure resources. For more information, see these topics: <p><p>- [Azure PowerShell Overview](/powershell/azure/azurerm/overview) <br>- [Introducing the Azure PowerShell Az module](/powershell/azure/new-azureps-module-az) <br>- [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) |
-| [Azure Resource Management REST API](../logic-apps/quickstart-create-deploy-azure-resource-manager-template.md?tabs=rest-api#deploy-template) | Azure provides Representational State Transfer (REST) APIs, which are service endpoints that support HTTP operations (methods) that you use to create, retrieve, update, or delete access to service resources. For more information, see [Get started with Azure REST API](/rest/api/azure/). |
+| [Azure portal](quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-portal#deploy-template) | If your Azure environment meets the prerequisites, and you're familiar with using ARM templates, these steps help you sign in directly to Azure and open the quickstart template in the Azure portal. For more information, see [Deploy resources with ARM templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md). |
+| [Azure CLI](quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-cli#deploy-template) | The Azure CLI provides a command-line experience for creating and managing Azure resources. To run these commands, you need Azure CLI version 2.6 or later. To check your CLI version, enter **az --version**. For more information, see the following documentation: <br><br>- [What is Azure CLI](/cli/azure/what-is-azure-cli) <br>- [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli) |
+| [Azure PowerShell](quickstart-create-deploy-azure-resource-manager-template.md?tabs=azure-powershell#deploy-template) | Azure PowerShell provides a set of cmdlets that use the Azure Resource Manager model for managing your Azure resources. For more information, see the following documentation: <br><br>- [Azure PowerShell Overview](/powershell/azure/azurerm/overview) <br>- [Introducing the Azure PowerShell Az module](/powershell/azure/new-azureps-module-az) <br>- [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) |
+| [Azure Resource Management REST API](quickstart-create-deploy-azure-resource-manager-template.md?tabs=rest-api#deploy-template) | Azure provides Representational State Transfer (REST) APIs, which are service endpoints that support HTTP operations (methods) that you use to create, retrieve, update, or delete access to service resources. For more information, see [Get started with Azure REST API](/rest/api/azure/). |
||| <a name="deploy-azure-portal"></a> #### [Portal](#tab/azure-portal)
-1. Select the following image to sign in with your Azure account and open the quickstart template in the Azure portal:
+1. To sign in with your Azure account and open the quickstart template in the Azure portal, select the following image:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.logic%2Flogic-app-create%2Fazuredeploy.json)
-1. In the portal, on the **Create a logic app using a template** page, enter or select these values:
+1. In the portal, on the **Create a logic app using a template** page, enter or select the following values:
| Property | Value | Description | |-|-|-| | **Subscription** | <*Azure-subscription-name*> | The name for the Azure subscription to use |
- | **Resource group** | <*Azure-resource-group-name*> | The name for a new or existing Azure resource group. This example uses `Check-Azure-Status-RG`. |
- | **Region** | <*Azure-region*> | The Azure datacenter region to use your logic app. This example uses `West US`. |
- | **Logic App Name** | <*logic-app-name*> | The name to use for your logic app. This example uses `Check-Azure-Status-LA`. |
- | **Test Uri** | <*test-URI*> | The URI for the service to call based on a specific schedule. This example uses `https://status.azure.com/en-us/status/`, which is the Azure status page. |
- | **Location** | <*Azure-region-for-all-resources*> | The Azure region to use for all resources, if different from the default value. This example uses the default value, `[resourceGroup().location]`, which is the resource group location. |
+ | **Resource group** | <*Azure-resource-group-name*> | The name for a new or existing Azure resource group. This example uses **Check-Azure-Status-RG**. |
+ | **Region** | <*Azure-region*> | The Azure datacenter region to use for your logic app. This example uses **West US**. |
+ | **Logic App Name** | <*logic-app-name*> | The name to use for your logic app. This example uses **Check-Azure-Status-LA**. |
+ | **Test Uri** | <*test-URI*> | The URI for the service to call based on a specific schedule. This example uses **https://status.azure.com/en-us/status/**, which is the Azure status page. |
+ | **Location** | <*Azure-region-for-all-resources*> | The Azure region to use for all resources, if different from the default value. This example uses the default value, **[resourceGroup().location]**, which is the resource group location. |
||||
- Here is how the page looks with the values used in this example:
+ The following example shows how the page looks with sample values:
- ![Provide information for quickstart template](./media/quickstart-create-deploy-azure-resource-manager-template/create-logic-app-template-portal.png)
+ ![Screenshot showing Azure portal with "Create a Logic App using a template" properties and sample values.](./media/quickstart-create-deploy-azure-resource-manager-template/create-logic-app-template-portal.png)
1. When you're done, select **Review + create**.
echo "Press [ENTER] to continue ..." &&
read ```
-For more information, see these topics:
+For more information, see the following documentation:
* [Azure CLI: az deployment group](/cli/azure/deployment/group) * [Deploy resources with ARM templates and Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri
Read-Host -Prompt "Press [ENTER] to continue ..." ```
-For more information, see these topics:
+For more information, see the following documentation:
* [Azure PowerShell: New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) * [Azure PowerShell: New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment)
For more information, see these topics:
| Value | Description | |-|-|
- | `subscriptionId`| The GUID for the Azure subscription that you want to use |
- | `resourceGroupName` | The name for the Azure resource group to create. This example uses `Check-Azure-Status-RG`. |
+ | <*subscriptionId*> | The GUID for the Azure subscription that you want to use |
+ | <*resourceGroupName*> | The name for the Azure resource group to create. This example uses **Check-Azure-Status-RG**. |
||| For example:
For more information, see these topics:
PUT https://management.azure.com/subscriptions/xxxxXXXXxxxxXXXXX/resourcegroups/Check-Azure-Status-RG?api-version=2019-10-01 ```
- For more information, see these topics:
+ For more information, see the following documentation:
* [Azure REST API Reference - How to call Azure REST APIs](/rest/api/azure/) * [Resource Management REST API: Resource Groups - Create Or Update](/rest/api/resources/resourcegroups/createorupdate).
For more information, see these topics:
| Value | Description | |-|-|
- | `subscriptionId`| The GUID for the Azure subscription that you want to use |
- | `resourceGroupName` | The name for the Azure resource group to use. This example uses `Check-Azure-Status-RG`. |
- | `deploymentName` | The name to use for your deployment. This example uses `Check-Azure-Status-LA`. |
+ | <*subscriptionId*>| The GUID for the Azure subscription that you want to use |
+ | <*resourceGroupName*> | The name for the Azure resource group to use. This example uses **Check-Azure-Status-RG**. |
+ | <*deploymentName*> | The name to use for your deployment. This example uses **Check-Azure-Status-LA**. |
||| For example:
For more information, see these topics:
| Property | Value | Description | |-|-|-|
- | `location`| <*Azure-region*> | The Azure region to use for deployment. This example uses `West US`. |
- | `templateLink` : `uri` | <*quickstart-template-URL*> | The URL location for the quickstart template to use for deployment: <p><p>`https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json`. |
- | `parametersLink` : `uri` | <*quickstart-template-parameter-file-URL*> | The URL location for the quickstart template's parameter file to use for deployment: <p><p>`https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.parameters.json` <p><p>For more information about the Resource Manager parameter file, see these topics: <p><p>- [Create Resource Manager parameter file](../azure-resource-manager/templates/parameter-files.md) <br>- [Tutorial: Use parameter files to deploy your ARM template](../azure-resource-manager/templates/template-tutorial-use-parameter-file.md) |
- | `mode` | <*deployment-mode*> | Run either a incremental update or complete update. This example uses `Incremental`, which is the default value. For more information, see [Azure Resource Manager deployment modes](../azure-resource-manager/templates/deployment-modes.md). |
+ | **location**| <*Azure-region*> | The Azure region to use for deployment. This example uses **West US**. |
+ | **templateLink : uri** | <*quickstart-template-URL*> | The URL location for the quickstart template to use for deployment: <br><br>**https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json** |
+ | **parametersLink : uri** | <*quickstart-template-parameter-file-URL*> | The URL location for the quickstart template's parameter file to use for deployment: <br><br>**https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.parameters.json** <br><br>For more information about the Resource Manager parameter file, see the following documentation: <br><br>- [Create Resource Manager parameter file](../azure-resource-manager/templates/parameter-files.md) <br>- [Tutorial: Use parameter files to deploy your ARM template](../azure-resource-manager/templates/template-tutorial-use-parameter-file.md) |
+ | **mode** | <*deployment-mode*> | Run either an incremental update or a complete update. This example uses **Incremental**, which is the default value. For more information, see [Azure Resource Manager deployment modes](../azure-resource-manager/templates/deployment-modes.md). |
||| For example:
For more information, see these topics:
## Review deployed resources
-To view the logic app, you can use the Azure portal, run a script that you create with Azure CLI or Azure PowerShell, or use the Logic App REST API.
+To view the logic app workflow, you can use the Azure portal, run a script that you create with Azure CLI or Azure PowerShell, or use the Logic App REST API.
### [Portal](#tab/azure-portal)
-1. In the Azure portal search box, enter your logic app's name, which is `Check-Azure-Status-LA` in this example. From the results list, select your logic app.
+1. In the Azure portal search box, enter your logic app's name, which is **Check-Azure-Status-LA** in this example. From the results list, select your logic app.
-1. In the Azure portal, find and select your logic app, which is `Check-Azure-Status-RG` in this example.
+1. In the Azure portal, find and select your logic app, which is **Check-Azure-Status-LA** in this example.
-1. When the Logic App Designer opens, review the logic app created by the quickstart template.
+1. When the workflow designer opens, review the logic app workflow created by the quickstart template.
1. To test the logic app, on the designer toolbar, select **Run**.
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
| Value | Description | |-|-|
-| `subscriptionId`| The GUID for the Azure subscription where you deployed the quickstart template. |
-| `resourceGroupName` | The name for the Azure resource group where you deployed the quickstart template. This example uses `Check-Azure-Status-RG`. |
-| `workflowName` | The name for the logic app that you deployed. This example uses `Check-Azure-Status-LA`. |
+| **subscriptionId**| The GUID for the Azure subscription where you deployed the quickstart template. |
+| **resourceGroupName** | The name for the Azure resource group where you deployed the quickstart template. This example uses **Check-Azure-Status-RG**. |
+| **workflowName** | The name for the logic app that you deployed. This example uses **Check-Azure-Status-LA**. |
||| For example:
If you plan to continue working with subsequent quickstarts and tutorials, you m
### [Portal](#tab/azure-portal)
-1. In the Azure portal, find and select the resource group that you want to delete, which is `Check-Azure-Status-RG` in this example.
+1. In the Azure portal, find and select the resource group that you want to delete, which is **Check-Azure-Status-RG** in this example.
1. On the resource group menu, select **Overview** if not already selected. On the overview page, select **Delete resource group**.
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroup
| Value | Description | |-|-|
-| `subscriptionId`| The GUID for the Azure subscription where you deployed the quickstart template. |
-| `resourceGroupName` | The name for the Azure resource group where you deployed the quickstart template. This example uses `Check-Azure-Status-RG`. |
+| **subscriptionId**| The GUID for the Azure subscription where you deployed the quickstart template. |
+| **resourceGroupName** | The name for the Azure resource group where you deployed the quickstart template. This example uses **Check-Azure-Status-RG**. |
||| For example:
logic-apps Quickstart Create Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-bicep.md
+
+ Title: Quickstart - Create Consumption logic app workflow with Bicep
+description: How to create and deploy a Consumption logic app workflow with Bicep.
++
+ms.suite: integration
+++ Last updated : 04/07/2022
+#Customer intent: As a developer, I want to create and deploy an automated workflow in multi-tenant Azure Logic Apps with Bicep.
++
+# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with Bicep
+
+[Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying a Bicep file to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences).
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you start.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.logic/logic-app-create/main.bicep).
+
+The quickstart template creates a Consumption logic app workflow that uses the [*built-in*](../connectors/built-in.md) Recurrence trigger, which is set to run every hour, and a built-in HTTP action, which calls a URL that returns the status for Azure. Built-in operations run natively on the Azure Logic Apps platform.
+
+This Bicep file creates the following Azure resource:
+
+* [**Microsoft.Logic/workflows**](/azure/templates/microsoft.logic/workflows), which creates the workflow for a logic app.
++
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters logicAppName=<logic-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -logicAppName "<logic-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<logic-name\>** with the name of the logic app to create.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When you no longer need the logic app, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
The component specification file defines the metadata and execution parameters f
The following example is a component specification for a training component.
-```yaml
-name: Example_Train
-display_name: Example Train
-version: 20
-type: command
-description: Example of a torchvision training component
-tags: {category: Component Tutorial, contact: user@contoso.com}
-inputs:
- training_data:
- type: path
- description: Training data organized in torchvision structure
- max_epochs:
- type: integer
- description: Maximum epochs for training
- learning_rate:
- type: number
- description: Learning rate, default is 0.01
- default: 0.01
- learning_rate_schedule:
- type: string
- default: time-based
-outputs:
- model_output:
- type: path
-code:
- local_path: ./train_src
-environment: azureml:AzureML-Minimal:1
-command: >-
- python train.py
- --training_data ${{inputs.training_data}}
- --max_epochs ${{inputs.max_epochs}}
- --learning_rate ${{inputs.learning_rate}}
- --learning_rate_schedule ${{inputs.learning_rate_schedule}}
- --model_output ${{outputs.model_output}}
-```
+ The following table explains the fields in the example. For a full list of available fields, see the [YAML component specification reference page](reference-yaml-component-command.md).
Your Python script contains the executable logic for your component. Your script
To run, you must match the arguments for your Python script with the arguments you defined in the YAML specification. The following example is a Python training script that matches the YAML specification from the previous section.
-```python
-## Required imports
-import argparse
-import os
-## Import other dependencies your script needs
-from pathlib import Path
-from uuid import uuid4
-from datetime import datetime
-
-## Define an argument parser that matches the arguments from the components specification file
-parser = argparse.ArgumentParser("train")
-parser.add_argument("--training_data", type=str, help="Path to training data")
-parser.add_argument("--max_epochs", type=int, help="Max # of epochs for the training")
-parser.add_argument("--learning_rate", type=float, help="Learning rate")
-parser.add_argument("--learning_rate_schedule", type=str, help="Learning rate schedule")
-parser.add_argument("--model_output", type=str, help="Path of output model")
-
-args = parser.parse_args()
-
-## Implement your custom logic (in this case a training script)
-print ("hello training world...")
-
-lines = [
- f'Training data path: {args.training_data}',
- f'Max epochs: {args.max_epochs}',
- f'Learning rate: {args.learning_rate}',
- f'Learning rate: {args.learning_rate_schedule}',
- f'Model output path: {args.model_output}',
-]
-
-for line in lines:
- print(line)
-
-print("mounted_path files: ")
-arr = os.listdir(args.training_data)
-print(arr)
-
-for filename in arr:
- print ("reading file: %s ..." % filename)
- with open(os.path.join(args.training_data, filename), 'r') as handle:
- print (handle.read())
-
-## Do the train and save the trained model as a file into the output folder.
-## Here only output a dummy data for example.
-curtime = datetime.now().strftime("%b-%d-%Y %H:%M:%S")
-model = f"This is a dummy model with id: {str(uuid4())} generated at: {curtime}\n"
-(Path(args.model_output) / 'model.txt').write_text(model)
-```
:::image type="content" source="media/concept-component/component-introduction.png" lightbox="media/concept-component/component-introduction.png" alt-text="Conceptual doc showing mapping between source code elements and component UI." :::
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
Previously updated : 11/03/2021 Last updated : 04/27/2022 # Autoscale a managed online endpoint (preview)
If you are not going to use your deployments, delete them:
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ## Next steps
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
For more information, see the documentation for the [Model class](/python/api/az
For more information, see the [AutoMLRun.register_model](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) documentation.
- To deploy a registered model from an `AutoMLRun`, we recommend doing so via the [one-click deploy button in Azure Machine learning studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
+ To deploy a registered model from an `AutoMLRun`, we recommend doing so via the [one-click deploy button in Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
When you deploy remotely, you may have key authentication enabled. The example b
See the article on [client applications to consume web services](how-to-consume-web-service.md) for more example clients in other languages.
+ [!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
+ ### Understanding service state During model deployment, you may see the service state change while it fully deploys.
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
The `update` command also works with local deployments. Use the same `az ml onli
> With the `update` command, you can use the [`--set` parameter in the Azure CLI](/cli/azure/use-cli-effectively#generic-update-arguments) to override attributes in your YAML *or* to set specific attributes without passing the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops). > [!Note] > The above is an example of an in-place rolling update; that is, the same deployment is updated with the new configuration, 20% of the nodes at a time. If the deployment has 10 nodes, 2 nodes at a time will be updated. For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.+ ### (Optional) Configure autoscaling Autoscale automatically runs the right amount of resources to handle the load on your application. Managed online endpoints support autoscaling through integration with the Azure monitor autoscale feature. To configure autoscaling, see [How to autoscale online endpoints](how-to-autoscale-endpoints.md).
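For instance, a minimal sketch of that `--set` override, assuming a deployment named `blue` on an endpoint `my-endpoint` in workspace `my-ws` (all placeholder names):

```azurecli
az ml online-deployment update --name blue \
  --endpoint-name my-endpoint \
  --resource-group my-rg \
  --workspace-name my-ws \
  --set instance_count=2
```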
The logs might take up to an hour to connect. After an hour, send some scoring r
1. Double-click **AmlOnlineEndpointConsoleLog**. 1. Select **Run**.
+ [!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
+ ## Delete the endpoint and the deployment If you aren't going use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
You may choose **view the YAML spec** to review and download the yaml file gener
To launch the job, choose **Create**. Once the job is created, Azure will show you the run details page, where you can monitor and manage your training job.
+ [!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
+ ## Next steps * [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md).
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-reinforcement-learning.md
-> [!NOTE]
-> Azure Machine Learning reinforcement learning is currently a preview feature. Only Ray and RLlib frameworks are supported at this time.
+> [!WARNING]
+> Azure Machine Learning reinforcement learning via the [`azureml.contrib.train.rl`](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl) package will no longer be supported after June 2022. We recommend customers use the [Ray on Azure Machine Learning library](https://github.com/microsoft/ray-on-aml) for reinforcement learning experiments with Azure Machine Learning. For an example, see the notebook [Reinforcement Learning in Azure Machine Learning - Pong problem](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb).
In this article, you learn how to train a reinforcement learning (RL) agent to play the video game Pong. You use the open-source Python library [Ray RLlib](https://docs.ray.io/en/master/rllib/) with Azure Machine Learning to manage the complexity of distributed RL.
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| **[Integrated notebooks](how-to-run-jupyter-notebooks.md)** | | | | | Workspace notebook and file sharing | GA | YES | YES | | R and Python support | GA | YES | YES |
-| Virtual Network support | Public Preview | NO | NO |
+| Virtual Network support | GA | YES | YES |
| **[Compute instance](concept-compute-instance.md)** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | YES | | Jupyter, JupyterLab Integration | GA | YES | YES |
The information in the rest of this document provides information on what featur
| **Integrated notebooks** | | | | | Workspace notebook and file sharing | GA | YES | N/A | | R and Python support | GA | YES | N/A |
-| Virtual Network support | Preview | YES | N/A |
+| Virtual Network support | GA | YES | N/A |
| **Compute instance** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | N/A | | Jupyter, JupyterLab Integration | GA | YES | N/A |
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json.
+The source JSON schema can be found at `https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json`.
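A minimal attach-configuration sketch that references this schema might look like the following; the field values are placeholders, and the full set of supported fields is defined by the schema itself.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json
type: kubernetes
name: k8s-compute
resource_id: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>
namespace: default
```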
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-monitoring-best-practices.md
Last updated 11/23/2020
# Best practices for monitoring Azure Database for MySQL - Single server
-Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue refine the best practices detailed in this section.
+Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue to refine the best practices detailed in this section.
## Layout of the current monitoring toolkit
Monitor the database server to make sure that the resources assigned to the data
### CPU utilization
-Monitor CPU usage and if the database is exhausting CPU resources. If CPU usage is 90% or more than you should scale up your compute by increasing the number of vCores or scale to next pricing tier. Make sure that the throughput or concurrency is as expected as you scale up/down the CPU.
+Monitor CPU usage and whether the database is exhausting CPU resources. If CPU usage is 90% or more, scale up your compute by increasing the number of vCores or scale to the next pricing tier. Make sure that the throughput or concurrency is as expected as you scale up/down the CPU.
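As a rough sketch, CPU can be tracked from the Azure CLI with the platform metric `cpu_percent`; the resource ID below is a placeholder for your server.

```azurecli
az monitor metrics list \
  --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforMySQL/servers/<server-name> \
  --metric cpu_percent \
  --interval PT5M \
  --aggregation Average
```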
### Memory
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-operation-excellence-best-practices.md
Last updated 11/23/2020
# Best practices for server operations on Azure Database for MySQL - Single server Learn about the best practices for working with Azure Database for MySQL. As we add new capabilities to the platform, we will continue to focus on refining the best practices detailed in this section.
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concept-performance-best-practices.md
Last updated 1/28/2021
# Best practices for optimal performance of your Azure Database for MySQL - Single server Learn how to get best performance while working with your Azure Database for MySQL - Single server. As we add new capabilities to the platform, we will continue refining our recommendations in this section.
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
After the failover, while a new standby server is being provisioned, application
## On-demand failover
-Flexible server provides two methods for you to perform on-demand failover to the standby server. These are useful if you want to test the failover time and downtime impact for your applications and if you want to failover to the preferred availability zone.
+Flexible server provides two methods for you to perform on-demand failover to the standby server. These are useful if you want to test the failover time and downtime impact for your applications and if you want to fail over to the preferred availability zone.
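As a rough illustration, both flavors of on-demand failover can be triggered from the Azure CLI through the restart command; the server and resource group names are placeholders, and `--failover` accepts `Planned` or `Forced`.

```azurecli
# Planned failover: waits for the standby to fully catch up before switching over
az postgres flexible-server restart --resource-group my-rg --name my-server --failover Planned

# Forced failover: switches over immediately to simulate an unplanned outage
az postgres flexible-server restart --resource-group my-rg --name my-server --failover Forced
```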
### Forced failover
Here are some failure scenarios that require user action to recover:
* **Does the zone-redundant HA provide protection from planned and unplanned outages?** <br> Yes. The main purpose of HA is to offer higher uptime to mitigate any outages. In the event of an unplanned outage - including a fault in the database, VM, physical node, data center, or at the AZ level, the monitoring system automatically fails over the server to the standby. Similarly, during planned outages including minor version updates or infrastructure patching that happen during the scheduled maintenance window, the updates are applied at the standby first and the service is failed over while the old primary goes through the update process. This reduces the overall downtime.
-* **Can I enable or disable HA any any point of time?** <br>
+* **Can I enable or disable HA at any point in time?** <br>
Yes. You can enable or disable zone-redundant HA at any time except when the server is in certain states like stopped, restarting, or already in the process of failing over.
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
worker node, and blue shards are stored on another worker node. Notice how a
join query between Accounts and Campaigns has all the necessary data together on one node when both tables are restricted to the same account\_id.
-![Multi-tenant
-colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
+![Multi-tenant colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
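A minimal sketch of this design on Hyperscale (Citus), assuming `accounts` and `campaigns` tables keyed by tenant; the table and column names are illustrative.

```sql
-- Distribute both tables by the tenant key so matching shards are colocated
SELECT create_distributed_table('accounts', 'id');
SELECT create_distributed_table('campaigns', 'account_id', colocate_with => 'accounts');

-- A join restricted to one tenant now resolves entirely on a single worker node
SELECT *
FROM accounts a
JOIN campaigns c ON c.account_id = a.id
WHERE a.id = 42;
```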
To apply this design in your own schema, identify what constitutes a tenant in your application. Common instances include company, account, organization, or
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-link-azure-data-factory.md
Follow the steps below to connect an existing data factory to your Microsoft Pur
4. Select your Data Factory account from the list and select **OK**. You can also filter by subscription name to limit your list.
- :::image type="content" source="./media/how-to-link-azure-data-factory/connect-data-factory.png" alt-text="Screenshot showing how to connect Azure Data Factory." lightbox="./media/how-to-link-azure-data-factory/connect-data-factory.png":::
- Some Data Factory instances might be disabled if the data factory is already connected to the current Microsoft Purview account, or the data factory doesn't have a managed identity. A warning message will be displayed if any of the selected Data Factories are already connected to other Microsoft Purview account. By selecting OK, the Data Factory connection with the other Microsoft Purview account will be disconnected. No additional confirmations are required.
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
[!INCLUDE [Region Notice](./includes/workflow-regions.md)]
-If you discover a data asset in the catalog that you would like access to, you can request access directly through Microsoft Purview.
-
-The request will trigger a workflow that will request that the owners of the data resource grant you access to that data source.
+If you discover a data asset in the catalog that you would like to access, you can request access directly through Azure Purview. The request will trigger a workflow that asks the owners of the data resource to grant you access to that data source.
This article outlines how to make an access request.
-1. To find a data asset, use Microsoft Purview's [search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) functionality.
+> [!NOTE]
+> For this option to be available for a resource, a [self-service access workflow](how-to-workflow-self-service-data-access-hybrid.md) needs to be created and assigned to the collection where the resource is registered. Contact the collection administrator, data source administrator, or workflow administrator of your collection for more information.
+> Or, for information on how to create a self-service access workflow, see our [self-service access workflow documentation](how-to-workflow-self-service-data-access-hybrid.md).
+
+## Request access
+
+1. To find a data asset, use Azure Purview's [search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) functionality.
:::image type="content" source="./media/how-to-request-access/search-or-browse.png" alt-text="Screenshot of the Microsoft Purview governance portal, with the search bar and browse buttons highlighted.":::
This article outlines how to make an access request.
:::image type="content" source="./media/how-to-request-access/request-access.png" alt-text="Screenshot of a data asset's overview page, with the Request button highlighted in the mid-page menu.":::
+ > [!NOTE]
+ > If this option isn't available, a [self-service access workflow](how-to-workflow-self-service-data-access-hybrid.md) either hasn't been created, or hasn't been assigned to the collection where the resource is registered. Contact the collection administrator, data source administrator, or workflow administrator of your collection for more information.
+ > Or, for information on how to create a self-service access workflow, see our [self-service access workflow documentation](how-to-workflow-self-service-data-access-hybrid.md).
+ 1. The **Request access** window will open. You can provide comments on why data access is requested. 1. Select **Send** to trigger the self-service data access workflow.
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
Last updated 03/09/2022
-# Self-service data access workflows for hybrid data estates
+# Self-service access workflows for hybrid data estates
[!INCLUDE [Region Notice](./includes/workflow-regions.md)]
-This guide will take you through the creation and management of self-service data access [workflows](concept-workflow.md) for hybrid data estates.
+[Workflows](concept-workflow.md) allow you to automate some business processes through Azure Purview. Self-service access workflows allow you to create a process for your users to request access to datasets they've discovered in Azure Purview!
-## Create and enable self-service data access workflow
+For example: let's say your team has a new data analyst who will be doing some business reporting. You add them to your department's collection in Azure Purview. From there they can browse the data assets and read descriptions about the data your department has available. They notice that one of the Azure Data Lake Storage Gen2 accounts seems to have the exact data they need to get started. Since a self-service access workflow has been set up for that resource, they can [request access](how-to-request-access.md) to that Azure Data Lake Storage account from within Azure Purview!
++
+You can create these workflows for any of your resources across your data estate to automate the access request process. Workflows are assigned at the [collection](reference-azure-purview-glossary.md#collection) level, and so automate business processes along the same organizational lines as your permissions.
+
+This guide will show you how to create and manage self-service access workflows in Azure Purview.
+
+>[!NOTE]
+> To be able to create or edit a workflow, you'll need to be in the [workflow admin role](catalog-permissions.md) in Azure Purview.
+> You can also contact the workflow admin in your collection, or reach out to your collection administrator for permissions.
+
+## Create and enable self-service access workflow
1. Sign in to [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
This guide will take you through the creation and management of self-service dat
>[!NOTE] >If the authoring tab is greyed out, you don't have the permissions to be able to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
-1. To create a new self-service workflow, select **+New** button.
+1. To create a new self-service workflow, select the **+New** button.
:::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the + New button highlighted.":::
This guide will take you through the creation and management of self-service dat
1. Approval connector that specifies a user or group that will be contacted to approve the request. 1. Condition to check approval status - If approved:
- 1. Condition to check if data source is registered for use governance (policy)
+ 1. Condition to check if data source is registered for [data use governance](how-to-enable-data-use-governance.md) (policy)
1. If a data source is registered with policy:
- 1. Create self-service policy
+ 1. Create a [self-service policy](concept-self-service-data-access-policy.md)
1. Send email to requestor that access is provided 1. If data source isn't registered with policy:
- 1. Task connector to assign a task to a user or Microsoft Azure Active Directory group to manually provide access to requestor.
- 1. Send an email to requestor that access is provided once the task is complete.
+ 1. Task connector to assign [a task](how-to-workflow-manage-requests-approvals.md#tasks) to a user or Microsoft Azure Active Directory group to manually provide access to the requestor.
+ 1. Send an email to the requestor that access is provided once the task is marked as complete.
- If rejected: 1. Send an email to requestor that data access request is denied. 1. The default template can be used as it is by populating two fields:
This guide will take you through the creation and management of self-service dat
:::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-inline.png" alt-text="Screenshot showing the workflow canvas with the start and wait for an approval step, and the Create Task and wait for task completion steps highlighted, and the Assigned to textboxes highlighted within those steps." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-expanded.png"::: > [!NOTE]
- > Please configure the workflow to create self-service policies ONLY for sources supported by Microsft Purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
+ > Please configure the workflow to create self-service policies ONLY for sources supported by Microsoft Purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
+ >
+ > If your source isn't supported by Azure Purview's policy feature, use the Task connector to assign [tasks](how-to-workflow-manage-requests-approvals.md#tasks) to users or groups that can provide access.
1. You can also modify the template by adding more connectors to suit your organizational needs.
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
It is important to register the data source in Microsoft Purview prior to settin
1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
- :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
- 1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources** :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
It is important to register the data source in Microsoft Purview prior to settin
1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Microsoft Purview accounts** page and select your _Purview account_
- :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
- 1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Collections** :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
remote-rendering Create An Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/create-an-account.md
The steps in this paragraph have to be performed for each storage account that s
If you don't have owner permissions to this storage account, the **Add a role assignment** option will be disabled. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).-
- | Setting | Value |
- | | |
- | Role | Storage Blob Data Contributor |
- | Assign access to | User, group, or service principal |
- | Members | Remote Rendering Account |
+ 1. Select the **Storage Blob Data Contributor** role and click **Next**.
+ 1. Choose to assign access to a **Managed Identity**.
+ 1. Select **Select members**, select your subscription, select **Remote Rendering Account**, select your remote rendering account, and then click **Select**.
+ 1. Select **Review + assign** and select **Review + assign** again.
![Screenshot showing Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
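Equivalently, the same role assignment can be scripted; a hedged sketch with placeholder IDs (the principal ID is the system-assigned identity of your Remote Rendering account):

```azurecli
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee-object-id <remote-rendering-account-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>
```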
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 04/25/2022 Last updated : 04/27/2022
Azure service: Microsoft.HybridConnectivity
> | Microsoft.HybridConnectivity/endpoints/write | Create or update the endpoint to the target resource. | > | Microsoft.HybridConnectivity/endpoints/delete | Deletes the endpoint access to the target resource. | > | Microsoft.HybridConnectivity/endpoints/listCredentials/action | List the endpoint access credentials to the resource. |
+> | Microsoft.HybridConnectivity/endpoints/listManagedProxyDetails/action | List the managed proxy details to the resource. |
> | Microsoft.HybridConnectivity/Locations/OperationStatuses/read | read OperationStatuses | > | Microsoft.HybridConnectivity/operations/read | Get the list of Operations |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/locations/setResourceOwnership/action | Sets Resource Ownership | > | Microsoft.Network/locations/effectiveResourceOwnership/action | Gets Effective Resource Ownership | > | Microsoft.Network/locations/setAzureNetworkManagerConfiguration/action | Sets Azure Network Manager Configuration |
+> | Microsoft.Network/locations/publishResources/action | Publish Subscription Resources |
> | Microsoft.Network/locations/getAzureNetworkManagerConfiguration/action | Gets Azure Network Manager Configuration | > | Microsoft.Network/locations/bareMetalTenants/action | Allocates or validates a Bare Metal Tenant | > | Microsoft.Network/locations/commitInternalAzureNetworkManagerConfiguration/action | Commits Internal AzureNetworkManager Configuration In ANM |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/networkInterfaces/tapConfigurations/read | Gets a Network Interface Tap Configuration. | > | Microsoft.Network/networkInterfaces/tapConfigurations/write | Creates a Network Interface Tap Configuration or updates an existing Network Interface Tap Configuration. | > | Microsoft.Network/networkInterfaces/tapConfigurations/delete | Deletes a Network Interface Tap Configuration. |
+> | Microsoft.Network/networkManagerConnections/read | Get Network Manager Connection |
+> | Microsoft.Network/networkManagerConnections/write | Create Or Update Network Manager Connection |
+> | Microsoft.Network/networkManagerConnections/delete | Delete Network Manager Connection |
+> | Microsoft.Network/networkManagers/read | Get Network Manager |
+> | Microsoft.Network/networkManagers/write | Create Or Update Network Manager |
+> | Microsoft.Network/networkManagers/delete | Delete Network Manager |
+> | Microsoft.Network/networkManagers/commit/action | Network Manager Commit |
+> | Microsoft.Network/networkManagers/listDeploymentStatus/action | List Deployment Status |
+> | Microsoft.Network/networkManagers/listActiveSecurityAdminRules/action | List Active Security Admin Rules |
+> | Microsoft.Network/networkManagers/listActiveSecurityUserRules/action | List Active Security User Rules |
+> | Microsoft.Network/networkManagers/connectivityConfigurations/read | Get Connectivity Configuration |
+> | Microsoft.Network/networkManagers/connectivityConfigurations/write | Create Or Update Connectivity Configuration |
+> | Microsoft.Network/networkManagers/connectivityConfigurations/delete | Delete Connectivity Configuration |
+> | Microsoft.Network/networkManagers/networkGroups/read | Get Network Group |
+> | Microsoft.Network/networkManagers/networkGroups/write | Create Or Update Network Group |
+> | Microsoft.Network/networkManagers/networkGroups/delete | Delete Network Group |
+> | Microsoft.Network/networkManagers/networkGroups/join/action | Join Network Group |
+> | Microsoft.Network/networkManagers/networkGroups/staticMembers/read | Get Network Group Static Member |
+> | Microsoft.Network/networkManagers/networkGroups/staticMembers/write | Create Or Update Network Group Static Member |
+> | Microsoft.Network/networkManagers/networkGroups/staticMembers/delete | Delete Network Group Static Member |
+> | Microsoft.Network/networkManagers/scopeConnections/read | Get Network Manager Scope Connection |
+> | Microsoft.Network/networkManagers/scopeConnections/write | Create Or Update Network Manager Scope Connection |
+> | Microsoft.Network/networkManagers/scopeConnections/delete | Delete Network Manager Scope Connection |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/read | Get Security Admin Configuration |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/write | Create Or Update Security Admin Configuration |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/delete | Delete Security Admin Configuration |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/read | Get Security Admin Rule Collection |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/write | Create Or Update Security Admin Rule Collection |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/delete | Delete Security Admin Rule Collection |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/rules/read | Get Security Admin Rule |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/rules/write | Create Or Update Security Admin Rule |
+> | Microsoft.Network/networkManagers/securityAdminConfigurations/ruleCollections/rules/delete | Delete Security Admin Rule |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/read | Get Security User Configuration |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/write | Create Or Update Security User Configuration |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/delete | Delete Security User Configuration |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/read | Get Security User Rule Collection |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/write | Create Or Update Security User Rule Collection |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/delete | Delete Security User Rule Collection |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/rules/read | Get Security User Rule |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/rules/write | Create Or Update Security User Rule |
+> | Microsoft.Network/networkManagers/securityUserConfigurations/ruleCollections/rules/delete | Delete Security User Rule |
> | Microsoft.Network/networkProfiles/read | Gets a Network Profile | > | Microsoft.Network/networkProfiles/write | Creates or updates a Network Profile | > | Microsoft.Network/networkProfiles/delete | Deletes a Network Profile |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/SignalR/eventGridFilters/delete | Delete an event grid filter from a SignalR resource. | > | Microsoft.SignalRService/SignalR/operationResults/read | | > | Microsoft.SignalRService/SignalR/operationStatuses/read | |
-> | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/updatePrivateEndpointProperties/action | |
> | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/WebPubSub/hubs/delete | Delete hub settings | > | Microsoft.SignalRService/WebPubSub/operationResults/read | | > | Microsoft.SignalRService/WebPubSub/operationStatuses/read | |
-> | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/updatePrivateEndpointProperties/action | |
> | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/SignalR/group/read | Check group existence or user existence in group. | > | Microsoft.SignalRService/SignalR/group/write | Join / Leave group. | > | Microsoft.SignalRService/SignalR/hub/send/action | Broadcast messages to all client connections in hub. |
-> | Microsoft.SignalRService/SignalR/livetrace/read | Read live trace tool results |
-> | Microsoft.SignalRService/SignalR/livetrace/write | Create live trace connections |
> | Microsoft.SignalRService/SignalR/serverConnection/write | Start a server connection. | > | Microsoft.SignalRService/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections. | > | Microsoft.SignalRService/SignalR/user/read | Check user existence. |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/WebPubSub/group/read | Check group existence or user existence in group. | > | Microsoft.SignalRService/WebPubSub/group/write | Join / Leave group. | > | Microsoft.SignalRService/WebPubSub/hub/send/action | Broadcast messages to all client connections in hub. |
-> | Microsoft.SignalRService/WebPubSub/livetrace/read | Read live trace tool results |
-> | Microsoft.SignalRService/WebPubSub/livetrace/write | Create live trace connections |
> | Microsoft.SignalRService/WebPubSub/user/send/action | Send messages to user, who may consist of multiple client connections. | > | Microsoft.SignalRService/WebPubSub/user/read | Check user existence. |
Azure service: [Service Bus](../service-bus-messaging/index.yml)
> | Microsoft.ServiceBus/checkNameAvailability/action | Checks availability of namespace under given subscription. | > | Microsoft.ServiceBus/register/action | Registers the subscription for the ServiceBus resource provider and enables the creation of ServiceBus resources | > | Microsoft.ServiceBus/unregister/action | Registers the subscription for the ServiceBus resource provider and enables the creation of ServiceBus resources |
+> | Microsoft.ServiceBus/locations/deleteVirtualNetworkOrSubnets/action | Deletes the VNet rules in ServiceBus Resource Provider for the specified VNet |
> | Microsoft.ServiceBus/namespaces/write | Create a Namespace Resource and Update its properties. Tags and Capacity of the Namespace are the properties which can be updated. | > | Microsoft.ServiceBus/namespaces/read | Get the list of Namespace Resource Description | > | Microsoft.ServiceBus/namespaces/Delete | Delete Namespace Resource |
-> | Microsoft.ServiceBus/namespaces/authorizationRules/action | Updates Namespace Authorization Rule. This API is deprecated. Please use a PUT call to update the Namespace Authorization Rule instead. |
+> | Microsoft.ServiceBus/namespaces/authorizationRules/action | Updates Namespace Authorization Rule. This API is deprecated. Please use a PUT call to update the Namespace Authorization Rule instead. This operation is not supported on API version 2017-04-01. |
> | Microsoft.ServiceBus/namespaces/migrate/action | Migrate namespace operation |
+> | Microsoft.ServiceBus/namespaces/removeAcsNamepsace/action | Remove ACS namespace |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionsApproval/action | Approve Private Endpoint Connection |
> | Microsoft.ServiceBus/namespaces/authorizationRules/write | Create a Namespace level Authorization Rules and update its properties. The Authorization Rules Access Rights, the Primary and Secondary Keys can be updated. | > | Microsoft.ServiceBus/namespaces/authorizationRules/read | Get the list of Namespaces Authorization Rules description. | > | Microsoft.ServiceBus/namespaces/authorizationRules/delete | Delete Namespace Authorization Rule. The Default Namespace Authorization Rule cannot be deleted. | > | Microsoft.ServiceBus/namespaces/authorizationRules/listkeys/action | Get the Connection String to the Namespace | > | Microsoft.ServiceBus/namespaces/authorizationRules/regenerateKeys/action | Regenerate the Primary or Secondary key to the Resource |
-> | Microsoft.ServiceBus/namespaces/diagnosticSettings/read | Get list of Namespace diagnostic settings Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/diagnosticSettings/write | Get list of Namespace diagnostic settings Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/eventhubs/write | Create or Update EventHub properties. |
+> | Microsoft.ServiceBus/namespaces/disasterrecoveryconfigs/checkNameAvailability/action | Checks availability of namespace alias under given subscription. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/write | Creates or Updates the Disaster Recovery configuration associated with the namespace. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/read | Gets the Disaster Recovery configuration associated with the namespace. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/delete | Deletes the Disaster Recovery configuration associated with the namespace. This operation can only be invoked via the primary namespace. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/breakPairing/action | Disables Disaster Recovery and stops replicating changes from primary to secondary namespaces. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/failover/action | Invokes a GEO DR failover and reconfigures the namespace alias to point to the secondary namespace. |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules/read | Get Disaster Recovery Primary Namespace's Authorization Rules |
+> | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules/listkeys/action | Gets the authorization rules keys for the Disaster Recovery primary namespace |
+> | Microsoft.ServiceBus/namespaces/eventGridFilters/write | Creates or Updates the Event Grid filter associated with the namespace. |
+> | Microsoft.ServiceBus/namespaces/eventGridFilters/read | Gets the Event Grid filter associated with the namespace. |
+> | Microsoft.ServiceBus/namespaces/eventGridFilters/delete | Deletes the Event Grid filter associated with the namespace. |
> | Microsoft.ServiceBus/namespaces/eventhubs/read | Get list of EventHub Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/eventhubs/Delete | Operation to delete EventHub Resource |
-> | Microsoft.ServiceBus/namespaces/eventhubs/authorizationRules/write | Create EventHub Authorization Rules and Update its properties. The Authorization Rules Access Rights can be updated. |
-> | Microsoft.ServiceBus/namespaces/eventhubs/authorizationRules/read | Get the list of EventHub Authorization Rules |
-> | Microsoft.ServiceBus/namespaces/eventhubs/authorizationRules/delete | Operation to delete EventHub Authorization Rules |
-> | Microsoft.ServiceBus/namespaces/eventhubs/authorizationRules/listkeys/action | Get the Connection String to EventHub |
-> | Microsoft.ServiceBus/namespaces/eventhubs/authorizationRules/regenerateKeys/action | Regenerate the Primary or Secondary key to the Resource |
-> | Microsoft.ServiceBus/namespaces/eventHubs/consumergroups/write | Create or Update ConsumerGroup properties. |
-> | Microsoft.ServiceBus/namespaces/eventHubs/consumergroups/read | Get list of ConsumerGroup Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/eventHubs/consumergroups/Delete | Operation to delete ConsumerGroup Resource |
-> | Microsoft.ServiceBus/namespaces/logDefinitions/read | Get list of Namespace logs Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/messagingPlan/read | Gets the Messaging Plan for a namespace. This API is deprecated. Properties exposed via the MessagingPlan resource are moved to the (parent) Namespace resource in later API versions. |
-> | Microsoft.ServiceBus/namespaces/messagingPlan/write | Updates the Messaging Plan for a namespace. This API is deprecated. Properties exposed via the MessagingPlan resource are moved to the (parent) Namespace resource in later API versions. |
-> | Microsoft.ServiceBus/namespaces/messagingplan/write | Create or Update MessagingPlan properties. |
-> | Microsoft.ServiceBus/namespaces/messagingplan/read | Get list of MessagingPlan Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/metricDefinitions/read | Get list of Namespace metrics Resource Descriptions |
-> | Microsoft.ServiceBus/namespaces/operationresults/read | Get the list of Namespace Resource Description |
+> | Microsoft.ServiceBus/namespaces/ipFilterRules/read | Get IP Filter Resource |
+> | Microsoft.ServiceBus/namespaces/ipFilterRules/write | Create IP Filter Resource |
+> | Microsoft.ServiceBus/namespaces/ipFilterRules/delete | Delete IP Filter Resource |
+> | Microsoft.ServiceBus/namespaces/messagingPlan/read | Gets the Messaging Plan for a namespace.<br>This API is deprecated.<br>Properties exposed via the MessagingPlan resource are moved to the (parent) Namespace resource in later API versions.<br>This operation is not supported on API version 2017-04-01. |
+> | Microsoft.ServiceBus/namespaces/messagingPlan/write | Updates the Messaging Plan for a namespace.<br>This API is deprecated.<br>Properties exposed via the MessagingPlan resource are moved to the (parent) Namespace resource in later API versions.<br>This operation is not supported on API version 2017-04-01. |
+> | Microsoft.ServiceBus/namespaces/migrationConfigurations/write | Creates or Updates Migration configuration. This will start synchronizing resources from the standard to the premium namespace |
+> | Microsoft.ServiceBus/namespaces/migrationConfigurations/read | Gets the Migration configuration which indicates the state of the migration and pending replication operations |
+> | Microsoft.ServiceBus/namespaces/migrationConfigurations/delete | Deletes the Migration configuration. |
+> | Microsoft.ServiceBus/namespaces/migrationConfigurations/revert/action | Reverts the standard to premium namespace migration |
+> | Microsoft.ServiceBus/namespaces/migrationConfigurations/upgrade/action | Assigns the DNS associated with the standard namespace to the premium namespace which completes the migration and stops the syncing resources from standard to premium namespace |
+> | Microsoft.ServiceBus/namespaces/networkruleset/read | Gets NetworkRuleSet Resource |
+> | Microsoft.ServiceBus/namespaces/networkruleset/write | Create VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/networkruleset/delete | Delete VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/networkrulesets/read | Gets NetworkRuleSet Resource |
+> | Microsoft.ServiceBus/namespaces/networkrulesets/write | Create VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/networkrulesets/delete | Delete VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/operationresults/read | Get the status of Namespace operation |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionProxies/read | Get Private Endpoint Connection Proxy |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionProxies/write | Create Private Endpoint Connection Proxy |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionProxies/delete | Delete Private Endpoint Connection Proxy |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnectionProxies/operationstatus/read | Get the status of an asynchronous private endpoint operation |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnections/read | Get Private Endpoint Connection |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnections/write | Create or Update Private Endpoint Connection |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnections/delete | Removes Private Endpoint Connection |
+> | Microsoft.ServiceBus/namespaces/privateEndpointConnections/operationstatus/read | Get the status of an asynchronous private endpoint operation |
+> | Microsoft.ServiceBus/namespaces/privateLinkResources/read | Gets the resource types that support private endpoint connections |
+> | Microsoft.ServiceBus/namespaces/providers/Microsoft.Insights/diagnosticSettings/read | Get list of Namespace diagnostic settings Resource Descriptions |
+> | Microsoft.ServiceBus/namespaces/providers/Microsoft.Insights/diagnosticSettings/write | Get list of Namespace diagnostic settings Resource Descriptions |
+> | Microsoft.ServiceBus/namespaces/providers/Microsoft.Insights/logDefinitions/read | Get list of Namespace logs Resource Descriptions |
+> | Microsoft.ServiceBus/namespaces/providers/Microsoft.Insights/metricDefinitions/read | Get list of Namespace metrics Resource Descriptions |
> | Microsoft.ServiceBus/namespaces/queues/write | Create or Update Queue properties. | > | Microsoft.ServiceBus/namespaces/queues/read | Get list of Queue Resource Descriptions | > | Microsoft.ServiceBus/namespaces/queues/Delete | Operation to delete Queue Resource |
-> | Microsoft.ServiceBus/namespaces/queues/authorizationRules/action | Operation to update Queue Authorization Rules. Please use a PUT call to update Authorization Rule. |
+> | Microsoft.ServiceBus/namespaces/queues/authorizationRules/action | Operation to update Queue Authorization Rules. This operation is not supported on API version 2017-04-01. Please use a PUT call to update the Authorization Rule. |
> | Microsoft.ServiceBus/namespaces/queues/authorizationRules/write | Create Queue Authorization Rules and Update its properties. The Authorization Rules Access Rights can be updated. | > | Microsoft.ServiceBus/namespaces/queues/authorizationRules/read | Get the list of Queue Authorization Rules | > | Microsoft.ServiceBus/namespaces/queues/authorizationRules/delete | Operation to delete Queue Authorization Rules | > | Microsoft.ServiceBus/namespaces/queues/authorizationRules/listkeys/action | Get the Connection String to Queue | > | Microsoft.ServiceBus/namespaces/queues/authorizationRules/regenerateKeys/action | Regenerate the Primary or Secondary key to the Resource |
+> | Microsoft.ServiceBus/namespaces/skus/read | List Supported SKUs for Namespace |
> | Microsoft.ServiceBus/namespaces/topics/write | Create or Update Topic properties. | > | Microsoft.ServiceBus/namespaces/topics/read | Get list of Topic Resource Descriptions | > | Microsoft.ServiceBus/namespaces/topics/Delete | Operation to delete Topic Resource |
-> | Microsoft.ServiceBus/namespaces/topics/authorizationRules/action | Operation to update Topic Authorization Rules. Please use a PUT call to update Authorization Rule. |
+> | Microsoft.ServiceBus/namespaces/topics/authorizationRules/action | Operation to update Topic Authorization Rules. This operation is not supported on API version 2017-04-01. Please use a PUT call to update the Authorization Rule. |
> | Microsoft.ServiceBus/namespaces/topics/authorizationRules/write | Create Topic Authorization Rules and Update its properties. The Authorization Rules Access Rights can be updated. | > | Microsoft.ServiceBus/namespaces/topics/authorizationRules/read | Get the list of Topic Authorization Rules | > | Microsoft.ServiceBus/namespaces/topics/authorizationRules/delete | Operation to delete Topic Authorization Rules |
Azure service: [Service Bus](../service-bus-messaging/index.yml)
> | Microsoft.ServiceBus/namespaces/topics/subscriptions/rules/write | Create or Update Rule properties. | > | Microsoft.ServiceBus/namespaces/topics/subscriptions/rules/read | Get list of Rule Resource Descriptions | > | Microsoft.ServiceBus/namespaces/topics/subscriptions/rules/Delete | Operation to delete Rule Resource |
+> | Microsoft.ServiceBus/namespaces/virtualNetworkRules/read | Gets VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/virtualNetworkRules/write | Create VNET Rule Resource |
+> | Microsoft.ServiceBus/namespaces/virtualNetworkRules/delete | Delete VNET Rule Resource |
> | Microsoft.ServiceBus/operations/read | Get Operations | > | Microsoft.ServiceBus/sku/read | Get list of Sku Resource Descriptions | > | Microsoft.ServiceBus/sku/regions/read | Get list of SkuRegions Resource Descriptions |
+> | **DataAction** | **Description** |
+> | Microsoft.ServiceBus/namespaces/messages/send/action | Send messages |
+> | Microsoft.ServiceBus/namespaces/messages/receive/action | Receive messages |
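The two DataActions above are what a messaging role needs at the data plane. As a hedged sketch, a custom role granting send-only access could look like the following (the name, description, and scope are placeholders; in practice the built-in **Azure Service Bus Data Sender** role already covers this):

```json
{
  "Name": "Service Bus Sender (sample)",
  "IsCustom": true,
  "Description": "Can send messages to Service Bus entities at the assigned scope.",
  "Actions": [],
  "NotActions": [],
  "DataActions": [
    "Microsoft.ServiceBus/namespaces/messages/send/action"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```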
## Identity
Azure service: [Batch](../batch/index.yml)
> | Microsoft.Batch/batchAccounts/privateEndpointConnectionResults/read | Gets the results of a long running Batch account private endpoint connection operation | > | Microsoft.Batch/batchAccounts/privateEndpointConnections/write | Update an existing Private endpoint connection on a Batch account | > | Microsoft.Batch/batchAccounts/privateEndpointConnections/read | Gets Private endpoint connection or Lists Private endpoint connections on a Batch account |
+> | Microsoft.Batch/batchAccounts/privateEndpointConnections/delete | Delete a Private endpoint connection on a Batch account |
> | Microsoft.Batch/batchAccounts/privateLinkResources/read | Gets the properties of a Private link resource or Lists Private link resources on a Batch account | > | Microsoft.Batch/batchAccounts/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource | > | Microsoft.Batch/batchAccounts/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | microsoft.recoveryservices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | microsoft.recoveryservices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupPreValidateProtection/action | |
-> | microsoft.recoveryservices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | microsoft.recoveryservices/Locations/backupValidateFeatures/action | Validate Features |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupPreValidateProtection/action | |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | microsoft.recoveryservices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | microsoft.recoveryservices/Vaults/backupJobsExport/action | Export Jobs |
-> | microsoft.recoveryservices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobsExport/action | Export Jobs |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | microsoft.recoveryservices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | microsoft.recoveryservices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | microsoft.recoveryservices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | microsoft.recoveryservices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | microsoft.recoveryservices/Vaults/backupJobs/read | Returns all Job Objects |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | microsoft.recoveryservices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | microsoft.recoveryservices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | microsoft.recoveryservices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | microsoft.recoveryservices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | microsoft.recoveryservices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | microsoft.recoveryservices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | microsoft.recoveryservices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/read | Returns all Job Objects |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/write | Creates Protection Policy |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. | > | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | microsoft.recoveryservices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics | > | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any | > | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any | > | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | microsoft.recoveryservices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages | > | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following sections identify when a service has an integration with Microsoft
Azure Information Protection (AIP) is a cloud-based solution that enables organizations to discover, classify, and protect documents and emails by applying labels to content.
-AIP is part of the Microsoft Information Protection (MIP) solution, and extends the [labeling](/microsoft-365/compliance/sensitivity-labels) and [classification](/microsoft-365/compliance/data-classification-overview) functionality provided by Microsoft 365.
+AIP is part of the Microsoft Purview Information Protection (MIP) solution, and extends the [labeling](/microsoft-365/compliance/sensitivity-labels) and [classification](/microsoft-365/compliance/data-classification-overview) functionality provided by Microsoft 365.
For more information, see the [Azure Information Protection product documentation](/azure/information-protection/).
For more information, see the [Azure Information Protection product documentatio
<sup><a name="aipnote6"></a>6</sup> Sharing of protected documents and emails from government clouds to users in the commercial cloud is not currently available. Includes Microsoft 365 Apps users in the commercial cloud, non-Microsoft 365 Apps users in the commercial cloud, and users with an RMS for Individuals license.
-<sup><a name="aipnote7"></a>7</sup> The number of [Sensitive Information Types](/microsoft-365/compliance/sensitive-information-type-entity-definitions) in your Microsoft 365 Security & Compliance Center may vary based on region.
+<sup><a name="aipnote7"></a>7</sup> The number of [Sensitive Information Types](/microsoft-365/compliance/sensitive-information-type-entity-definitions) in your Microsoft Purview compliance portal may vary based on region.
## Microsoft Defender for Cloud
The following tables display the current Microsoft Sentinel feature availability
| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA | | - [Azure ADIP](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection) | GA | GA | | - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA |
-| - [Microsft Purview](../../sentinel/data-connectors-reference.md#microsoft-purview) | Public Preview | Not Available |
+| - [Microsoft Purview](../../sentinel/data-connectors-reference.md#microsoft-purview) | Public Preview | Not Available |
| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA | | - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | GA | GA | | - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
The following table displays the current Microsoft Defender for IoT feature avai
| **Unify IT, and OT security with SIEM, SOAR and XDR** | | | | [Active Directory](../../defender-for-iot/organizations/how-to-create-and-manage-users.md#integrate-with-active-directory-servers) | GA | GA | | [ArcSight](../../defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md#accelerate-incident-workflows-by-using-alert-groups) | GA | GA |
-| [ClearPass (Alerts & Inventory)](../../defender-for-iot/organizations/how-to-install-software.md#attach-a-span-virtual-interface-to-the-virtual-switch) | GA | GA |
-| [CyberArk PSM](../../defender-for-iot/organizations/concept-key-concepts.md#integrations) | GA | GA |
+| [ClearPass (Alerts & Inventory)](../../defender-for-iot/organizations/tutorial-clearpass.md) | GA | GA |
+| [CyberArk PSM](../../defender-for-iot/organizations/tutorial-cyberark.md) | GA | GA |
| [Email](../../defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md#email-address-action) | GA | GA | | [FortiGate](../../defender-for-iot/organizations/tutorial-fortinet.md) | GA | GA | | [FortiSIEM](../../defender-for-iot/organizations/tutorial-fortinet.md) | GA | GA |
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
The procedure detailed below is part of the analytics rule creation wizard. It's
1. Click the **Set rule logic** tab.
-1. In the **Alert enrichment (Preview)** section, expand **Alert details**.
+1. In the **Alert enrichment** section, expand **Alert details**.
:::image type="content" source="media/customize-alert-details/alert-enrichment.png" alt-text="Customize alert details":::
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** <br><br> Also available as part of the [Microsoft Sentinel 4 Dynamics 365 solution](sentinel-solutions-catalog.md#azure)|
-| **License prerequisites/<br>Cost information** | <li>[Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). Not available for sandbox environments.<li>Microsoft 365 Enterprise [E3 or E5](/power-platform/admin/enable-use-comprehensive-auditing#requirements) subscription is required to do Activity Logging.<br>Other charges may apply |
+| **License prerequisites/<br>Cost information** | <li>[Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). Not available for sandbox environments.<li>At least one user assigned a Microsoft/Office 365 [E1 or greater](/power-platform/admin/enable-use-comprehensive-auditing#requirements) license.<br>Other charges may apply |
| **Log Analytics table(s)** | Dynamics365Activity | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
sentinel Map Data Fields To Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/map-data-fields-to-entities.md
Title: Map data fields to Microsoft Sentinel entities | Microsoft Docs
description: Map data fields in tables to Microsoft Sentinel entities in analytics rules, for better incident information Previously updated : 11/09/2021 Last updated : 04/26/2022
> [!IMPORTANT] >
-> - The new version of the entity mapping feature is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-> [!IMPORTANT]
->
-> - See [Notes on the new version](#notes-on-the-new-version) at the end of this document for important information about backward compatibility and differences between the new and old versions of entity mapping.
+> - See "[Notes on the new version](#notes-on-the-new-version)" at the end of this document for important information about backward compatibility and differences between the new and old versions of entity mapping.
## Introduction
The procedure detailed below is part of the analytics rule creation wizard. It's
1. Select the **Set rule logic** tab.
-1. In the **Alert enrichment (Preview)** section, expand **Entity mapping**.
+1. In the **Alert enrichment** section, expand **Entity mapping**.
:::image type="content" source="media/map-data-fields-to-entities/alert-enrichment.png" alt-text="Expand entity mapping":::
The procedure detailed below is part of the analytics rule creation wizard. It's
> - **Each mapped entity can identify *up to ten entities***. > - If an alert contains more than ten items that correspond to a single entity mapping, only the first ten will be recognized as entities and be able to be analyzed as such. > - This limitation applies to actual mappings, not to entity types. So if you have three different mapped entities for IP addresses (say, source, destination, and gateway), each of those mappings can accommodate ten entities.
+>
> - **The size limit for an entire alert is *64 KB***.
> - Alerts that grow larger than 64 KB will be truncated. As entities are identified, they are added to the alert one by one until the alert size reaches 64 KB, and any remaining entities are dropped from the alert.

## Notes on the new version

-- If you had previously defined entity mappings for this analytics rule using the old version, those mappings appear in the query code. Entity mappings defined under the new version **do not appear in the query code**. Analytics rules can only support one version of entity mappings at a time, and the new version takes precedence. Therefore, any single mapping you define here will cause **any and all** mappings defined in the query code to be **disregarded** when the query runs.
-
-- If you still need to use the **old version** of entity mapping (as long as the new version is still in preview), you can still access it using a feature flag in the URL. Place your cursor between `https://portal.azure.com/` and `#blade`, and insert the text `?feature.EntityMapping=false`.
-
- - The limits of the old version will continue to apply. You can map only the user, host, IP address, URL, and file hash entities, and only one of each.
-
- - You must **remove** any entity mappings created using the new version **before** you return to the old version, otherwise any entity mappings that use the old version **will not work**.
--- Once the new version of entity mapping is in General Availability, it will no longer be possible to use the old version. It is highly recommended that you migrate your old entity mappings to the new version.
+- As the new version is now generally available (GA), the feature-flag workaround to use the old version is no longer available.
+- If you had previously defined entity mappings for this analytics rule using the old version, they will be automatically converted to the new version.
## Next steps In this document, you learned how to map data fields to entities in Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles:+ - Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
+
+ Title: Microsoft Sentinel service limits
+description: This article provides a list of service limits for Microsoft Sentinel.
++ Last updated : 04/27/2022+++
+# Service limits for Microsoft Sentinel
+
+This article lists the most common service limits you might encounter as you use Microsoft Sentinel. For other limits that might impact services or features you use, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
+
+## Analytics rule limits
++
+## Incident limits
++
+## Machine learning-based limits
++
+## Notebook limits
++
+## Threat intelligence limits
++
+## Watchlist limits
++
+## User and Entity Behavior Analytics (UEBA) limits
++
+## Next steps
+
+[Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
sentinel Surface Custom Details In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/surface-custom-details-in-alerts.md
Title: Surface custom details in Microsoft Sentinel alerts | Microsoft Docs
description: Extract and surface custom event details in alerts in Microsoft Sentinel analytics rules, for better and more complete incident information Previously updated : 11/09/2021 Last updated : 04/26/2022
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-> [!IMPORTANT]
->
-> - The custom details feature is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Introduction [Scheduled query analytics rules](detect-threats-custom.md) analyze **events** from data sources connected to Microsoft Sentinel, and produce **alerts** when the contents of these events are significant from a security perspective. These alerts are further analyzed, grouped, and filtered by Microsoft Sentinel's various engines and distilled into **incidents** that warrant a SOC analyst's attention. However, when the analyst views the incident, only the properties of the component alerts themselves are immediately visible. Getting to the actual content - the information contained in the events - requires doing some digging.
The procedure detailed below is part of the analytics rule creation wizard. It's
1. Click the **Set rule logic** tab.
-1. In the **Alert enrichment (Preview)** section, expand **Custom details**.
+1. In the **Alert enrichment** section, expand **Custom details**.
:::image type="content" source="media/surface-custom-details-in-alerts/alert-enrichment.png" alt-text="Find and select custom details":::
The procedure detailed below is part of the analytics rule creation wizard. It's
> - The size limit for all custom details, collectively, is **2 KB**. ## Next steps+ In this document, you learned how to surface custom details in alerts using Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles:+ - Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
Title: Azure Service Bus access control with Shared Access Signatures description: Overview of Service Bus access control using Shared Access Signatures overview, details about SAS authorization with Azure Service Bus. Previously updated : 04/14/2022 Last updated : 04/26/2022 ms.devlang: csharp
To regenerate primary and secondary keys in the **Azure portal**, follow these s
:::image type="content" source="./media/service-bus-sas/regenerate-keys.png" alt-text="Screenshot of SAS Policy page with Regenerate options selected.":::
-If you are using **Azure PowerShell**, use the [`New-AzServiceBusKey`](/powershell/module/az.servicebus/new-azservicebuskey) cmdlet to regenerate primary and secondary keys for a Service Bus namespace. With PowerShell, you can also specify values for primary and secondary keys that are being generated, by using the `-KeyValue` parameter.
+If you are using **Azure PowerShell**, use the [`New-AzServiceBusKey`](/powershell/module/az.servicebus/new-azservicebuskey) cmdlet to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `-KeyValue` parameter.
-If you are using **Azure CLI**, use the [`az servicebus namespace authorization-rule keys renew`](/cli/azure/servicebus/namespace/authorization-rule/keys#az-servicebus-namespace-authorization-rule-keys-renew) command to regenerate primary and secondary keys for a Service Bus namespace.
+If you are using **Azure CLI**, use the [`az servicebus namespace authorization-rule keys renew`](/cli/azure/servicebus/namespace/authorization-rule/keys#az-servicebus-namespace-authorization-rule-keys-renew) command to regenerate primary and secondary keys for a Service Bus namespace. You can also specify values for primary and secondary keys that are being generated, by using the `--key-value` parameter.
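+
+As a minimal sketch with placeholder resource and rule names, regenerating the primary key and supplying your own key value might look like:
+
+```bash
+# Regenerate the primary key for an authorization rule on a namespace,
+# supplying the key material explicitly via --key-value.
+az servicebus namespace authorization-rule keys renew \
+    --resource-group MyResourceGroup \
+    --namespace-name MyNamespace \
+    --name RootManageSharedAccessKey \
+    --key PrimaryKey \
+    --key-value "<your-key-material>"
+```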
## Shared Access Signature authentication with Service Bus
service-bus-messaging Service Bus Tutorial Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-tutorial-topics-subscriptions-portal.md
Title: Update inventory using Azure portal and topics/subscriptions
description: In this tutorial, you learn how to send and receive messages from a topic and subscription, and how to add and use filter rules using .NET Previously updated : 10/15/2020 Last updated : 04/26/2022 #Customer intent: In a retail scenario, how do I update inventory assortment and send a set of messages from the back office to the stores? # Tutorial: Update inventory using Azure portal and topics/subscriptions
+Azure Service Bus is a multi-tenant cloud messaging service that sends information between applications and services. Asynchronous operations give you flexible, brokered messaging, along with structured first-in, first-out (FIFO) messaging, and publish/subscribe capabilities. For a detailed overview of Azure Service Bus, see [What is Service Bus?](service-bus-messaging-overview.md).
+
+This tutorial shows how to use Service Bus topics and subscriptions in a retail inventory scenario, with publish/subscribe channels using the Azure portal and .NET. An example of this scenario is an inventory assortment update for multiple retail stores. In this scenario, each store, or set of stores, gets messages intended for them to update their assortments. This tutorial shows how to implement this scenario using subscriptions and filters. First, you create a topic with three subscriptions, add some rules and filters, and then send and receive messages from the topic and subscriptions.
+
-Microsoft Azure Service Bus is a multi-tenant cloud messaging service that sends information between applications and services. Asynchronous operations give you flexible, brokered messaging, along with structured first-in, first-out (FIFO) messaging, and publish/subscribe capabilities. This tutorial shows how to use Service Bus topics and subscriptions in a retail inventory scenario, with publish/subscribe channels using the Azure portal and .NET.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a Service Bus topic and one or more subscriptions to that topic using the Azure portal
-> * Add topic filters using .NET code
-> * Create two messages with different content
-> * Send the messages and verify they arrived in the expected subscriptions
+> * Create a Service Bus topic and three subscriptions to that topic using the Azure portal
+> * Add filters for subscriptions using .NET code
+> * Create messages with different content
+> * Send messages and verify that they arrived in the expected subscriptions
> * Receive messages from the subscriptions
-An example of this scenario is an inventory assortment update for multiple retail stores. In this scenario, each store, or set of stores, gets messages intended for them to update their assortments. This tutorial shows how to implement this scenario using subscriptions and filters. First, you create a topic with 3 subscriptions, add some rules and filters, and then send and receive messages from the topic and subscriptions.
-
-![topic](./media/service-bus-tutorial-topics-subscriptions-portal/about-service-bus-topic.png)
-
-If you don't have an Azure subscription, you can create a [free account][] before you begin.
- ## Prerequisites
-To complete this tutorial, make sure you have installed:
+To complete this tutorial, make sure you have:
-- [Visual Studio 2017 Update 3 (version 15.3, 26730.01)](https://www.visualstudio.com/vs) or later.-- [NET Core SDK](https://dotnet.microsoft.com/download), version 2.0 or later.
+- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an Azure subscription, you can create a [free account][] before you begin.
+- [Visual Studio 2019](https://www.visualstudio.com/vs) or later.
## Service Bus topics and subscriptions
Each [subscription to a topic](service-bus-messaging-overview.md#topics) can rec
## Create filter rules on subscriptions
-After the namespace and topic/subscriptions are provisioned, and you have the necessary credentials, you are ready to create filter rules on the subscriptions, then send and receive messages. You can examine the code in [this GitHub sample folder](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/TopicFilters).
+After the namespace and topic/subscriptions are provisioned, and you have the necessary credentials, you're ready to create filter rules on the subscriptions, then send and receive messages. You can examine the code in [this GitHub sample folder](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/TopicFilters).
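+
+The sample manages the rules from .NET; as an alternative sketch with placeholder names (including the hypothetical `StoreId` property), the same default-rule removal and SQL filter creation can be done with the Azure CLI:
+
+```bash
+# Remove the default catch-all rule, then add a SQL filter so the subscription
+# only receives messages whose StoreId property matches its store.
+az servicebus topic subscription rule delete \
+    --resource-group MyResourceGroup --namespace-name MyNamespace \
+    --topic-name MyTopic --subscription-name Store1Subscription --name '$Default'
+az servicebus topic subscription rule create \
+    --resource-group MyResourceGroup --namespace-name MyNamespace \
+    --topic-name MyTopic --subscription-name Store1Subscription \
+    --name StoreFilter --filter-sql-expression "StoreId = 'Store1'"
+```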
## Send and receive messages
-To run the code, do the following:
+To run the code, follow these steps:
1. In a command prompt or PowerShell prompt, clone the [Service Bus GitHub repository](https://github.com/Azure/azure-service-bus/) by issuing the following command:
To run the code, do the following:
```shell dotnet BasicSendReceiveTutorialwithFilters.dll -ConnectionString "myConnectionString" -TopicName "myTopicName" ```
-7. Follow the instructions in the console to select filter creation first. Part of creating filters is to remove the default filters. When you use PowerShell or CLI you don't need to remove the default filter, but if you do this in code, you must remove them. The console commands 1 and 3 help you manage the filters on the subscriptions you previously created:
+7. Follow the instructions in the console to select filter creation first. Part of creating filters is to remove the default filters. When you use PowerShell or CLI you don't need to remove the default filter, but if you do it in code, you must remove them. The console commands 1 and 3 help you manage the filters on the subscriptions you previously created:
- Execute 1: to remove the default filters. - Execute 2: to add your own filters.
- - Execute 3: to optionally remove your own filters. Note that this will not recreate the default filters.
+   - Execute 3: **Skip this step for the tutorial**. This option removes your own filters; note that it won't recreate the default filters.
![Showing output of 2](./media/service-bus-tutorial-topics-subscriptions-portal/create-rules.png)
To run the code, do the following:
![Send output](./media/service-bus-tutorial-topics-subscriptions-portal/send-output.png)
-9. Press 5 and observe the messages being received. If you did not get 10 messages back, press "m" to display the menu, then press 5 again.
+9. Press 5 and observe the messages being received. If you didn't get 10 messages back, press "m" to display the menu, then press 5 again.
![Receive output](./media/service-bus-tutorial-topics-subscriptions-portal/receive-output.png) ## Clean up resources
-When no longer needed, delete the namespace and topic. To do so, select these resources on the portal and click **Delete**.
+When no longer needed, follow these steps to clean up resources.
+
+1. Navigate to your namespace in the Azure portal.
+2. On the **Service Bus Namespace** page, select **Delete** from the command bar to delete the namespace and resources (queues, topics, and subscriptions) in it.
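+
+If you prefer the CLI, a sketch of the equivalent cleanup (with placeholder names) is:
+
+```bash
+# Deleting the namespace also deletes the topics and subscriptions inside it.
+az servicebus namespace delete --resource-group MyResourceGroup --name MyNamespace
+```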
## Understand the sample code
private async Task SendItems(ServiceBusClient client, string store)
### Receive messages
-Messages are again received via a task list, and the code uses batching. You can send and receive using batching, but this example only shows how to batch receive. In reality, you would not break out of the loop, but keep looping and set a higher timespan, such as one minute. The receive call to the broker is kept open for this amount of time and if messages arrive, they are returned immediately and a new receive call is issued. This concept is called *long polling*. Using the receive pump which you can see in the [quickstart](service-bus-quickstart-portal.md), and in several other samples in the repository, is a more typical option.
+Messages are again received via a task list, and the code uses batching. You can send and receive using batching, but this example only shows how to batch receive. In reality, you wouldn't break out of the loop, but keep looping and set a higher time span, such as one minute. The receive call to the broker is kept open for this amount of time and if messages arrive, they're returned immediately and a new receive call is issued. This concept is called *long polling*. Using the receive pump, which you can see in the [quickstart](service-bus-quickstart-portal.md), and in several other samples in the repository, is a more typical option.
```csharp public async Task Receive()
service-fabric Service Fabric Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-linux.md
# service-fabric-get-started-linux.md # service-fabric-get-started-mac.md # service-fabric-local-linux-cluster-windows.md
+# service-fabric-local-linux-cluster-windows-wsl2.md
# Prepare your development environment on Linux > [!div class="op_single_selector"]
service-fabric Service Fabric Local Linux Cluster Windows Wsl2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows-wsl2.md
+
+ Title: Set up Azure Service Fabric Linux cluster on a WSL2 Linux distribution inside Windows
+description: This article covers how to set up Service Fabric Linux clusters inside a WSL2 Linux distribution running on Windows development machines. This approach is useful for cross-platform development.
++ Last updated : 10/31/2021+
+# Maintainer notes: Keep these documents in sync:
+# service-fabric-get-started-linux.md
+# service-fabric-get-started-mac.md
+# service-fabric-local-linux-cluster-windows.md
+# service-fabric-local-linux-cluster-windows-wsl2.md
+
+# Set up a Linux Service Fabric cluster via WSL2 on your Windows developer machine
+
+This document covers how to set up a local Linux Service Fabric cluster via WSL2 on a Windows development machine. Setting up a local Linux cluster is useful for quickly testing applications that target Linux clusters but are developed on a Windows machine.
+
+## Prerequisites
+Linux-based Service Fabric clusters don't run directly on Windows. To enable cross-platform prototyping, we provide a way to deploy a Service Fabric cluster inside a Linux distribution via WSL2 (Windows Subsystem for Linux).
+
+Before you get started, you need:
+
+* WSL2 set up in Windows, with WSL 2 as the default version
+* The Ubuntu 18.04 Linux distribution, installed from the Microsoft Store while setting up WSL2
+
+>[!TIP]
+> To install WSL2 on your Windows machine, follow the steps in the [WSL documentation](https://docs.microsoft.com/windows/wsl/install). After installing, make sure Ubuntu-18.04 is installed, set it as your default distribution, and confirm that it's up and running; see the sketch after this tip.
+>
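+
+As a quick sketch (assuming Ubuntu-18.04 is already installed), you can set the defaults and verify them from a Windows command prompt, or from any WSL shell, since `wsl.exe` is on the PATH in both:
+
+```bash
+wsl.exe --set-default-version 2      # make WSL 2 the default for new distributions
+wsl.exe --set-default Ubuntu-18.04   # make Ubuntu-18.04 the default distribution
+wsl.exe -l -v                        # verify: Ubuntu-18.04 should show VERSION 2
+```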
+
+## Set up Service Fabric SDK inside Linux Distribution
+Service Fabric setup can't be done in a WSL2 Linux distribution the way it's done on a standard Linux OS, because systemd isn't running as PID 1 inside the VM, and systemd as PID 1 is a prerequisite for the SF SDK to work.
+To enable systemd as PID 1, systemd-genie is used as a workaround. For more details about systemd-genie, see [systemd genie setup](https://github.com/arkane-systems/genie). The script installation and manual installation steps below both cover installation of systemd-genie and the Service Fabric SDK.
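+
+As a minimal sketch, once systemd-genie is installed you can confirm that systemd is running as PID 1 inside the genie namespace:
+
+```bash
+genie -s            # enter the genie namespace (starts systemd if needed)
+ps -p 1 -o comm=    # inside the namespace, this should print: systemd
+```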
+
+## Script installation
+
+For convenience, a script is provided to install the Service Fabric common SDK along with the [**sfctl** CLI](service-fabric-cli.md). Running the script assumes you agree to the licenses for all the software that is being installed. Alternatively you may run the [Manual installation](#manual-installation) steps in the next section, which will present associated licenses and the components being installed.
+
+After the script runs successfully, you can skip to [Set up a local cluster](#set-up-a-local-cluster).
+
+```bash
+sudo curl -s https://raw.githubusercontent.com/Azure/service-fabric-scripts-and-templates/master/scripts/SetupServiceFabric/SetupServiceFabric.sh | sudo bash
+```
+
+## Manual installation
+For manual installation of the Service Fabric runtime and common SDK, follow the rest of this guide.
+
+1. Open a terminal.
+
+2. Log in to the WSL2 Linux distribution
+
+3. Set up systemd-genie as mentioned in [systemd genie setup](https://github.com/arkane-systems/genie) (if systemd-genie is already set up, you can skip to the next step)
+
+4. Enter the genie namespace using `genie -s`
+
+5. Inside the genie namespace, install the SF SDK as mentioned under the Script installation or Manual installation steps in [Set up a Linux local cluster](service-fabric-get-started-linux.md)
+
+6. Provide sudo privileges to the current user by making an entry (e.g. <USERNAME> ALL = (ALL) NOPASSWD:ALL) in /etc/sudoers, as sketched below
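+
+As a sketch of step 6, a safer alternative to editing /etc/sudoers directly is a drop-in file (the file name here is arbitrary):
+
+```bash
+# Grant the current user passwordless sudo via /etc/sudoers.d
+echo "$USER ALL = (ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/service-fabric-dev
+sudo chmod 0440 /etc/sudoers.d/service-fabric-dev
+```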
+
+## Set up a local cluster
+We recommend managing the Service Fabric cluster inside the WSL2 VM from the Windows host.
+
+1. Install the Service Fabric SDK (version 6.0 or above) on the Windows host
+
+2. In Windows, the cluster can be managed using the ServiceFabricLocalClusterManager tool provided as part of the SF SDK
+
+3. The option to manage the Linux local cluster is enabled only when: (a) the WSL2 VM is running, (b) the systemd-genie, servicefabricruntime, and servicefabricsdkcommon packages are properly installed inside the VM, and (c) systemd-genie is in the running state. You can set up or switch to the Linux local cluster from this tool.
+
+4. Another way to set up the Linux cluster is to deploy it using the cluster setup scripts provided as part of the SF SDK
+
+5. Open a web browser and go to Service Fabric Explorer ``http://localhost:19080``. When the cluster starts, you see the Service Fabric Explorer dashboard. It might take several minutes for the cluster to be set up.
 If your browser fails to open the URL or Service Fabric Explorer doesn't show the cluster, wait a few minutes and try again. You can also see the cluster in the ServiceFabricExplorer tool provided with the SF SDK.
+
+6. Once the cluster is up and running, you can connect to the local cluster from PowerShell and Visual Studio; see the sfctl sketch below for a CLI option
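+
+As a small sketch, you can also point the **sfctl** CLI (installed earlier with the SDK) at the local cluster endpoint from inside the distribution:
+
+```bash
+sfctl cluster select --endpoint http://localhost:19080   # target the local cluster
+sfctl cluster health                                     # optionally check cluster health
+```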
++
+## Manual installation with custom ServiceFabric and ServiceFabricSdkCommon Debian packages
+For manual installation of Service Fabric from custom or downloaded Debian packages, follow the rest of this guide.
+
+1. Open a terminal.
+
+2. Login into WSL2 Linux Distribution
+
+3. Download the setup script
+
+```bash
+sudo curl -s https://raw.githubusercontent.com/Azure/service-fabric-scripts-and-templates/master/scripts/SetupServiceFabric/SetupServiceFabric.sh > SetupServiceFabric.sh
+```
+
+4. Make the file executable
+
+```bash
+sudo chmod +x SetupServiceFabric.sh
+```
+
+5. Run the setup script with the paths to the local Debian packages. Make sure the paths provided are valid. Here's an example:
+
+```bash
+sudo ./SetupServiceFabric.sh --servicefabricruntime=/mnt/c/Users/testuser/Downloads/servicefabric.deb --servicefabricsdk=/mnt/c/Users/testuser/Downloads/servicefabric_sdkcommon.deb
+```
++
+### Known Limitations
+
+ The following are known limitations of the local cluster running inside Linux Distribution:
+
 * Currently, only the Ubuntu-18.04 distribution is supported.
 * For a seamless experience with Local Cluster Manager and Visual Studio, we recommend managing the cluster from PowerShell scripts or the LocalClusterManager tool on the Windows host.
+
+### Frequently Asked Questions
+
 1. What Linux distributions are supported for SF local cluster setup?
 Currently, only Ubuntu-18.04 is supported for the Linux local cluster.
+
 2. Can Windows and Linux SF clusters run in parallel with the WSL2 setup?
 No. Only one local cluster can run at a time, either in the host or in the guest VM.
+
 3. How do I deploy a one-node Linux local cluster?
 A one-node or five-node Linux local cluster can be deployed from the Local Cluster Manager menu options. When deploying from the setup script, a five-node cluster is deployed by default; for a one-node cluster, CreateOneNodeCluster should be passed.
+
 4. How do I connect to the Linux local cluster in PowerShell and Visual Studio?
 If the Linux local cluster is up and running, the Connect-ServiceFabricCluster cmdlet should automatically connect to it. Similarly, Visual Studio will automatically detect the local cluster.
 You can also connect to the cluster by providing the cluster endpoint in PowerShell or Visual Studio.
+
 5. Where is SF cluster data located for the Linux local cluster?
 If you're using the Ubuntu-18.04 distribution, SF data is located at \\wsl$\Ubuntu-18.04\home\sfuser\sfdevcluster from the Windows host.
+
+## Next steps
+* Learn about [Service Fabric support options](service-fabric-support.md)
service-fabric Service Fabric Local Linux Cluster Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows.md
Last updated 10/16/2020
# service-fabric-get-started-linux.md # service-fabric-get-started-mac.md # service-fabric-local-linux-cluster-windows.md
+# service-fabric-local-linux-cluster-windows-wsl2.md
# Set up a Linux Service Fabric cluster on your Windows developer machine
To set up a local Docker container and have a Service Fabric cluster running on
* Running container-based apps requires running SF on a Linux host. Nested container applications are currently not supported. ## Next steps
+* [Set up a Linux cluster on Windows via WSL2](service-fabric-local-linux-cluster-windows-wsl2.md)
* [Create and deploy your first Service Fabric Java application on Linux using Yeoman](service-fabric-create-your-first-linux-application-with-java.md) * Get started with [Eclipse](./service-fabric-get-started-eclipse.md) * Check out other [Java samples](https://github.com/Azure-Samples/service-fabric-java-getting-started)
service-fabric Service Fabric Reliable Services Communication Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting.md
Title: Service remoting by using C# in Service Fabric
+ Title: Service remoting by using C# in Service Fabric
description: Service Fabric remoting allows clients and services to communicate with C# services by using a remote procedure call. Last updated 09/20/2017
This step makes sure that the service is listening only on the V2 listener.
```csharp [assembly: FabricTransportServiceRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V2_1, RemotingClientVersion = RemotingClientVersion.V2_1)] ```
-
+ ### Use custom serialization with a remoting wrapped message For a remoting wrapped message, we create a single wrapped object with all the parameters as a field in it.
Follow these steps:
## Next steps
+* [Enabling DataContract remoting exception serialization](./service-fabric-reliable-services-exception-serialization.md)
* [Web API with OWIN in Reliable Services](./service-fabric-reliable-services-communication-aspnetcore.md) * [Windows Communication Foundation communication with Reliable Services](service-fabric-reliable-services-communication-wcf.md) * [Secure communication for Reliable Services](service-fabric-reliable-services-secure-communication.md)
service-fabric Service Fabric Reliable Services Exception Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-exception-serialization.md
+
+ Title: Enabling Data Contract serialization for Remoting exceptions in Service Fabric
+description: Learn how to enable Data Contract serialization for remoting exceptions in Service Fabric.
+ Last updated : 03/30/2022++
+# Remoting Exception Serialization Overview
+BinaryFormatter-based serialization isn't secure, and Microsoft strongly recommends against using BinaryFormatter for data processing. For more information on the security implications, see the [BinaryFormatter security guide](https://docs.microsoft.com/dotnet/standard/serialization/binaryformatter-security-guide).
+Service Fabric has historically used BinaryFormatter to serialize exceptions. Starting with Service Fabric v9.0, [Data Contract-based serialization](https://docs.microsoft.com/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is available as an opt-in feature. We strongly recommend that you opt in to DataContract remoting exception serialization by following the steps in this article.
+
+Support for BinaryFormatter-based remoting exception serialization will be deprecated in the future.
+
+## Steps to enable Data Contract Serialization for Remoting Exceptions
+
+>[!NOTE]
+>Data Contract serialization for remoting exceptions is available only for Remoting V2/V2_1 services.
+
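+As a reminder, a service opts in to the V2/V2_1 remoting stack with the remoting provider assembly attribute described in the service remoting article, for example:
+
+```csharp
+// Opts the assembly in to the V2_1 remoting listener and client.
+[assembly: FabricTransportServiceRemotingProvider(
+    RemotingListenerVersion = RemotingListenerVersion.V2_1,
+    RemotingClientVersion = RemotingClientVersion.V2_1)]
+```
+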
+You can enable Data Contract serialization for remoting exceptions by using the following steps:
+
+1. Enable DataContract remoting exception serialization on the **Service** side by using `FabricTransportRemotingListenerSettings.ExceptionSerializationTechnique` while creating the remoting listener.
+
+ - StatelessService
+```csharp
+protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
+{
+ return new[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ }),
+ "ServiceEndpointV2")
+ };
+}
+```
+ - StatefulService
+```csharp
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ }),
+ "ServiceEndpointV2")
+ };
+}
+```
+
+ - ActorService
+To enable DataContract remoting exception serialization on the `ActorService`, override `CreateServiceReplicaListeners()` by extending `ActorService`:
+```csharp
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new List<ServiceReplicaListener>
+ {
+ new ServiceReplicaListener(_ =>
+ {
+ return new FabricTransportActorServiceRemotingListener(
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ });
+ },
+ "MyActorServiceEndpointV2")
+ };
+}
+```
+
+If the original exception has multiple levels of inner exceptions, you can control how many levels of inner exceptions are serialized by setting `FabricTransportRemotingListenerSettings.RemotingExceptionDepth`, as shown in the sketch below.
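+
+For example, the following sketch limits serialization to three levels of inner exceptions (the depth value is illustrative, not a recommendation):
+
+```csharp
+var listenerSettings = new FabricTransportRemotingListenerSettings
+{
+    ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+
+    // Illustrative value: serialize at most three levels of inner exceptions.
+    RemotingExceptionDepth = 3,
+};
+```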
+
+2. Enable DataContract remoting exception serialization on the **Client** side by using `FabricTransportRemotingSettings.ExceptionDeserializationTechnique` while creating the client factory.
+ - ServiceProxyFactory creation
+```csharp
+var serviceProxyFactory = new ServiceProxyFactory(
+(callbackClient) =>
+{
+ return new FabricTransportServiceRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient);
+});
+```
+ - ActorProxyFactory
+```csharp
+var actorProxyFactory = new ActorProxyFactory(
+(callbackClient) =>
+{
+ return new FabricTransportActorRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient);
+});
+```
+
+3. DataContract remoting exception serialization converts an exception to a data transfer object (DTO) on the service side, and the DTO is converted back to an exception on the client side. You need to register an `ExceptionConvertor` to convert the desired exceptions to DTO objects and vice versa.
+The framework implements convertors for the following list of exceptions. If your service code depends on exceptions outside this list for retry implementation, exception handling, and so on, you need to implement and register convertors for those exceptions.
+
+ * All Service Fabric exceptions (derived from `System.Fabric.FabricException`)
+ * System exceptions (derived from `System.SystemException`)
+ * System.AccessViolationException
+ * System.AppDomainUnloadedException
+ * System.ArgumentException
+ * System.ArithmeticException
+ * System.ArrayTypeMismatchException
+ * System.BadImageFormatException
+ * System.CannotUnloadAppDomainException
+ * System.Collections.Generic.KeyNotFoundException
+ * System.ContextMarshalException
+ * System.DataMisalignedException
+ * System.ExecutionEngineException
+ * System.FormatException
+ * System.IndexOutOfRangeException
+ * System.InsufficientExecutionStackException
+ * System.InvalidCastException
+ * System.InvalidOperationException
+ * System.InvalidProgramException
+ * System.IO.InternalBufferOverflowException
+ * System.IO.InvalidDataException
+ * System.IO.IOException
+ * System.MemberAccessException
+ * System.MulticastNotSupportedException
+ * System.NotImplementedException
+ * System.NotSupportedException
+ * System.NullReferenceException
+ * System.OperationCanceledException
+ * System.OutOfMemoryException
+ * System.RankException
+ * System.Reflection.AmbiguousMatchException
+ * System.Reflection.ReflectionTypeLoadException
+ * System.Resources.MissingManifestResourceException
+ * System.Resources.MissingSatelliteAssemblyException
+ * System.Runtime.InteropServices.ExternalException
+ * System.Runtime.InteropServices.InvalidComObjectException
+ * System.Runtime.InteropServices.InvalidOleVariantTypeException
+ * System.Runtime.InteropServices.MarshalDirectiveException
+ * System.Runtime.InteropServices.SafeArrayRankMismatchException
+ * System.Runtime.InteropServices.SafeArrayTypeMismatchException
+ * System.Runtime.Serialization.SerializationException
+ * System.StackOverflowException
+ * System.Threading.AbandonedMutexException
+ * System.Threading.SemaphoreFullException
+ * System.Threading.SynchronizationLockException
+ * System.Threading.ThreadInterruptedException
+ * System.Threading.ThreadStateException
+ * System.TimeoutException
+ * System.TypeInitializationException
+ * System.TypeLoadException
+ * System.TypeUnloadedException
+ * System.UnauthorizedAccessException
+ * System.ArgumentNullException
+ * System.IO.FileNotFoundException
+ * System.IO.DirectoryNotFoundException
+ * System.ObjectDisposedException
+ * System.AggregateException
+
+## Sample implementation of convertors for a custom exception
+
+The following is a reference `IExceptionConvertor` implementation on the **Service** and **Client** sides for a well-known exception type, `CustomException`.
+
+- CustomException
+```csharp
+class CustomException : Exception
+{
+ public CustomException(string message, string field1, string field2)
+ : base(message)
+ {
+ this.Field1 = field1;
+ this.Field2 = field2;
+ }
+
+ public CustomException(string message, Exception innerEx, string field1, string field2)
+ : base(message, innerEx)
+ {
+ this.Field1 = field1;
+ this.Field2 = field2;
+ }
+
+ public string Field1 { get; set; }
+
+ public string Field2 { get; set; }
+}
+```
+
+- `IExceptionConvertor` implementation on the **Service** side.
+```csharp
+class CustomConvertorService : Microsoft.ServiceFabric.Services.Remoting.V2.Runtime.IExceptionConvertor
+{
+ public Exception[] GetInnerExceptions(Exception originalException)
+ {
+ return originalException.InnerException == null ? null : new Exception[] { originalException.InnerException };
+ }
+
+ public bool TryConvertToServiceException(Exception originalException, out ServiceException serviceException)
+ {
+ serviceException = null;
+ if (originalException is CustomException customEx)
+ {
+ serviceException = new ServiceException(customEx.GetType().FullName, customEx.Message);
+ serviceException.ActualExceptionStackTrace = originalException.StackTrace;
+ serviceException.ActualExceptionData = new Dictionary<string, string>()
+ {
+ { "Field1", customEx.Field1 },
+ { "Field2", customEx.Field2 },
+ };
+
+ return true;
+ }
+
+ return false;
+ }
+}
+```
+The actual exception observed during the execution of the remoting call is passed as input to `TryConvertToServiceException`. If the exception type is a well-known one, `TryConvertToServiceException` should convert the original exception to a `ServiceException` and return it as an out parameter. It should return `true` if the original exception type is well known and the original exception is successfully converted to the `ServiceException`; otherwise, `false`.
+
+`GetInnerExceptions()` should return the list of inner exceptions at the current level.
+
+- `IExceptionConvertor` implementation on the **Client** side.
+```csharp
+class CustomConvertorClient : Microsoft.ServiceFabric.Services.Remoting.V2.Client.IExceptionConvertor
+{
+ public bool TryConvertFromServiceException(ServiceException serviceException, out Exception actualException)
+ {
+ return this.TryConvertFromServiceException(serviceException, (Exception)null, out actualException);
+ }
+
+ public bool TryConvertFromServiceException(ServiceException serviceException, Exception innerException, out Exception actualException)
+ {
+ actualException = null;
+ if (serviceException.ActualExceptionType == typeof(CustomException).FullName)
+ {
+ actualException = new CustomException(
+ serviceException.Message,
+ innerException,
+ serviceException.ActualExceptionData["Field1"],
+ serviceException.ActualExceptionData["Field2"]);
+
+ return true;
+ }
+
+ return false;
+ }
+
+ public bool TryConvertFromServiceException(ServiceException serviceException, Exception[] innerExceptions, out Exception actualException)
+ {
+ throw new NotImplementedException();
+ }
+}
+```
+The `ServiceException` is passed as a parameter to `TryConvertFromServiceException`, along with the converted inner exception(s). If the actual exception type (`ServiceException.ActualExceptionType`) is a known one, the convertor should create the actual exception object from the `ServiceException` and the inner exception(s).
+
+- `IExceptionConvertor` registration on the **Service** side.
+
+ To register convertors, override `CreateServiceInstanceListeners` and pass the list of `IExceptionConvertor`s when creating the remoting listener instance.
+
+ - *StatelessService*
+```csharp
+protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
+{
+ return new[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorService(),
+ }),
+ "ServiceEndpointV2")
+ };
+}
+```
+
+ - *StatefulService*
+```csharp
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new []
+ {
+ new CustomConvertorService(),
+ }),
+ "ServiceEndpointV2")
+ };
+}
+```
+
+ - *ActorService*
+```csharp
+protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+{
+ return new List<ServiceReplicaListener>
+ {
+ new ServiceReplicaListener(_ =>
+ {
+ return new FabricTransportActorServiceRemotingListener(
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorService(),
+ });
+ },
+ "MyActorServiceEndpointV2")
+ };
+}
+```
+- `IExceptionConvertor` registration on the **Client** side.
+
+ To register convertors, pass the list of `IExceptionConvertor`s when creating the client factory instance.
+
+ - *ServiceProxyFactory creation*
+```csharp
+var serviceProxyFactory = new ServiceProxyFactory(
+(callbackClient) =>
+{
+ return new FabricTransportServiceRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient,
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorClient(),
+ });
+});
+```
+
+ - *ActorProxyFactory creation*
+```csharp
+var actorProxyFactory = new ActorProxyFactory(
+(callbackClient) =>
+{
+ return new FabricTransportActorRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient,
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorClient(),
+ });
+});
+```
+>[!NOTE]
+>If the framework finds a convertor for the exception, the converted (actual) exception is wrapped inside an `AggregateException` and thrown at the remoting API (proxy). If the framework can't find a convertor, a `ServiceException` containing all the details of the actual exception is wrapped inside an `AggregateException` and thrown.
+
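+As a sketch of what this behavior means for client code, a caller can unwrap the rebuilt exception from the `AggregateException` thrown by the proxy (`proxy` and `GetValueAsync` are hypothetical names):
+
+```csharp
+try
+{
+    string value = await proxy.GetValueAsync("myKey");
+}
+catch (AggregateException ae) when (ae.InnerException is CustomException customEx)
+{
+    // A client-side convertor was found, so the actual exception type is available.
+    Console.WriteLine($"{customEx.Message} ({customEx.Field1}, {customEx.Field2})");
+}
+catch (AggregateException ae) when (ae.InnerException is ServiceException se)
+{
+    // No convertor was found; inspect the raw details of the actual exception instead.
+    Console.WriteLine(se.ActualExceptionType);
+}
+```
+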
+### Steps to upgrade an existing service to enable DataContract serialization for remoting exceptions
+Existing services must upgrade in the following order (*service first*). Failure to follow this order could result in misbehavior in retry logic, exception handling, and so on.
+1. Implement the **Service**-side `ExceptionConvertor`s for the desired exceptions (if any). Update the remoting listener registration logic with the `ExceptionSerializationTechnique` and the list of `IExceptionConvertor`s. Upgrade the existing service to apply the exception serialization changes.
+2. Implement the **Client**-side `ExceptionConvertor`s for the desired exceptions (if any). Update the proxy factory creation logic with the `ExceptionDeserializationTechnique` and the list of `IExceptionConvertor`s. Upgrade the existing client to apply the exception serialization changes.
+
+## Next steps
+
+* [Web API with OWIN in Reliable Services](./service-fabric-reliable-services-communication-aspnetcore.md)
+* [Windows Communication Foundation communication with Reliable Services](service-fabric-reliable-services-communication-wcf.md)
+* [Secure communication for Reliable Services](service-fabric-reliable-services-secure-communication.md)
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
The following table lists the available configuration settings.
| `cwd`<br />(Azure Pipelines only) | Absolute path to the working folder. Defaults to `$(System.DefaultWorkingDirectory)`. | No | | `build_timeout_in_minutes` | Set this value to customize the build timeout. Defaults to `15`. | No |
-With these settings, you can set up GitHub Actions or [Azure Pipelines](publish-devops.md) to run continuous integration/continuous delivery (CI/CD) for your static web app.
+With these settings, you can set up GitHub Actions or [Azure Pipelines](get-started-portal.md?pivots=azure-devops) to run continuous integration/continuous delivery (CI/CD) for your static web app.
## File name and location
In this configuration:
- The `api_location` points to the `api` folder that contains the Azure Functions application for the site's API endpoints. This value is relative to the working directory (`cwd`). To set it to the working directory, use `/`. - The `output_location` points to the `public` folder that contains the final version of the app's source files. This value is relative to `app_location`. For .NET projects, the location is relative to the publish output folder. - The `cwd` is an absolute path pointing to the working directory. It defaults to `$(System.DefaultWorkingDirectory)`.-- The `$(deployment_token)` variable points to the [generated Azure DevOps deployment token](./publish-devops.md).
+- The `$(deployment_token)` variable points to the [generated Azure DevOps deployment token](./get-started-portal.md?pivots=azure-devops).
> [!NOTE] > `app_location` and `api_location` must be relative to the working directory (`cwd`) and they must be subdirectories under `cwd`.
static-web-apps Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md
Last updated 05/07/2021 -
+zone_pivot_groups: devops-or-github
# Quickstart: Building your first static site in the Azure portal
-Azure Static Web Apps publishes a website to a production environment by building apps from a GitHub repository. In this quickstart, you deploy a web application to Azure Static Web apps using the Azure portal.
+Azure Static Web Apps publishes a website to a production environment by building apps from an Azure DevOps or GitHub repository. In this quickstart, you deploy a web application to Azure Static Web Apps by using the Azure portal.
## Prerequisites - If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free). - [GitHub](https://github.com) account-- [Azure](https://portal.azure.com) account+
+- If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
+- [Azure DevOps](https://azure.microsoft.com/services/devops) account
+ [!INCLUDE [create repository from template](../../includes/static-web-apps-get-started-create-repo.md)] ++
+## Create a repository
+
+This article uses a GitHub repository to make it easy for you to get started. The repository contains a starter app that you deploy by using Azure Static Web Apps.
+
+1. Sign in to Azure DevOps.
+1. Select the **New repository** button.
+1. In the *Create new project* window, expand the **Advanced** button and make the following selections:
+
+ | Setting | Value |
+ |--|--|
+ | Project | Enter **my-first-web-static-app**. |
+ | Visibility | Select **Private**. |
+ | Version control | Select **Git**. |
+ | Work item process | Select the option that best suits your development methods. |
+
+1. Select the **Create** button.
+1. Select the **Repos** menu item.
+1. Select the **Files** menu item.
+1. Under the *Import repository* card, select the **Import** button.
+1. Copy a repository URL for the framework of your choice, and paste it into the *Clone URL* box.
+
+ # [No Framework](#tab/vanilla-javascript)
+
+ [https://github.com/staticwebdev/vanilla-basic.git](https://github.com/staticwebdev/vanilla-basic.git)
+
+ # [Angular](#tab/angular)
+
+ [https://github.com/staticwebdev/angular-basic.git](https://github.com/staticwebdev/angular-basic.git)
+
+ # [Blazor](#tab/blazor)
+
+ [https://github.com/staticwebdev/blazor-basic.git](https://github.com/staticwebdev/blazor-basic.git)
+
+ # [React](#tab/react)
+
+ [https://github.com/staticwebdev/react-basic.git](https://github.com/staticwebdev/react-basic.git)
+
+ # [Vue](#tab/vue)
+
+ [https://github.com/staticwebdev/vue-basic.git](https://github.com/staticwebdev/vue-basic.git)
+
+
+
+1. Select the **Import** button and wait for the import process to complete.
++ ## Create a static web app Now that the repository is created, you can create a static web app from the Azure portal.
Now that the repository is created, you can create a static web app from the Azu
1. Select **Static Web Apps**. 1. Select **Create**. + In the _Basics_ section, begin by configuring your new app and linking it to a GitHub repository. :::image type="content" source="media/getting-started-portal/quickstart-portal-basics.png" alt-text="Basics section":::
-1. Select your _Azure subscription_.
-1. Next to _Resource Group_, select the **Create new** link.
-1. Enter **static-web-apps-test** in the textbox.
-1. Under to _Static Web App details_, enter **my-first-static-web-app** in the textbox.
-1. Under _Azure Functions and staging details_, select a region closest to you.
-1. Under _Deployment details_, select **GitHub**.
-1. Select the **Sign-in with GitHub** button and authenticate with GitHub.
+| Setting | Value |
+|--|--|
+| Subscription | Select your Azure subscription. |
+| Resource Group | Select the **Create new** link, and enter **static-web-apps-test** in the textbox. |
+| Name | Enter **my-first-static-web-app** in the textbox. |
+| Plan type | Select **Free**. |
+| Azure Functions and staging details | Select a region closest to you. |
+| Source | Select **GitHub**. |
+
+Select the **Sign-in with GitHub** button and authenticate with GitHub.
After you sign in with GitHub, enter the repository information.
+| Setting | Value |
+|--|--|
+| Organization | Select your organization. |
+| Repository| Select **my-first-web-static-app**. |
+| Branch | Select **main**. |
+ :::image type="content" source="media/getting-started-portal/quickstart-portal-source-control.png" alt-text="Repository details":::
-1. Select your preferred _Organization_ name.
-1. Select **my-first-web-static-app** from the _Repository_ drop-down.
-1. Select **main** from the _Branch_ drop-down.
+> [!NOTE]
+> If you don't see any repositories, you may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
- > [!NOTE]
- > If you don't see any repositories, you may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**. For organization repositories, you must be an owner of the organization to grant the permissions.
-1. In the _Build Details_ section, add configuration details specific to your preferred front-end framework.
- # [No Framework](#tab/vanilla-javascript)
+In the _Basics_ section, begin by configuring your new app and linking it to an Azure DevOps repository.
- 1. Select **Custom** from the _Build Presets_ dropdown.
- 1. Type **./src** in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Type **./src** _App artifact location_ box.
+| Setting | Value |
+|--|--|
+| Subscription | Select your Azure subscription. |
+| Resource Group | Select the **Create new** link, and enter **static-web-apps-test** in the textbox. |
+| Name | Enter **my-first-static-web-app** in the textbox. |
+| Plan type | Select **Free**. |
+| Azure Functions and staging details | Select a region closest to you. |
+| Source | Select **DevOps**. |
+| Organization | Select your organization. |
+| Project | Select your project. |
+| Repository| Select **my-first-web-static-app**. |
+| Branch | Select **main**. |
- # [Angular](#tab/angular)
- 1. Select **Angular** from the _Build Presets_ dropdown.
- 1. Keep the default value in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Type **dist/angular-basic** in the _App artifact location_ box.
+In the _Build Details_ section, add configuration details specific to your preferred front-end framework.
- # [Blazor](#tab/blazor)
+# [No Framework](#tab/vanilla-javascript)
- 1. Select **Blazor** from the _Build Presets_ dropdown.
- 1. Keep the default value of **Client** in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Keep the default value of **wwwroot** in the _App artifact location_ box.
+1. Select **Custom** from the _Build Presets_ dropdown.
+1. Type **./src** in the _App location_ box.
+1. Leave the _Api location_ box empty.
+1. Type **./src** in the _App artifact location_ box.
- # [React](#tab/react)
+# [Angular](#tab/angular)
- 1. Select **React** from the _Build Presets_ dropdown.
- 1. Keep the default value in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Type **build** in the _App artifact location_ box.
+1. Select **Angular** from the _Build Presets_ dropdown.
+1. Keep the default value in the _App location_ box.
+1. Leave the _Api location_ box empty.
+1. Type **dist/angular-basic** in the _App artifact location_ box.
- # [Vue](#tab/vue)
+# [Blazor](#tab/blazor)
- 1. Select **Vue.js** from the _Build Presets_ dropdown.
- 1. Keep the default value in the _App location_ box.
- 1. Leave the _Api location_ box empty.
- 1. Keep the default value in the _App artifact location_ box.
+1. Select **Blazor** from the _Build Presets_ dropdown.
+1. Keep the default value of **Client** in the _App location_ box.
+1. Leave the _Api location_ box empty.
+1. Keep the default value of **wwwroot** in the _App artifact location_ box.
-
+# [React](#tab/react)
-1. Select **Review + create**.
+1. Select **React** from the _Build Presets_ dropdown.
+1. Keep the default value in the _App location_ box.
+1. Leave the _Api location_ box empty.
+1. Type **build** in the _App artifact location_ box.
- :::image type="content" source="media/getting-started-portal/review-create.png" alt-text="Review create button":::
+# [Vue](#tab/vue)
- > [!NOTE]
- > You can edit the [workflow file](build-configuration.md) to change these values after you create the app.
+1. Select **Vue.js** from the _Build Presets_ dropdown.
+1. Keep the default value in the _App location_ box.
+1. Leave the _Api location_ box empty.
+1. Keep the default value in the _App artifact location_ box.
-1. Select **Create**.
++
+Select **Review + create**.
+++
+> [!NOTE]
+> You can edit the [workflow file](build-configuration.md) to change these values after you create the app.
++
+Select **Create**.
++
+Select **Go to resource**.
++
+## View the website
+
+There are two aspects to deploying a static app. The first creates the underlying Azure resources that make up your app. The second is a workflow that builds and publishes your application.
+
+Before you can navigate to your new static site, the deployment build must first finish running.
+
+The Static Web Apps *Overview* window displays a series of links that help you interact with your web app.
+++
+1. Select the banner that says _Click here to check the status of your GitHub Actions runs_ to view the GitHub Actions runs for your repository. Once you verify that the deployment job is complete, you can navigate to your website via the generated URL.
+
+2. Once the GitHub Actions workflow is complete, you can select the _URL_ link to open the website in a new tab.
- :::image type="content" source="media/getting-started-portal/create-button.png" alt-text="Create button":::
-1. Select **Go to resource**.
- :::image type="content" source="media/getting-started-portal/resource-button.png" alt-text="Go to resource button":::
Once the workflow is complete, you can select the _URL_ link to open the website in a new tab.
## Clean up resources
static-web-apps Publish Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-devops.md
- Title: "Tutorial: Publish Azure Static Web Apps with Azure DevOps"
-description: Learn to use Azure DevOps to publish Azure Static Web Apps.
---- Previously updated : 08/17/2021----
-# Tutorial: Publish Azure Static Web Apps with Azure DevOps
-
-This article demonstrates how to deploy to [Azure Static Web Apps](./overview.md) using [Azure DevOps](https://dev.azure.com/).
-
-In this tutorial, you learn to:
--- Set up an Azure Static Web Apps site-- Create an Azure Pipeline to build and publish a static web app-
-## Prerequisites
--- **Active Azure account:** If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/).-- **Azure DevOps project:** If you don't have one, you can [create a project for free](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/).
- - Azure DevOps includes **Azure Pipelines**. If you need help getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops).
- - The Static Web App Pipeline Task currently only works on **Linux** machines. When running the pipeline mentioned below, please ensure it is running on a Linux VM.
-
-## Create a static web app in an Azure DevOps
-
- > [!NOTE]
- > If you have an existing app in your repository, you may skip to the next section.
-
-1. Navigate to your repository in Azure Repos.
-
-1. Select **Import** to begin importing a sample application.
-
- :::image type="content" source="media/publish-devops/devops-repo.png" alt-text="DevOps Repo":::
-
-1. In **Clone URL**, enter `https://github.com/staticwebdev/vanilla-api.git`.
-
-1. Select **Import**.
-
-## Create a static web app
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
-
-1. Select **Create a Resource**.
-
-1. Search for **Static Web Apps**.
-
-1. Select **Static Web Apps**.
-
-1. Select **Create**.
-
-1. Create a new static web app with the following values.
-
- :::image type="content" source="media/publish-devops/azure-portal-static-web-apps-devops.png" alt-text="Deployment details - other":::
-
- | Setting | Value |
- |||
- | Subscription | Your Azure subscription name. |
- | Resource Group | Select an existing group name, or create a new one. |
- | Name | Enter **myDevOpsApp**. |
- | Hosting plan type | Select **Free**. |
- | Region | Select a region closest to you. |
- | Source | Select **Other**. |
-
-1. Select **Review + create**
-
-1. Select **Create**.
-
-1. Once the deployment is successful, select **Go to resource**.
-
-1. Select **Manage deployment token**.
-
-1. Copy the **deployment token** and paste the deployment token value into a text editor for use in another screen.
-
- > [!NOTE]
- > This value is set aside for now because you'll copy and paste more values in coming steps.
-
- :::image type="content" source="media/publish-devops/deployment-token.png" alt-text="Deployment token":::
-
-## Create the Pipeline Task in Azure DevOps
-
-1. Navigate to the repository in Azure Repos that was created earlier.
-
-2. Select **Set up build**.
-
- :::image type="content" source="media/publish-devops/azdo-build.png" alt-text="Build pipeline":::
-
-3. In the *Configure your pipeline* screen, select **Starter pipeline**.
-
- :::image type="content" source="media/publish-devops/configure-pipeline.png" alt-text="Configure pipeline":::
-
-4. Copy the following YAML and replace the generated configuration in your pipeline with this code.
-
- ```yaml
- trigger:
- - main
-
- pool:
- vmImage: ubuntu-latest
-
- steps:
- - checkout: self
- submodules: true
- - task: AzureStaticWebApp@0
- inputs:
- app_location: '/src'
- api_location: 'api'
- output_location: '/src'
- azure_static_web_apps_api_token: $(deployment_token)
- ```
-
- > [!NOTE]
- > If you are not using the sample app, the values for `app_location`, `api_location`, and `output_location` need to change to match the values in your application.
-
- [!INCLUDE [static-web-apps-folder-structure](../../includes/static-web-apps-folder-structure.md)]
-
- The `azure_static_web_apps_api_token` value is self managed and is manually configured.
-
-5. Select **Variables**.
-
-6. Select **New variable**.
-
-7. Name the variable **deployment_token** (matching the name in the workflow).
-
-8. Copy the deployment token that you previously pasted into a text editor.
-
-9. Paste in the deployment token in the _Value_ box.
-
- :::image type="content" source="media/publish-devops/yaml-token.png" alt-text="Variable token" lightbox="media/publish-devops/yaml-token.png":::
-
-10. Select **Keep this value secret**.
-
-11. Select **OK**.
-
-12. Select **Save** to return to your pipeline YAML.
-
-13. Select **Save and run** to open the _Save and run_ dialog.
-
- :::image type="content" source="media/publish-devops/yaml-save.png" alt-text="Pipeline" lightbox="media/publish-devops/yaml-save.png":::
-
-14. Select **Save and run** to run the pipeline.
-
-15. Once the deployment is successful, navigate to the Azure Static Web Apps **Overview** which includes links to the deployment configuration. Note how the _Source_ link now points to the branch and location of the Azure DevOps repository.
-
-16. Select the **URL** to see your newly deployed website.
-
- :::image type="content" source="media/publish-devops/deployment-location.png" alt-text="Deployment location":::
-
-## Clean up resources
-
-Clean up the resources you deployed by deleting the resource group.
-
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name you used in this tutorial.
-4. Select **Delete resource group** from the top menu.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Configure Azure Static Web Apps](./configuration.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- `ssh-keyscan` is not supported. -- SSH commands, that are not SFTP, are not supported.
+- SSH and SCP commands that are not SFTP are not supported.
+
+- FTPS and FTP are not supported.
## Troubleshooting
synapse-analytics Quickstart Create Sql Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-portal.md
Sign in to the [Azure portal](https://portal.azure.com/)
![Dedicated SQL pool create flow - basics tab.](media/quickstart-create-sql-pool/create-sql-pool-portal-02.png) > [!IMPORTANT]
- > Note that there are specific limitations for the names that dedicated SQL pools can use. Names can't contain special characters, must be 15 or less characters, not contain reserved words, and be unique in the workspace.
+ > Note that there are specific limitations for the names that dedicated SQL pools can use. Names can't contain special characters or reserved words, must be 60 characters or fewer, and must be unique in the workspace.
3. Select **Next: Additional settings**. 4. Select **None** to provision the dedicated SQL pool without data. Leave the default collation selected.
synapse-analytics Quickstart Create Sql Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-studio.md
Sign in to the [Azure portal](https://portal.azure.com/)
![SQL pools create flow - basics tab.](media/quickstart-create-sql-pool/create-sql-pool-studio-24.png) > [!IMPORTANT]
- > Note that there are specific limitations for the names that dedicated SQL pools can use. Names can't contain special characters, must be 15 or less characters, not contain reserved words, and be unique in the workspace.
+ > Note that there are specific limitations for the names that dedicated SQL pools can use. Names can't contain special characters or reserved words, must be 60 characters or fewer, and must be unique in the workspace.
4. In the next tab, **Additional settings**, select **none** to provision the SQL pool without data. Leave the default collation as selected.
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspaces-encryption.md
A complete Encryption-at-Rest solution ensures the data is never persisted in un
The first layer of encryption for Azure services is enabled with platform-managed keys. By default, Azure Disks, and data in Azure Storage accounts are automatically encrypted at rest. Learn more about how encryption is used in Microsoft Azure in the [Azure Encryption Overview](../../security/fundamentals/encryption-overview.md).
+> [!NOTE]
+> Some items considered customer content, such as table names, object names, and index names, may be transmitted in log files for support and troubleshooting by Microsoft.
+ ## Azure Synapse encryption This section will help you better understand how customer-managed key encryption is enabled and enforced in Synapse workspaces. This encryption uses existing keys or new keys generated in Azure Key Vault. A single key is used to encrypt all the data in a workspace. Synapse workspaces support RSA 2048 and 3072 byte-sized keys, and RSA-HSM keys.
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
You can reference other notebooks in a Synapse notebook activity via calling [%r
Go to **Pipeline runs** under the **Monitor** tab, you'll see the pipeline you have triggered. Open the pipeline that contains notebook activity to see the run history. You can see the latest notebook run snapshot including both cells input and output by selecting the **open notebook** button.
-![see-notebook-activity-history](./media/synapse-notebook-activity/input-output-open-notebook.png)
+![Screenshot that shows the notebook activity history.](./media/synapse-notebook-activity/input-output-open-notebook.png)
+
+Open notebook snapshot:
+
+![Screenshot that shows an open notebook snapshot.](./media/synapse-notebook-activity/open-notebook-snapshot.png)
You can see the notebook activity input or output by selecting the **input** or **Output** button. If your pipeline failed with a user error, select the **output** to check the **result** field to see the detailed user error traceback.
-![screenshot-showing-see-output-user-error](./media/synapse-notebook-activity/notebook-output-user-error.png)
+![Screenshot that shows the user error details.](./media/synapse-notebook-activity/notebook-output-user-error.png)
## Synapse notebook activity definition
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
For best results, we recommend using autoscale with VMs you deployed with Azure
> > - You can only use autoscale in the Azure public cloud. > - You can only configure autoscale with the Azure portal.
-> - You can only deploy the scaling plan to US, Canadian, and European regions.
+> - You can only deploy the scaling plan to these regions:
+> - Canada Central
+> - Canada East
+> - Central US
+> - East US
+> - East US 2
+> - North Central US
+> - North Europe
+> - South Central US
+> - West Central US
+> - West Europe
+> - West US
+> - West US 2
+ ## Requirements
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
Previously updated : 04/11/2022 Last updated : 04/27/2022
User accounts can be cloud-only or synced users from the same Azure AD tenant.
The following known limitations may impact access to your on-premises or Active Directory domain-joined resources and should be considered when deciding whether Azure AD-joined VMs are right for your environment. We currently recommend Azure AD-joined VMs for scenarios where users only need access to cloud-based resources or Azure AD-based authentication. - Azure Virtual Desktop (classic) doesn't support Azure AD-joined VMs.-- Azure AD-joined VMs don't currently support external users.
+- Azure AD-joined VMs don't currently support external identities, such as Azure AD Business-to-Business (B2B) and Azure AD Business-to-Consumer (B2C).
- Azure AD-joined VMs can only access Azure Files file shares for synced users using Azure AD Kerberos. - The Windows Store client doesn't currently support Azure AD-joined VMs. - Azure Virtual Desktop doesn't currently support single sign-on for Azure AD-joined VMs.
virtual-desktop Getting Started Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/getting-started-feature.md
Last updated 07/14/2021 -+ # Deploy Azure Virtual Desktop with the getting started feature
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
Previously updated : 07/14/2021 Last updated : 04/27/2022
Here's a summary of the two types of licenses for Azure Virtual Desktop you can
- Cost per user each month depends on user behavior - Only includes access rights to Azure Virtual Desktop
+> [!IMPORTANT]
+> Per-user access pricing only supports Windows 10 Enterprise multi-session and Windows 11 Enterprise multi-session. Per-user access pricing currently doesn't support Windows Server session hosts.
+ ## Licensing other products and services for use with Azure Virtual Desktop The Azure Virtual Desktop per-user access license isn't a full replacement for a Windows or Microsoft 365 license. Per-user licenses only grant access rights to Azure Virtual Desktop and don't include Microsoft Office, Microsoft 365 Defender, or Universal Print. This means that if you choose a per-user license, you'll need to separately license other products and services to grant your users access to them in your Azure Virtual Desktop environment.
virtual-machine-scale-sets Instance Generalized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version-cli.md
- Title: Create a scale set from a generalized image with Azure CLI
-description: Create a scale set using a generalized image in an Azure Compute Gallery using the Azure CLI.
------ Previously updated : 05/01/2020---
-# Create a scale set from a generalized image with Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-
-Create a scale set from a generalized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md) using the Azure CLI. If want to create a scale set using a specialized image version, see [Create scale set instances from a specialized image](instance-specialized-image-version-cli.md).
-
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.4.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
-
-Replace resource names as needed in this example.
-
-List the image definitions in a gallery using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list) to see the name and ID of the definitions.
-
-```azurecli-interactive
-resourceGroup=myGalleryRG
-gallery=myGallery
-az sig image-definition list \
- --resource-group $resourceGroup \
- --gallery-name $gallery \
- --query "[].[name, id]" \
- --output tsv
-```
-
-Create the scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create).
-
-Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`. Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
-
-In this example, we are creating instances from the latest version of the *myImageDefinition* image.
-
-```azurecli
-az group create --name myResourceGroup --location eastus
-az vmss create \
- --resource-group myResourceGroup \
- --name myScaleSet \
- --image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition"
- --admin-username azureuser \
- --generate-ssh-keys
-```
-
-It takes a few minutes to create and configure all the scale set resources and VMs.
-
-## Next steps
-[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
-
-You can also create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
--- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)-
-For more information about Shared Image Galleries, see the [Overview](../virtual-machines/shared-image-galleries.md). If you run into issues, see [Troubleshooting shared image galleries](../virtual-machines/troubleshooting-shared-images.md).
virtual-machine-scale-sets Instance Generalized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version-powershell.md
- Title: Create a scale set from a generalized image with Azure PowerShell
-description: Create a scale set using a generalized image in an Azure Compute Gallery using PowerShell.
------ Previously updated : 05/04/2020----
-# Create a scale set from a generalized image using PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-
-Create a VM from a generalized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). If want to create a scale set using a specialized image, see [Create scale set instances from a specialized image](instance-specialized-image-version-powershell.md).
-
-Once you have a generalized image, you can create a virtual machine scale set using the [New-AzVmss](/powershell/module/az.compute/new-azvmss) cmdlet.
-
-In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by using the image version ID for `-ImageReferenceId`. For example, to use image version *1.0.0* type: `-ImageReferenceId "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"`.
-
-Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
--
-The following examples create a scale set named *myScaleSet*, in the *myVMSSRG* resource group, in the *SouthCentralUS* location. The scale set will be created from the *myImageDefinition* image, in the *myGallery* image gallery in the *myGalleryRG* resource group. When prompted, set your own administrative credentials for the VM instances in the scale set.
--
-## Simplified parameter set
-
-To quickly create a scale set, while providing minimal information, use the simplified parameter set to create a scale set from a Share Image Gallery image.
-
-```azurepowershell-interactive
-$imageDefinition = Get-AzGalleryImageDefinition `
- -GalleryName myGallery `
- -ResourceGroupName myGalleryRG `
- -Name myImageDefinition
-
-# Create user object
-
-$cred = Get-Credential `
- -Message "Enter a username and password for the virtual machine."
-
-# Create the resource group and scale set
-New-AzResourceGroup -ResourceGroupName myVMSSRG -Location SouthCentralUS
-New-AzVmss `
- -Credential $cred `
- -VMScaleSetName myScaleSet `
- -ImageName $imageDefinition.Id `
- -UpgradePolicyMode Automatic `
- -ResourceGroupName myVMSSRG
-```
-
-It takes a few minutes to create and configure all the scale set resources and VMs.
-
-## Extended parameter set
-
-For full control over all of the resources, including naming, use the full parameter set to create a scale set using an Azure Compute Gallery image.
-
-```azurepowershell-interactive
-# Get the image definition
-
-$imageDefinition = Get-AzGalleryImageDefinition `
- -GalleryName myGallery `
- -ResourceGroupName myGalleryRG `
- -Name myImageDefinition
-
-# Create user object
-
-$cred = Get-Credential `
- -Message "Enter a username and password for the virtual machine."
-
-# Define variables for the scale set
-$resourceGroupName = "myVMSSRG"
-$scaleSetName = "myScaleSet"
-$location = "South Central US"
-
-# Create a resource group
-New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location $location
-
-# Create a networking pieces
-$subnet = New-AzVirtualNetworkSubnetConfig `
- -Name "mySubnet" `
- -AddressPrefix 10.0.0.0/24
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $resourceGroupName `
- -Name "myVnet" `
- -Location $location `
- -AddressPrefix 10.0.0.0/16 `
- -Subnet $subnet
-$publicIP = New-AzPublicIpAddress `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -AllocationMethod Static `
- -Name "myPublicIP"
-$frontendIP = New-AzLoadBalancerFrontendIpConfig `
- -Name "myFrontEndPool" `
- -PublicIpAddress $publicIP
-$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "myBackEndPool"
-$inboundNATPool = New-AzLoadBalancerInboundNatPoolConfig `
- -Name "myRDPRule" `
- -FrontendIpConfigurationId $frontendIP.Id `
- -Protocol TCP `
- -FrontendPortRangeStart 50001 `
- -FrontendPortRangeEnd 50010 `
- -BackendPort 3389
-# Create the load balancer and health probe
-$lb = New-AzLoadBalancer `
- -ResourceGroupName $resourceGroupName `
- -Name "myLoadBalancer" `
- -Location $location `
- -FrontendIpConfiguration $frontendIP `
- -BackendAddressPool $backendPool `
- -InboundNatPool $inboundNATPool
-Add-AzLoadBalancerProbeConfig -Name "myHealthProbe" `
- -LoadBalancer $lb `
- -Protocol TCP `
- -Port 80 `
- -IntervalInSeconds 15 `
- -ProbeCount 2
-Add-AzLoadBalancerRuleConfig `
- -Name "myLoadBalancerRule" `
- -LoadBalancer $lb `
- -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
- -BackendAddressPool $lb.BackendAddressPools[0] `
- -Protocol TCP `
- -FrontendPort 80 `
- -BackendPort 80 `
- -Probe (Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $lb)
-Set-AzLoadBalancer -LoadBalancer $lb
-
-# Create IP address configurations
-$ipConfig = New-AzVmssIpConfig `
- -Name "myIPConfig" `
- -LoadBalancerBackendAddressPoolsId $lb.BackendAddressPools[0].Id `
- -LoadBalancerInboundNatPoolsId $inboundNATPool.Id `
- -SubnetId $vnet.Subnets[0].Id
-
-# Create a configuration
-$vmssConfig = New-AzVmssConfig `
- -Location $location `
- -SkuCapacity 2 `
- -SkuName "Standard_DS2" `
- -UpgradePolicyMode "Automatic"
-
-# Reference the image version
-Set-AzVmssStorageProfile $vmssConfig `
- -OsDiskCreateOption "FromImage" `
- -ImageReferenceId $imageDefinition.Id
-
-# Complete the configuration
-Set-AzVmssOsProfile $vmssConfig `
- -AdminUsername $cred.UserName `
- -AdminPassword $cred.Password `
- -ComputerNamePrefix "myVM"
-Add-AzVmssNetworkInterfaceConfiguration `
- -VirtualMachineScaleSet $vmssConfig `
- -Name "network-config" `
- -Primary $true `
- -IPConfiguration $ipConfig
-
-# Create the scale set
-New-AzVmss `
- -ResourceGroupName $resourceGroupName `
- -Name $scaleSetName `
- -VirtualMachineScaleSet $vmssConfig
-```
-
-It takes a few minutes to create and configure all the scale set resources and VMs.
-
-## Next steps
-[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
-
-You can also create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
--- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)-
-For more information about Shared Image Galleries, see the [Overview](../virtual-machines/shared-image-galleries.md). If you run into issues, see [Troubleshooting shared image galleries](../virtual-machines/troubleshooting-shared-images.md).
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
+
+ Title: Create a scale set from a generalized image
+description: Create a scale set using a generalized image in an Azure Compute Gallery.
++++++ Last updated : 04/26/2022+++
+# Create a scale set from a generalized image
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+
+Create a scale set from a generalized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). If you want to create a scale set using a specialized image version, see [Create scale set instances from a specialized image](instance-specialized-image-version-cli.md).
+
+## Create a scale set from an image in your gallery
+
+### [CLI](#tab/cli)
+
+Replace resource names as needed in this example.
+
+List the image definitions in a gallery using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list) to see the name and ID of the definitions.
+
+```azurecli-interactive
+resourceGroup=myGalleryRG
+gallery=myGallery
+az sig image-definition list \
+ --resource-group $resourceGroup \
+ --gallery-name $gallery \
+ --query "[].[name, id]" \
+ --output tsv
+```
+
+Create the scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create).
+
+Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`. Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
+
+In this example, we are creating instances from the latest version of the *myImageDefinition* image.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+az vmss create \
+ --resource-group myResourceGroup \
+ --name myScaleSet \
+ --image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \
+ --admin-username azureuser \
+ --generate-ssh-keys
+```
+
+It takes a few minutes to create and configure all the scale set resources and VMs.
+
+### [Portal](#tab/portal)
+
+Creating a scale set using an image stored in an Azure Compute Gallery is the same as creating a scale set using a Marketplace image, except that when you select an image, you select **See all images**.
++
+The **Select an image** page will open. Select **My images** if the image you want is in your own gallery, or select **Shared images** if the image has been shared with you from someone else's gallery.
++++
+### [PowerShell](#tab/powershell)
+
+The following examples create a scale set named *myScaleSet*, in the *myVMSSRG* resource group, in the *SouthCentralUS* location. The scale set will be created from the *myImageDefinition* image, in the *myGallery* image gallery in the *myGalleryRG* resource group. When prompted, set your own administrative credentials for the VM instances in the scale set.
++
+**Simplified parameter set**
+
+To quickly create a scale set while providing minimal information, use the simplified parameter set to create a scale set from an Azure Compute Gallery image.
+
+```azurepowershell-interactive
+$imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName myGallery `
+ -ResourceGroupName myGalleryRG `
+ -Name myImageDefinition
+
+# Create user object
+
+$cred = Get-Credential `
+ -Message "Enter a username and password for the virtual machine."
+
+# Create the resource group and scale set
+New-AzResourceGroup -ResourceGroupName myVMSSRG -Location SouthCentralUS
+New-AzVmss `
+ -Credential $cred `
+ -VMScaleSetName myScaleSet `
+ -ImageName $imageDefinition.Id `
+ -UpgradePolicyMode Automatic `
+ -ResourceGroupName myVMSSRG
+```
+
+It takes a few minutes to create and configure all the scale set resources and VMs.
+
+**Extended parameter set**
+
+For full control over all of the resources, including naming, use the full parameter set to create a scale set using an Azure Compute Gallery image.
+
+```azurepowershell-interactive
+# Get the image definition
+
+$imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName myGallery `
+ -ResourceGroupName myGalleryRG `
+ -Name myImageDefinition
+
+# Create user object
+
+$cred = Get-Credential `
+ -Message "Enter a username and password for the virtual machine."
+
+# Define variables for the scale set
+$resourceGroupName = "myVMSSRG"
+$scaleSetName = "myScaleSet"
+$location = "South Central US"
+
+# Create a resource group
+New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location $location
+
+# Create networking pieces
+$subnet = New-AzVirtualNetworkSubnetConfig `
+ -Name "mySubnet" `
+ -AddressPrefix 10.0.0.0/24
+$vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $resourceGroupName `
+ -Name "myVnet" `
+ -Location $location `
+ -AddressPrefix 10.0.0.0/16 `
+ -Subnet $subnet
+$publicIP = New-AzPublicIpAddress `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -AllocationMethod Static `
+ -Name "myPublicIP"
+$frontendIP = New-AzLoadBalancerFrontendIpConfig `
+ -Name "myFrontEndPool" `
+ -PublicIpAddress $publicIP
+$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "myBackEndPool"
+$inboundNATPool = New-AzLoadBalancerInboundNatPoolConfig `
+ -Name "myRDPRule" `
+ -FrontendIpConfigurationId $frontendIP.Id `
+ -Protocol TCP `
+ -FrontendPortRangeStart 50001 `
+ -FrontendPortRangeEnd 50010 `
+ -BackendPort 3389
+# Create the load balancer and health probe
+$lb = New-AzLoadBalancer `
+ -ResourceGroupName $resourceGroupName `
+ -Name "myLoadBalancer" `
+ -Location $location `
+ -FrontendIpConfiguration $frontendIP `
+ -BackendAddressPool $backendPool `
+ -InboundNatPool $inboundNATPool
+Add-AzLoadBalancerProbeConfig -Name "myHealthProbe" `
+ -LoadBalancer $lb `
+ -Protocol TCP `
+ -Port 80 `
+ -IntervalInSeconds 15 `
+ -ProbeCount 2
+Add-AzLoadBalancerRuleConfig `
+ -Name "myLoadBalancerRule" `
+ -LoadBalancer $lb `
+ -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
+ -BackendAddressPool $lb.BackendAddressPools[0] `
+ -Protocol TCP `
+ -FrontendPort 80 `
+ -BackendPort 80 `
+ -Probe (Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $lb)
+Set-AzLoadBalancer -LoadBalancer $lb
+
+# Create IP address configurations
+$ipConfig = New-AzVmssIpConfig `
+ -Name "myIPConfig" `
+ -LoadBalancerBackendAddressPoolsId $lb.BackendAddressPools[0].Id `
+ -LoadBalancerInboundNatPoolsId $inboundNATPool.Id `
+ -SubnetId $vnet.Subnets[0].Id
+
+# Create a configuration
+$vmssConfig = New-AzVmssConfig `
+ -Location $location `
+ -SkuCapacity 2 `
+ -SkuName "Standard_DS2" `
+ -UpgradePolicyMode "Automatic"
+
+# Reference the image version
+Set-AzVmssStorageProfile $vmssConfig `
+ -OsDiskCreateOption "FromImage" `
+ -ImageReferenceId $imageDefinition.Id
+
+# Complete the configuration
+Set-AzVmssOsProfile $vmssConfig `
+ -AdminUsername $cred.UserName `
+ -AdminPassword $cred.Password `
+ -ComputerNamePrefix "myVM"
+Add-AzVmssNetworkInterfaceConfiguration `
+ -VirtualMachineScaleSet $vmssConfig `
+ -Name "network-config" `
+ -Primary $true `
+ -IPConfiguration $ipConfig
+
+# Create the scale set
+New-AzVmss `
+ -ResourceGroupName $resourceGroupName `
+ -Name $scaleSetName `
+ -VirtualMachineScaleSet $vmssConfig
+```
+
+It takes a few minutes to create and configure all the scale set resources and VMs.
+++
+## Create a scale set from an image in a community gallery
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> Microsoft does not provide support for images in the [community gallery](../virtual-machines/azure-compute-gallery.md#community).
+>
+> You can create scale sets from images in the community gallery, but if the image is removed at a later time, you won't be able to scale up. To ensure you have long-term access to the image, you should consider creating an image in your own gallery from a VM created using the community gallery image that you want to use for your scale set. For more information, see [Create an image definition and an image version](../virtual-machines/image-version.md).
+
+As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
+
+### [CLI](#tab/cli2)
+
+If you choose to install and use the CLI locally, the community gallery requires that you are running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+Replace resource names as needed in this example.
+
+> [!NOTE]
+> As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
+
+To create a VM using an image shared to a community gallery, use the unique ID of the image for the `--image` parameter, which will be in the following format:
+
+```
+/CommunityGalleries/<community gallery name>/Images/<image name>/Versions/latest
+```
+
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImages* gallery in *West US*, showing each image's name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+ az sig image-definition list-community \
+ --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+ --location westus \
+ --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+Create the scale set by setting the `--image` parameter to the unique ID of the image in the community gallery. In this example, we are creating a `Flexible` scale set.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+
+imgDef="/CommunityGalleries/ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f/Images/myLinuxImage/Versions/latest"
+
+az vmss create \
+ --resource-group myResourceGroup \
+ --name myScaleSet \
+ --image $imgDef \
+  --orchestration-mode Flexible \
+ --admin-username azureuser \
+ --generate-ssh-keys
+```
+
+When using a community image, you'll be prompted to accept the legal terms. The message will look like this:
+
+```output
+To create the scale set from community gallery image, you must accept the license agreement and privacy statement: http://contoso.com. (If you want to accept the legal terms by default, please use the option '--accept-term' when creating VM/VMSS) (Y/n):
+```
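+
+If you want to skip the interactive prompt, the message above mentions the `--accept-term` option; a minimal sketch follows (assuming the flag name shown in the prompt):
+
+```azurecli
+az vmss create \
+  --resource-group myResourceGroup \
+  --name myScaleSet \
+  --image $imgDef \
+  --orchestration-mode Flexible \
+  --admin-username azureuser \
+  --generate-ssh-keys \
+  --accept-term
+```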
+### [Portal](#tab/portal2)
+
+Creating a scale set using an image from the community gallery is the same as creating a scale set using a Marketplace image, except that when you select an image, you select **See all images** instead.
++
+The **Select an image** page will open. Select **Community images (PREVIEW)** to see the list of images available in the community gallery.
++++
+## Next steps
+[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation; you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
+
+You can also create Azure Compute Gallery resources using templates. There are several Azure Quickstart Templates available:
+
+- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)
+- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
+- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
+
+For more information about Shared Image Galleries, see the [Overview](../virtual-machines/shared-image-galleries.md). If you run into issues, see [Troubleshooting shared image galleries](../virtual-machines/troubleshooting-shared-images.md).
virtual-machine-scale-sets Instance Specialized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version-cli.md
- Title: Create a scale set from a specialized image version using the Azure CLI
-description: Create a scale set using a specialized image version in an Azure Compute Gallery using the Azure CLI.
------ Previously updated : 05/01/2020----
-# Create a scale set using a specialized image version with the Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-
-Create a scale set from a [specialized image version](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery. If you want to create a scale set using a generalized image version, see [Create a scale set from a generalized image](instance-generalized-image-version-cli.md).
-
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.4.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
-
-Replace resource names as needed in this example.
-
-List the image definitions in a gallery using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list) to see the name and ID of the definitions.
-
-```azurecli-interactive
-resourceGroup=myGalleryRG
-gallery=myGallery
-az sig image-definition list \
- --resource-group $resourceGroup \
- --gallery-name $gallery \
- --query "[].[name, id]" \
- --output tsv
-```
-
-Create a scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create) using the `--specialized` parameter to indicate the image is a specialized image.
-
-Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`. Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
-
-In this example, we are creating instances from the latest version of the *myImageDefinition* image.
-
-```azurecli
-az group create --name myResourceGroup --location eastus
-az vmss create \
- --resource-group myResourceGroup \
- --name myScaleSet \
- --image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \
- --specialized
-```
--
-## Next steps
-[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
-
-You can also create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
--- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
virtual-machine-scale-sets Instance Specialized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version-powershell.md
- Title: Create a scale set from a specialized image
-description: Create a scale set using a specialized image in an Azure Compute Gallery.
------ Previously updated : 05/04/2020----
-# Create a scale set from a specialized image using PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-
-Create a VM from a specialized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md) using Azure PowerShell. If want to create a scale set using a generalized image version, see [Create scale set instances from a generalized image version](instance-generalized-image-version-powershell.md).
-
-Once you have a specialized image in your gallery, you can create a virtual machine scale set using the [New-AzVmss](/powershell/module/az.compute/new-azvmss) cmdlet.
-
-In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by using the image version ID for `-ImageReferenceId`. For example, to use image version *1.0.0* type: `-ImageReferenceId "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"`.
-
-Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
-
-The following examples create a scale set named *myScaleSet*, in the *myVMSSRG* resource group, in the *SouthCentralUS* location. The scale set will be created from the *myImageDefinition* image, in the *myGallery* image gallery in the *myGalleryRG* resource group. When prompted, set your own administrative credentials for the VM instances in the scale set.
---
-```azurepowershell-interactive
-# Get the image definition
-
-$imageDefinition = Get-AzGalleryImageDefinition `
- -GalleryName myGallery `
- -ResourceGroupName myGalleryRG `
- -Name myImageDefinition
-
-# Define variables for the scale set
-$resourceGroupName = "myVMSSRG"
-$scaleSetName = "myScaleSet"
-$location = "South Central US"
-
-# Create a resource group
-New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location $location
-
-# Create a networking pieces
-$subnet = New-AzVirtualNetworkSubnetConfig `
- -Name "mySubnet" `
- -AddressPrefix 10.0.0.0/24
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $resourceGroupName `
- -Name "myVnet" `
- -Location $location `
- -AddressPrefix 10.0.0.0/16 `
- -Subnet $subnet
-$publicIP = New-AzPublicIpAddress `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -AllocationMethod Static `
- -Name "myPublicIP"
-$frontendIP = New-AzLoadBalancerFrontendIpConfig `
- -Name "myFrontEndPool" `
- -PublicIpAddress $publicIP
-$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "myBackEndPool"
-$inboundNATPool = New-AzLoadBalancerInboundNatPoolConfig `
- -Name "myRDPRule" `
- -FrontendIpConfigurationId $frontendIP.Id `
- -Protocol TCP `
- -FrontendPortRangeStart 50001 `
- -FrontendPortRangeEnd 50010 `
- -BackendPort 3389
-# Create the load balancer and health probe
-$lb = New-AzLoadBalancer `
- -ResourceGroupName $resourceGroupName `
- -Name "myLoadBalancer" `
- -Location $location `
- -FrontendIpConfiguration $frontendIP `
- -BackendAddressPool $backendPool `
- -InboundNatPool $inboundNATPool
-Add-AzLoadBalancerProbeConfig -Name "myHealthProbe" `
- -LoadBalancer $lb `
- -Protocol TCP `
- -Port 80 `
- -IntervalInSeconds 15 `
- -ProbeCount 2
-Add-AzLoadBalancerRuleConfig `
- -Name "myLoadBalancerRule" `
- -LoadBalancer $lb `
- -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
- -BackendAddressPool $lb.BackendAddressPools[0] `
- -Protocol TCP `
- -FrontendPort 80 `
- -BackendPort 80 `
- -Probe (Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $lb)
-Set-AzLoadBalancer -LoadBalancer $lb
-
-# Create IP address configurations
-$ipConfig = New-AzVmssIpConfig `
- -Name "myIPConfig" `
- -LoadBalancerBackendAddressPoolsId $lb.BackendAddressPools[0].Id `
- -LoadBalancerInboundNatPoolsId $inboundNATPool.Id `
- -SubnetId $vnet.Subnets[0].Id
-
-# Create a configuration
-$vmssConfig = New-AzVmssConfig `
- -Location $location `
- -SkuCapacity 2 `
- -SkuName "Standard_DS2" `
- -UpgradePolicyMode "Automatic"
-
-# Reference the image version
-Set-AzVmssStorageProfile $vmssConfig `
- -OsDiskCreateOption "FromImage" `
- -ImageReferenceId $imageDefinition.Id
-
-# Complete the configuration
-
-Add-AzVmssNetworkInterfaceConfiguration `
- -VirtualMachineScaleSet $vmssConfig `
- -Name "network-config" `
- -Primary $true `
- -IPConfiguration $ipConfig
-
-# Create the scale set
-New-AzVmss `
- -ResourceGroupName $resourceGroupName `
- -Name $scaleSetName `
- -VirtualMachineScaleSet $vmssConfig
-```
-
-It takes a few minutes to create and configure all the scale set resources and VMs.
-
-## Next steps
-[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
-
-You can also create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
--- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)-- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)-- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)-
-For more information about Shared Image Galleries, see the [Overview](../virtual-machines/shared-image-galleries.md). If you run into issues, see [Troubleshooting shared image galleries](../virtual-machines/troubleshooting-shared-images.md).
virtual-machine-scale-sets Instance Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version.md
+
+ Title: Create a scale set from a specialized image version using the Azure CLI
+description: Create a scale set using a specialized image version in an Azure Compute Gallery using the Azure CLI.
++++++ Last updated : 04/26/2022++++
+# Create a scale set using a specialized image version with the Azure CLI
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+
+Create a scale set from a [specialized image version](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery. If you want to create a scale set using a generalized image version, see [Create a scale set from a generalized image](instance-generalized-image-version-cli.md).
+
+> [!IMPORTANT]
+>
+> When you create a new scale set from a specialized image, the VMs retain the computer name of the original VM. Other computer-specific information, like the CMID, is also kept. This duplicate information can cause issues. When using a specialized image, be aware of what types of computer-specific information your applications rely on.
++
+Replace resource names as needed in these examples.
+
+## Create a scale set from your gallery
+### [Portal](#tab/portal)
+
+Creating a scale set using an image stored in an Azure Compute Gallery is the same as creating a scale set using a Marketplace image, except that when you select an image, you select **See all images** instead.
++
+The **Select an image** page will open. Select **My images** if the image you want is in your own gallery, or select **Shared images** if the image has been shared with you from someone else's gallery.
+
+### [CLI](#tab/cli)
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+List the image definitions in a gallery using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list) to see the name and ID of the definitions.
+
+```azurecli-interactive
+resourceGroup=myGalleryRG
+gallery=myGallery
+az sig image-definition list \
+ --resource-group $resourceGroup \
+ --gallery-name $gallery \
+ --query "[].[name, id]" \
+ --output tsv
+```
+
+Create a scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create) using the `--specialized` parameter to indicate the image is a specialized image.
+
+Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`. Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
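+
+When a specific version is required for a specialized image, a sketch pinning the scale set to version *1.0.0* would look like the following (the version number is hypothetical; resource names are placeholders):
+
+```azurecli
+az vmss create \
+  --resource-group myResourceGroup \
+  --name myScaleSet \
+  --image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0" \
+  --specialized
+```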
+
+In this example, we are creating instances from the latest version of the *myImageDefinition* image.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+az vmss create \
+ --resource-group myResourceGroup \
+ --name myScaleSet \
+ --image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \
+ --specialized
+```
+### [PowerShell](#tab/powershell)
+
+In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by using the image version ID for `-ImageReferenceId`. For example, to use image version *1.0.0* type: `-ImageReferenceId "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"`.
+
+Be aware that using a specific image version means automation could fail if that specific image version isn't available because it was deleted or removed from the region. We recommend using the image definition ID for creating your new VM, unless a specific image version is required.
+
+The following example creates a scale set named *myScaleSet*, in the *myVMSSRG* resource group, in the *SouthCentralUS* location. The scale set will be created from the *myImageDefinition* image, in the *myGallery* image gallery in the *myGalleryRG* resource group. When prompted, set your own administrative credentials for the VM instances in the scale set.
+++
+```azurepowershell-interactive
+# Get the image definition
+
+$imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName myGallery `
+ -ResourceGroupName myGalleryRG `
+ -Name myImageDefinition
+
+# Define variables for the scale set
+$resourceGroupName = "myVMSSRG"
+$scaleSetName = "myScaleSet"
+$location = "South Central US"
+
+# Create a resource group
+New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location $location
+
+# Create networking pieces
+$subnet = New-AzVirtualNetworkSubnetConfig `
+ -Name "mySubnet" `
+ -AddressPrefix 10.0.0.0/24
+$vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $resourceGroupName `
+ -Name "myVnet" `
+ -Location $location `
+ -AddressPrefix 10.0.0.0/16 `
+ -Subnet $subnet
+$publicIP = New-AzPublicIpAddress `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -AllocationMethod Static `
+ -Name "myPublicIP"
+$frontendIP = New-AzLoadBalancerFrontendIpConfig `
+ -Name "myFrontEndPool" `
+ -PublicIpAddress $publicIP
+$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "myBackEndPool"
+$inboundNATPool = New-AzLoadBalancerInboundNatPoolConfig `
+ -Name "myRDPRule" `
+ -FrontendIpConfigurationId $frontendIP.Id `
+ -Protocol TCP `
+ -FrontendPortRangeStart 50001 `
+ -FrontendPortRangeEnd 50010 `
+ -BackendPort 3389
+# Create the load balancer and health probe
+$lb = New-AzLoadBalancer `
+ -ResourceGroupName $resourceGroupName `
+ -Name "myLoadBalancer" `
+ -Location $location `
+ -FrontendIpConfiguration $frontendIP `
+ -BackendAddressPool $backendPool `
+ -InboundNatPool $inboundNATPool
+Add-AzLoadBalancerProbeConfig -Name "myHealthProbe" `
+ -LoadBalancer $lb `
+ -Protocol TCP `
+ -Port 80 `
+ -IntervalInSeconds 15 `
+ -ProbeCount 2
+Add-AzLoadBalancerRuleConfig `
+ -Name "myLoadBalancerRule" `
+ -LoadBalancer $lb `
+ -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
+ -BackendAddressPool $lb.BackendAddressPools[0] `
+ -Protocol TCP `
+ -FrontendPort 80 `
+ -BackendPort 80 `
+ -Probe (Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $lb)
+Set-AzLoadBalancer -LoadBalancer $lb
+
+# Create IP address configurations
+$ipConfig = New-AzVmssIpConfig `
+ -Name "myIPConfig" `
+ -LoadBalancerBackendAddressPoolsId $lb.BackendAddressPools[0].Id `
+ -LoadBalancerInboundNatPoolsId $inboundNATPool.Id `
+ -SubnetId $vnet.Subnets[0].Id
+
+# Create a configuration
+$vmssConfig = New-AzVmssConfig `
+ -Location $location `
+ -SkuCapacity 2 `
+ -SkuName "Standard_DS2" `
+ -UpgradePolicyMode "Automatic"
+
+# Reference the image version
+Set-AzVmssStorageProfile $vmssConfig `
+ -OsDiskCreateOption "FromImage" `
+ -ImageReferenceId $imageDefinition.Id
+
+# Complete the configuration
+
+Add-AzVmssNetworkInterfaceConfiguration `
+ -VirtualMachineScaleSet $vmssConfig `
+ -Name "network-config" `
+ -Primary $true `
+ -IPConfiguration $ipConfig
+
+# Create the scale set
+New-AzVmss `
+ -ResourceGroupName $resourceGroupName `
+ -Name $scaleSetName `
+ -VirtualMachineScaleSet $vmssConfig
+```
+
+It takes a few minutes to create and configure all the scale set resources and VMs.
++
+## Create a scale set from an image in a community gallery
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> Microsoft does not provide support for images in the [community gallery](../virtual-machines/azure-compute-gallery.md#community).
+>
+> You can create scale sets from images in the community gallery, but if the image is removed at a later time, you won't be able to scale up. To ensure you have long-term access to the image, you should consider creating an image in your own gallery from a VM created using the community gallery image that you want to use for your scale set. For more information, see [Create an image definition and an image version](../virtual-machines/image-version.md).
++
+As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
+
+Replace resource names as needed in these examples.
+### [CLI](#tab/cli2)
+
+If you choose to install and use the CLI locally, the community gallery requires that you are running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+To create a VM using an image shared to a community gallery, use the unique ID of the image for the `--image` parameter, which will be in the following format:
+
+```
+/CommunityGalleries/<community gallery name>/Images/<image name>/Versions/latest
+```
+
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImages* gallery in *West US*, showing each image's name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+ az sig image-definition list-community \
+ --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+ --location westus \
+ --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+Create the scale set by setting the `--image` parameter to the unique ID of the image in the community gallery. In this example, we are creating a `Flexible` scale set.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+
+imgDef="/CommunityGalleries/ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f>/Images/myLinuxImage/Versions/latest"
+
+az vmss create \
+ --resource-group myResourceGroup \
+ --name myScaleSet \
+ --image $imgDef \
+ --orchestration-mode Flexible
+```
+
+When using a community image, you'll be prompted to accept the legal terms. The message will look like this:
+
+```output
+To create the scale set from community gallery image, you must accept the license agreement and privacy statement: http://contoso.com. (If you want to accept the legal terms by default, please use the option '--accept-term' when creating VM/VMSS) (Y/n):
+```
+### [Portal](#tab/portal2)
+
+Creating a scale set using an image from the community gallery is the same as creating a scale set using a Marketplace image, except that when you select an image, you select **See all images** instead.
++
+The **Select an image** page will open. Select **Community images (PREVIEW)** to see the list of images available in the community gallery.
+++
+## Next steps
+[Azure Image Builder (preview)](../virtual-machines/image-builder-overview.md) can help automate image version creation; you can even use it to update and [create a new image version from an existing image version](../virtual-machines/linux/image-builder-gallery-update-image-version.md).
+
+You can also create Azure Compute Gallery resources using templates. There are several Azure Quickstart Templates available:
+
+- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/)
+- [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
+- [Create an Image Version in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The availability-first model for platform orchestrated updates described below e
- All VMs in a common scale set are not updated concurrently.
- VMs in a common virtual machine scale set are grouped in batches and updated within Update Domain boundaries as described below.
-The platform orchestrated updates process is followed for rolling out supported OS platform image upgrades every month. For custom images through Azure Compute Gallery, an image upgrade is only kicked off for a particular Azure region when the new image is published and [replicated](../virtual-machines/shared-image-galleries.md#replication) to the region of that scale set.
+The platform orchestrated updates process is followed for rolling out supported OS platform image upgrades every month. For custom images through Azure Compute Gallery, an image upgrade is only kicked off for a particular Azure region when the new image is published and [replicated](../virtual-machines/azure-compute-gallery.md#replication) to the region of that scale set.
### Upgrading VMs in a scale set
The region of a scale set becomes eligible to get image upgrades either through the availability-first process for platform images or by replicating new custom image versions for Shared Image Gallery. The image upgrade is then applied to an individual scale set in a batched manner as follows:
-1. Before beginning the upgrade process, the orchestrator will ensure that no more than 20% of instances in the entire scale set are unhealthy (for any reason).
+1. Before the upgrade process begins, the orchestrator will ensure that no more than 20% of instances in the entire scale set are unhealthy (for any reason).
2. The upgrade orchestrator identifies the batch of VM instances to upgrade, with any one batch having a maximum of 20% of the total instance count, subject to a minimum batch size of one virtual machine. There is no minimum scale set size requirement, and scale sets with 5 or fewer instances will have 1 VM per upgrade batch (minimum batch size).
3. The OS disk of every VM in the selected upgrade batch is replaced with a new OS disk created from the latest image. All specified extensions and configurations in the scale set model are applied to the upgraded instance.
4. For scale sets with configured application health probes or Application Health extension, the upgrade waits up to 5 minutes for the instance to become healthy, before moving on to upgrade the next batch. If an instance does not recover its health in 5 minutes after an upgrade, then by default the previous OS disk for the instance is restored.
5. The upgrade orchestrator also tracks the percentage of instances that become unhealthy post an upgrade. The upgrade will stop if more than 20% of upgraded instances become unhealthy during the upgrade process.
6. The above process continues until all instances in the scale set have been upgraded.
-The scale set OS upgrade orchestrator checks for the overall scale set health before upgrading every batch. While upgrading a batch, there could be other concurrent planned or unplanned maintenance activities that could impact the health of your scale set instances. In such cases if more than 20% of the scale set's instances become unhealthy, then the scale set upgrade stops at the end of current batch.
+The scale set OS upgrade orchestrator checks for the overall scale set health before upgrading every batch. While a batch is being upgraded, there could be other concurrent planned or unplanned maintenance activities that could impact the health of your scale set instances. In such cases, if more than 20% of the scale set's instances become unhealthy, then the scale set upgrade stops at the end of the current batch.
> [!NOTE]
> Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 16.04-LTS to 18.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
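
As a minimal sketch, automatic OS image upgrade is enabled on the scale set model; the exact property path below follows the configuration section referenced on this page and should be treated as an assumption:

```azurecli
# Enable automatic OS image upgrades on an existing scale set
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --set upgradePolicy.automaticOSUpgradePolicy.enableAutomaticOSUpgrade=true
```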
Automatic OS image upgrade is supported for custom images deployed through [Azur
### Additional requirements for custom images - The setup and configuration process for automatic OS image upgrade is the same for all scale sets as detailed in the [configuration section](virtual-machine-scale-sets-automatic-upgrade.md#configure-automatic-os-image-upgrade) of this page.-- Scale sets instances configured for automatic OS image upgrades will be upgraded to the latest version of the Azure Compute Gallery image when a new version of the image is published and [replicated](../virtual-machines/shared-image-galleries.md#replication) to the region of that scale set. If the new image is not replicated to the region where the scale is deployed, the scale set instances will not be upgraded to the latest version. Regional image replication allows you to control the rollout of the new image for your scale sets.
+- Scale set instances configured for automatic OS image upgrades will be upgraded to the latest version of the Azure Compute Gallery image when a new version of the image is published and [replicated](../virtual-machines/azure-compute-gallery.md#replication) to the region of that scale set. If the new image is not replicated to the region where the scale set is deployed, the scale set instances will not be upgraded to the latest version. Regional image replication allows you to control the rollout of the new image for your scale sets.
- The new image version should not be excluded from the latest version for that gallery image. Image versions excluded from the gallery image's latest version are not rolled out to the scale set through automatic OS image upgrade. > [!NOTE]
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
+
+ Title: Overview of Azure Compute Gallery
+description: Learn about the Azure Compute Gallery and how to share Azure resources.
+++++ Last updated : 04/26/2022++++
+# Store and share resources in an Azure Compute Gallery
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+An Azure Compute Gallery helps you build structure and organization around your Azure resources, like images and [applications](vm-applications.md). An Azure Compute Gallery provides:
+
+- Global replication.
+- Versioning and grouping of resources for easier management.
+- Highly available resources with Zone Redundant Storage (ZRS) accounts in regions that support Availability Zones. ZRS offers better resilience against zonal failures.
+- Premium storage support (Premium_LRS).
+- Sharing to the community, across subscriptions, and between Active Directory (AD) tenants.
+- Scaling your deployments with resource replicas in each region.
+
+With a gallery, you can share your resources with everyone, or limit sharing to different users, service principals, or AD groups within your organization. Resources can be replicated to multiple regions for quicker scaling of your deployments.
++
+## Images
+
+For more information about storing images in an Azure Compute Gallery, see [Store and share images in an Azure Compute Gallery](shared-image-galleries.md).
+
+## VM apps
+
+While you can create an image of a VM with apps pre-installed, you would need to update your image each time you have application changes. Separating your application installation from your VM images means there's no need to publish a new image for every line of code change.
+
+For more information about storing applications in an Azure Compute Gallery, see [VM Applications](vm-applications.md).
++
+## Regional Support
+
+All public regions can be target regions, but certain regions require that customers go through a request process in order to gain access. To request that a subscription is added to the allowlist for a region such as Australia Central or Australia Central 2, submit [an access request](/troubleshoot/azure/general/region-access-request-process).
+
+## Limits
+
+There are limits, per subscription, for deploying resources using Azure Compute Galleries:
+- 100 galleries, per subscription, per region
+- 1,000 image definitions, per subscription, per region
+- 10,000 image versions, per subscription, per region
+- 10 image version replicas, per subscription, per region
+- Any disk attached to the image must be less than or equal to 1 TB in size
+
+For examples of how to check your current usage against these limits, see [Check resource usage against limits](../networking/check-usage-against-limits.md).
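+
+As a rough self-check against the per-subscription gallery limit, you could count your galleries with the CLI. This is a sketch; the JMESPath query and the shell pipeline for the per-region breakdown are illustrative assumptions:
+
+```azurecli
+# Count all galleries in the current subscription
+az sig list --query "length(@)"
+
+# Show gallery counts per region, since the limits apply per region
+az sig list --query "[].location" --output tsv | sort | uniq -c
+```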
+
+## Scaling
+Azure Compute Gallery allows you to specify the number of replicas you want to keep. This helps in multi-VM deployment scenarios, because VM deployments can be spread across different replicas, reducing the chance that instance creation is throttled due to overloading of a single replica.
+
+With Azure Compute Gallery, you can deploy up to 1,000 VM instances in a virtual machine scale set. You can set a different replica count in each target region, based on the scale needs for the region. Since each replica is a copy of your resource, this helps scale your deployments linearly with each extra replica. While we understand no two resources or regions are the same, here's our general guideline on how to use replicas in a region:
+
+- For every 20 VMs that you create concurrently, we recommend you keep one replica. For example, if you are creating 120 VMs concurrently using the same image in a region, we suggest you keep at least 6 replicas of your image.
+- For each scale set you create concurrently, we recommend you keep one replica.
+
+We always recommend over-provisioning the number of replicas to account for factors like resource size, content, and OS type.
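+
+Following that guideline, here is a sketch of creating an image version with six replicas in its source region, sized for roughly 120 concurrent VM creations (the resource names and the source managed image are placeholders):
+
+```azurecli
+az sig image-version create \
+  --resource-group myGalleryRG \
+  --gallery-name myGallery \
+  --gallery-image-definition myImageDefinition \
+  --gallery-image-version 1.0.0 \
+  --managed-image "/subscriptions/<Subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/myImage" \
+  --replica-count 6
+```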
+
+![Graphic showing how you can scale images](./media/shared-image-galleries/scaling.png)
+
+## High availability
+
+[Azure Zone Redundant Storage (ZRS)](https://azure.microsoft.com/blog/azure-zone-redundant-storage-in-public-preview/) provides resilience against an Availability Zone failure in the region. With the general availability of Azure Compute Gallery, you can choose to store your images in ZRS accounts in regions with Availability Zones.
+
+You can also choose the account type for each of the target regions. The default storage account type is Standard_LRS, but you can choose Standard_ZRS for regions with Availability Zones. For more information on regional availability of ZRS, see [Data redundancy](../storage/common/storage-redundancy.md).
+
+![Graphic showing ZRS](./media/shared-image-galleries/zrs.png)
+
+## Replication
+Azure Compute Gallery also allows you to replicate your resources to other Azure regions automatically. Each image version can be replicated to different regions depending on what makes sense for your organization. One example is to always replicate the latest image to multiple regions while keeping all older image versions available in only one region. This can help save on storage costs.
+
+The regions that a resource is replicated to can be updated after creation time. The time it takes to replicate to different regions depends on the amount of data being copied and the number of regions the version is replicated to. This can take a few hours in some cases. While the replication is happening, you can view the status of replication per region. Once the image replication is complete in a region, you can then deploy a VM or scale set using that resource in the region.
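+
+The per-region replica count and storage account type are set when you create or update an image version; here is a sketch using `--target-regions` (the `region=replicaCount=storageAccountType` value format is an assumption based on the CLI reference):
+
+```azurecli
+# Replicate version 1.0.0 to two regions, using ZRS storage in eastus2
+az sig image-version update \
+  --resource-group myGalleryRG \
+  --gallery-name myGallery \
+  --gallery-image-definition myImageDefinition \
+  --gallery-image-version 1.0.0 \
+  --target-regions "southcentralus=2" "eastus2=1=standard_zrs"
+```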
+
+![Graphic showing how you can replicate images](./media/shared-image-galleries/replication.png)
+
+<a name="community"></a>
+## Community gallery (preview)
++
+> [!IMPORTANT]
+> Azure Compute Gallery ΓÇô community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To share images in the community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs and scale sets from images shared to the community gallery is open to all Azure users.
++
+Sharing images to the community is a new capability in Azure Compute Gallery. In the preview, you can make your image galleries public and share them with all Azure customers. When a gallery is marked as a community gallery, all images under the gallery become available to all Azure customers as a new resource type under Microsoft.Compute/communityGalleries. All Azure customers can see the galleries and use them to create VMs. Your original resources of the type `Microsoft.Compute/galleries` are still under your subscription, and private.
+
+### Why share to the community?
+
+As a content publisher, you might want to share a gallery to the community if:
+
+- You have non-commercial, non-proprietary content to share widely on Azure.
+
+- You want greater control over the number of versions, regions, and the duration of image availability.
+
+- You want to quickly share daily or nightly builds publicly with your customers, without the overhead that comes with publishing on Azure Marketplace.
+
+### How sharing with the community works
+
+You [create a gallery resource](create-gallery.md#create-a-community-gallery-preview) under `Microsoft.Compute/Galleries` and choose `community` as a sharing option.
+
+When you are ready, you flag your gallery as ready to be shared publicly. Only the owner of a subscription, or a user or service principal with the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can enable a gallery to go public to the community. At this point, the Azure infrastructure creates proxy read-only regional resources, under `Microsoft.Compute/CommunityGalleries`, which are public.
+
+End users can only interact with the proxy resources; they never interact with your private resources. As the publisher of the private resource, you should consider the private resource as your handle to the public proxy resources. The `prefix` you provide when you create the gallery will be used, along with a unique GUID, to create the public-facing name for your gallery.
+
+Azure users can see the latest image versions shared to the community in the portal, or query for them using the CLI. Only the latest version of an image is listed in the community gallery.
+
+When creating a community gallery, you will need to provide contact information for your images. This information will be shown **publicly**, so be careful when providing it:
+- Community gallery prefix
+- Publisher support email
+- Publisher URL
+- Legal agreement URL
+
+Information from your image definitions will also be publicly available, like what you provide for **Publisher**, **Offer**, and **SKU**.
+
+> [!WARNING]
+> If you want to stop sharing a gallery publicly, you can update the gallery to stop sharing, but making the gallery private will prevent existing virtual machine scale set users from scaling their resources.
+>
+> If you stop sharing your gallery during the preview, you won't be able to re-share it.
++
+### Limitations for images shared to the community
+
+There are some limitations for sharing your gallery to the community:
+- Encrypted images aren't supported.
+- For the preview, image resources need to be created in the same region as the gallery. For example, if you create a gallery in West US, the image definitions and image versions should be created in West US if you want to make them available during the public preview.
+- For the preview, you can't share [VM Applications](vm-applications.md) to the community.
+- The gallery must be created as a community gallery. For the preview, there is no way to migrate an existing gallery to be a community gallery.
+- To find images shared to the community from the Azure portal, you need to go through the VM create or scale set creation pages. You can't search the portal or Azure Marketplace for the images.
+
+> [!IMPORTANT]
+> Microsoft does not provide support for images you share to the community.
+
+### Community-shared images FAQ
+
+**Q: What are the charges for using a gallery that is shared to the community?**
+
+**A**: There are no charges for using the service itself. However, content publishers would be charged for the following:
+- Storage charges for application versions and replicas in each of the regions (source and target). These charges are based on the storage account type chosen.
+- Network egress charges for replication across regions.
+
+**Q: Is it safe to use images shared to the community?**
+
+**A**: Users should exercise caution while using images from non-verified sources, since these images are not subject to Azure certification.
+
+**Q: If an image that is shared to the community doesn't work, who do I contact for support?**
+
+**A**: Azure is not responsible for any issues users might encounter with community-shared images. Support is provided by the image publisher. Look up the publisher contact information for the image and reach out to them for any support.
++
+**Q: I have concerns about an image, who do I contact?**
+
+**A**: For issues with images shared to the community:
+- To report malicious images, contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
+- To report images that potentially violate intellectual property rights, contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
+
+
+**Q: How do I request that an image shared to the community be replicated to a specific region?**
+
+**A**: Only the content publishers have control over the regions their images are available in. If you don't find an image in a specific region, reach out to the publisher directly.
++
+## Explicit sharing using RBAC roles
+
+As the Azure Compute Gallery, definition, and version are all resources, they can be shared using the built-in Azure role-based access control (Azure RBAC) roles. Using Azure RBAC roles, you can share these resources with other users, service principals, and groups. You can even share access with individuals outside of the tenant they were created within. Once a user has access to the resource version, they can use it to deploy a VM or a virtual machine scale set. Here is the sharing matrix that helps you understand what the user gets access to:
+
+| Resource shared with the user | Azure Compute Gallery | Image Definition | Image Version |
+|-|-|--|-|
+| Azure Compute Gallery | Yes | Yes | Yes |
+| Image Definition | No | Yes | Yes |
+
+We recommend sharing at the gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
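+
+A minimal sketch of sharing at the gallery level by assigning the built-in Reader role at the gallery scope (the user and resource names are placeholders):
+
+```azurecli
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Reader" \
+  --scope "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery"
+```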
++
+## Billing
+There is no extra charge for using the Azure Compute Gallery service. You will be charged for the following resources:
+- Storage costs of storing each replica. For images, the storage cost is charged as a snapshot and is based on the occupied size of the image version, the number of replicas of the image version, and the number of regions the version is replicated to.
+- Network egress charges for replication of the first resource version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no additional charges.
+
+For example, let's say you have an image of a 127 GB OS disk that occupies only 10 GB of storage, and one empty 32 GB data disk. The occupied size of each image would only be 10 GB. The image is replicated to 3 regions and each region has two replicas. There will be six total snapshots, each using 10 GB. You will be charged the storage cost for each snapshot based on the occupied size of 10 GB. You will pay network egress charges for the first replica to be copied to the additional two regions. For more information on the pricing of snapshots in each region, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
++
+## SDK support
+
+The following SDKs support creating Azure Compute Galleries:
+
+- [.NET](/dotnet/api/overview/azure/virtualmachines/management)
+- [Java](/java/azure/)
+- [Node.js](/javascript/api/overview/azure/arm-compute-readme)
+- [Python](/python/api/overview/azure/virtualmachines)
+- [Go](/azure/go/)
+
+## Templates
+
+You can create Azure Compute Gallery resources using templates. There are several quickstart templates available:
+
+- [Create a gallery](https://azure.microsoft.com/resources/templates/sig-create/)
+- [Create an image definition in a gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
+- [Create an image version in a gallery](https://azure.microsoft.com/resources/templates/sig-image-version-create/)
++
+## Next steps
+
+Learn how to deploy [images](shared-image-galleries.md) and [VM apps](vm-applications.md) using an Azure Compute Gallery.
virtual-machines Capture Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capture-image-portal.md
Previously updated : 06/21/2021 Last updated : 04/12/2022 # Create an image of a VM in the portal
-A image can be created from a VM and then used to create multiple VMs.
+An image can be created from a VM and then used to create multiple VMs.
For images stored in an Azure Compute Gallery (formerly known as Shared Image Gallery), you can use VMs that already have accounts created on them (specialized) or you can generalize the VM before creating the image to remove machine accounts and other machine-specific information. To generalize a VM, see [Generalize a Windows VM](generalize.md). For more information, see [Generalized and specialized images](shared-image-galleries.md#generalized-and-specialized-images).
For images stored in an Azure Compute Gallery (formerly known as Shared Image Ga
2. Select your VM from the list.
-3. In the **Virtual machine** page for the VM, on the upper menu, select **Capture**.
+3. On the page for the VM, on the upper menu, select **Capture**.
The **Create an image** page appears.
For images stored in an Azure Compute Gallery (formerly known as Shared Image Ga
1. Select an **End of life** date. This date can be used to track when older images need to be retired.
-1. Under [Replication](shared-image-galleries.md#replication), select a default replica count and then select any additional regions where you would like your image replicated.
+1. Under [Replication](azure-compute-gallery.md#replication), select a default replica count and then select any additional regions where you would like your image replicated.
8. When you are done, select **Review + create**.
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/classic-vm-deprecation.md
Azure Cloud Services (classic) retirement was announced in August 2021 [here](ht
- [Microsoft Fast Track](https://www.microsoft.com/fasttrack): Fast track can assist eligible customers with planning & execution for this migration. [Nominate yourself](https://azure.microsoft.com/programs/azure-fasttrack/#nominations) for DC Migration Program. -- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or technical account managers (TAMs)), please work with them for additional resources for migration.
+- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), please work with them for additional resources for migration.
## What actions should I take?
virtual-machines Create Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-gallery.md
Previously updated : 10/05/2021 Last updated : 04/24/2022
ms.devlang: azurecli
An [Azure Compute Gallery](./shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies sharing resources, like images and application packages, across your organization.
-The Azure Compute Gallery lets you share custom VM images and application packages with others in your organization, within or across regions, within an AAD tenant. Choose what you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group resources.
+The Azure Compute Gallery lets you share custom VM images and application packages with others in your organization, within or across regions, within a tenant. Choose what you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group resources.
The gallery is a top-level resource that provides full Azure role-based access control (Azure RBAC).
-## Create a gallery
+## Create a private gallery
Allowed characters for the gallery name are uppercase or lowercase letters, digits, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
Choose an option below for creating your gallery:
### [Portal](#tab/portal)
-The following example creates a gallery named *myGallery* in the *myGalleryRG* resource group.
+ 1. Sign in to the Azure portal at https://portal.azure.com.
-1. Use the type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
+1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
1. In the **Azure Compute Gallery** page, click **Add**. 1. On the **Create Azure Compute Gallery** page, select the correct subscription.
-1. In **Resource group**, select **Create new** and type *myGalleryRG* for the name.
-1. In **Name**, type *myGallery* for the name of the gallery.
-1. Leave the default for **Region**.
+1. In **Resource group**, select a resource group from the drop-down or select **Create new** and type a name for the new resource group.
+1. In **Name**, type a name for the gallery.
+1. Select a **Region** from the drop-down.
1. You can type a short description of the gallery, like *My gallery for testing.* and then click **Review + create**. 1. After validation passes, select **Create**. 1. When the deployment is finished, select **Go to resource**.
az group create --name myGalleryRG --location eastus
az sig create --resource-group myGalleryRG --gallery-name myGallery ``` + ### [PowerShell](#tab/powershell) Create a gallery using [New-AzGallery](/powershell/module/az.compute/new-azgallery). The following example creates a gallery named *myGallery* in the *myGalleryRG* resource group.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"location": "eastus", } ```+ +
+<a name="community"></a>
+
+## Create a community gallery (preview)
+
+A [community gallery](azure-compute-gallery.md#community) is shared publicly with everyone. To create a community gallery, you create the gallery first, then enable it for sharing. The name of the public instance of your gallery will be the prefix you provide, plus a unique GUID.
+
+During the preview, make sure that you create your gallery, image definitions, and image versions in the same region in order to share your gallery publicly.
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+
+When creating an image to share with the community, you will need to provide contact information. This information will be shown **publicly**, so consider carefully what you provide for:
+- Community gallery prefix
+- Publisher support email
+- Publisher URL
+- Legal agreement URL
+
+Information from your image definitions will also be publicly available, like what you provide for **Publisher**, **Offer**, and **SKU**.
+
+### Prerequisites
+
+ Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level, can enable a gallery to go public to the community. To assign a role to a user, group, service principal or managed identity, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
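+
+For example, a role-assignment sketch with the Azure CLI (the user and subscription scope here are illustrative placeholders):
+
+```azurecli-interactive
+az role assignment create \
+   --role "Compute Gallery Sharing Admin" \
+   --assignee "user@contoso.com" \
+   --scope "/subscriptions/<subscription-id>"
+```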
+
+### [CLI](#tab/cli2)
+
+The `--public-name-prefix` value is used to create a name for the public version of your gallery. The `--public-name-prefix` will be the first part of the public name, and the last part will be a GUID, created by the platform, that is unique to your gallery.
+
+```azurecli-interactive
+location=westus
+galleryName=contosoGallery
+resourceGroup=myCGRG
+publisherUri=https://www.contoso.com
+publisherEmail=support@contoso.com
+eulaLink=https://www.contoso.com/eula
+prefix=ContosoImages
+
+az group create --name $resourceGroup --location $location
+
+az sig create \
+ --gallery-name $galleryName \
+ --permissions community \
+ --resource-group $resourceGroup \
+ --publisher-uri $publisherUri \
+ --publisher-email $publisherEmail \
+ --eula $eulaLink \
+ --public-name-prefix $prefix
+```
+
+The output of this command will give you the public name for your community gallery in the `sharingProfile` section, under `publicNames`.
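+
+You can also retrieve the public name later with [az sig show](/cli/azure/sig#az-sig-show) (a sketch, reusing the variables above):
+
+```azurecli-interactive
+az sig show \
+   --gallery-name $galleryName \
+   --resource-group $resourceGroup \
+   --query sharingProfile.communityGalleryInfo.publicNames
+```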
+
+Once you are ready to make the gallery available to the public, enable the community gallery using [az sig share enable-community](/cli/azure/sig/share#az-sig-share-enable-community). Only a user in the `Owner` role definition can enable a gallery for community sharing.
+
+```azurecli-interactive
+az sig share enable-community \
+ --gallery-name $galleryName \
+ --resource-group $resourceGroup
+```
++
+> [!IMPORTANT]
+> If you are listed as the owner of your subscription, but you are having trouble sharing the gallery publicly, you may need to explicitly [add yourself as owner again](../role-based-access-control/role-assignments-portal-subscription-admin.md).
+
+To go back to only RBAC based sharing, use the [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) command.
+
+To delete a gallery shared to community, you must first run `az sig share reset` to stop sharing, then delete the gallery.
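+
+A sketch of that sequence, reusing the variables set earlier in this section:
+
+```azurecli-interactive
+az sig share reset --gallery-name $galleryName --resource-group $resourceGroup
+az sig delete --gallery-name $galleryName --resource-group $resourceGroup
+```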
+
+### [REST](#tab/rest2)
+To create a gallery, submit a PUT request:
+
+```rest
+PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGalleryName?api-version=2021-10-01
+```
+
+Specify `permissions` as `Community` and information about your gallery in the request body:
+
+```json
+{
+ "location": "West US",
+ "properties": {
+ "description": "This is the gallery description.",
+ "sharingProfile": {
+ "permissions": "Community",
+ "communityGalleryInfo": {
+ "publisherUri": "http://www.uri.com",
+ "publisherContact": "contact@domain.com",
+ "eula": "http://www.uri.com/terms",
+ "publicNamePrefix": "Prefix"
+ }
+ }
+ }
+}
+```
+
+To go live with community sharing, send the following POST request. As part of the request, include the property `operationType` with value `EnableCommunity`.
+
+```rest
+POST
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/galleries/{galleryName}/share?api-version=2021-07-01
+{
+  "operationType" : "EnableCommunity"
+}
+```
+
+### [Portal](#tab/portal2)
+
+Making a community gallery available to all Azure users is a two-step process. First, you create the gallery with community sharing enabled. Then, when you are ready to make it public, you share the gallery.
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+1. Type **Azure Compute Gallery** in the search box and select **Azure Compute Gallery** in the results.
+1. In the **Azure Compute Gallery** page, click **Add**.
+1. On the **Create Azure Compute Gallery** page, select the correct subscription.
+1. In **Resource group**, select **Create new** and type *myGalleryRG* for the name.
+1. In **Name**, type *myGallery* for the name of the gallery.
+1. Leave the default for **Region**.
+1. You can type a short description of the gallery, like *My gallery for testing*.
+1. At the bottom of the page, select **Next: Sharing method**.
+ :::image type="content" source="media/create-gallery/create-gallery.png" alt-text="Screenshot showing where to select to go on to sharing methods.":::
+1. On the **Sharing** tab, select **RBAC + share to public community gallery**.
+
+ :::image type="content" source="media/create-gallery/sharing-type.png" alt-text="Screenshot showing the option to share using both role-based access control and a community gallery.":::
+
+1. For **Community gallery prefix**, type a prefix; the platform appends a unique GUID to it to create the public name for your community gallery.
+1. For **Publisher email**, type a valid email address that can be used to communicate with you about the gallery.
+1. For **Publisher URL**, type the URL where users can get more information about the images in your community gallery.
+1. For **Legal Agreement URL**, type the URL where end users can find legal terms for the image.
+1. When you are done, select **Review + create**.
+
+ :::image type="content" source="media/create-gallery/rbac-community.png" alt-text="Screenshot showing the information that needs to be completed to create a community gallery.":::
+
+1. After validation passes, select **Create**.
+1. When the deployment is finished, select **Go to resource**.
+
+To see the public name of your gallery, select **Sharing** in the left menu.
+
+When you are ready to make the gallery public:
+
+1. On the page for the gallery, select **Sharing** from the left menu.
+1. Select **Share** from the top of the page.
+ :::image type="content" source="media/create-gallery/share.png" alt-text="Screenshot showing the Share button for sharing your gallery to the community.":::
+1. When you are done, select **Save**.
++
+> [!IMPORTANT]
+> If you are listed as the owner of your subscription, but you are having trouble sharing the gallery publicly, you may need to explicitly [add yourself as owner again](../role-based-access-control/role-assignments-portal-subscription-admin.md).
+++++ ## Next steps - Create an [image definition and an image version](image-version.md).
virtual-machines Error Codes Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/error-codes-spot.md
Here are some possible error codes you could receive when using Azure Spot Virtu
| MoveResourcesWithAzureSpotVMNotSupported | The Move resources request contains an Azure Spot Virtual Machine. Not supported. Check the error details for virtual machine Ids. | You cannot move Azure Spot Virtual Machines. | | MoveResourcesWithAzureSpotVmssNotSupported | The Move resources request contains an Azure Spot virtual machine scale set. Not supported. Check the error details for virtual machine scale set Ids. | You cannot move Azure Spot virtual machine scale set instances. | | AzureSpotVMNotSupportedInVmssWithVMOrchestrationMode | Azure Spot Virtual Machine is not supported in Virtual machine scale set with VM Orchestration mode. | Set the orchestration mode to virtual machine scale set in order to use Azure Spot Virtual Machine instances. |
-| SpotRestorationIsNotSupportedForThisAPIVersion | Spot restoration feature is not supported for this API version. | For an existing scaleset, perform a PATCH using using API version 2021-07-01 or later. <br><br> For new scale set deployments, add the following property to the Azure Resource Manager template using API version 2021-07-01 or later: <br><br> :::image type="content" source="media/spot/spot-try-restore-error-codes-1.png" alt-text="Error code sample to use the correct API version.":::|
-| SpotRestorationIsSupportedOnlyForAzureSpotScaleSets | Spot restoration feature is supported only for Azure Spot Virtual Machine scale sets. | Spot restoration feature is only supported for Azure Spot Virtual Machine scale sets. To use this feature, deploy Azure Spot using Virtual Machine scale sets. |
+| SpotRestorationIsNotSupportedForThisAPIVersion | Spot restoration feature is not supported for this API version. | For an existing scaleset, perform a PATCH using API version 2021-07-01 or later. <br><br> For new scale set deployments, add the following property to the Azure Resource Manager template using API version 2021-07-01 or later: <br><br> :::image type="content" source="media/spot/spot-try-restore-error-codes-1.png" alt-text="Error code sample to use the correct API version.":::|
+| SpotRestorationIsSupportedOnlyForAzureSpotScaleSets | Spot restoration feature is supported only for Azure Spot virtual machine scale sets. | Spot restoration feature is only supported for Azure Spot virtual machine scale sets. To use this feature, deploy Azure Spot using virtual machine scale sets. |
**Next steps**
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
Previously updated : 06/16/2021 Last updated : 03/24/2022
Generalizing a VM is not necessary for creating an image in an [Azure Compute Gallery](shared-image-galleries.md#generalized-and-specialized-images) unless you specifically want to create a generalized image. Generalizing is required when creating a managed image outside of a gallery.
-Generalizing removes machine specific information so the image can be used to create multiple VMs. Once the VM has been generalized, you need to let the platform know that the VM has been generalized so that the boot sequence can be set correctly. Once a VM is generalized, it should not be restarted.
+Generalizing removes machine specific information so the image can be used to create multiple VMs. Once the VM has been generalized, you need to let the platform know so that the boot sequence can be set correctly.
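+
+For example, a minimal sketch of that step with the Azure CLI, assuming a VM named *myVM* in *myResourceGroup* (run it only after completing the OS-specific steps below):
+
+```azurecli-interactive
+# Stop the VM, then tell the platform it has been generalized
+az vm deallocate --resource-group myResourceGroup --name myVM
+az vm generalize --resource-group myResourceGroup --name myVM
+```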
## Linux
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
Previously updated : 08/31/2021 Last updated : 04/26/2022
# Create an image definition and an image version
-A [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery)simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap deployment tasks like preloading applications, application configurations, and other OS configurations.
+An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
-The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within an Azure AD tenant. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images.
+The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within an Azure AD tenant, or publicly using a [community gallery (preview)](azure-compute-gallery.md#community). Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images.
The Azure Compute Gallery feature has multiple resource types:
Allowed characters for the image version are numbers and periods. Numbers must b
When working through this article, replace the resource names where needed.
+## Community gallery (preview)
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To share images in the community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs and scale sets from images shared in the community gallery is open to all Azure users.
+>
+> Information from your image definitions will be publicly available, like what you provide for **Publisher**, **Offer**, and **SKU**.
+
+If you will be sharing your images using a [community gallery (preview)](azure-compute-gallery.md#community), make sure that you create your gallery, image definitions, and image versions in the same region.
+
+When users search for community gallery images, only the latest version of an image is shown.
++ ## Create an image Choose an option below for creating your image definition and image version:
To create an image using a source other than a VM, follow these steps.
1. Go to the [Azure portal](https://portal.azure.com), then search for and select **Azure Compute Gallery**. 1. Select the gallery you want to use from the list.
-1. On the page for your gallery, select **Add** from the top of the page and then select **VM image definition** from the drop-down.
-1. on the **Add new image definition to Azure Compute Gallery** page, in the **Basics** tab, select a **Region**.
+1. On the page for your gallery, select **Add** from the top of the page and then select **VM image definition** from the drop-down.
+1. On the **Add new image definition to Azure Compute Gallery** page, in the **Basics** tab, select a **Region**.
1. For **Image definition name**, type a name like *myImageDefinition*. 1. For **Operating system**, select the correct option based on your source. 1. For **VM generation**, select the option based on your source. In most cases, this will be *Gen 1*. For more information, see [Support for generation 2 VMs](generation-2.md).
The syntax for creating the image will change, depending on what you are using a
| Source | Parameter set | |||
-| **OS Disk**| |
+| **OS Disk:**| |
| VM using the VM ID| `--managed-image <Resource ID of the VM>` | | Managed image or another image version | `--managed-image <Resource ID of the managed image or image version` | | Snapshot or managed disk | `--os-snapshot <Resource ID of the snapshot or managed disk>` | | VHD in a storage account | `--os-vhd-uri <URI> --os-vhd-storage-account <storage account name>`. |
-| **Data disk** |
+| **Data disk:** |
| Snapshot or managed disk | `--data-snapshots <Resource ID of the snapshot or managed disk> --data-snapshot-luns <LUN number>` | | VHD in a storage account | `--data-vhds-sa <storageaccountname> --data-vhds-uris <URI> --data-vhds-luns <LUN number>` |
az sig image-version create \
### [PowerShell](#tab/powershell)
-Image definitions create a logical grouping for images. When making your image definition, make sure is has all of the correct information. If you generalized the source for the image (using Sysprep for Windows, or waagent -deprovision for Linux) then you should create an image definition using `-OsState generalized`. If you didn't generalized the source, create an image definition using `-OsState specialized`.
+Image definitions create a logical grouping for images. When making your image definition, make sure it has all of the correct information. If you [generalized](generalize.md) the source VM, then you should create an image definition using `-OsState generalized`. If you didn't generalize the source, create an image definition using `-OsState specialized`.
For more information about the values you can specify for an image definition, see [Image definitions](./shared-image-galleries.md#image-definitions).
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
![Screenshot showing how to copy the IP address for the virtual machine](./media/quick-create-portal/ip-address.png) ## Connect to virtual machine
virtual-machines Tutorial Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-custom-images.md
Custom images are like marketplace images, but you create them yourself. Custom
This tutorial uses the CLI within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version. To open the Cloud Shell, select **Try it** from the top of any code block.
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.4.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.35.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
## Overview
A gallery is the primary resource used for enabling image sharing.
Allowed characters for the gallery name are uppercase or lowercase letters, digits, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
-Create an gallery using [az sig create](/cli/azure/sig#az-sig-create). The following example creates a resource group named gallery named *myGalleryRG* in *East US*, and a gallery named *myGallery*.
+Create a gallery using [az sig create](/cli/azure/sig#az-sig-create). The following example creates a resource group named *myGalleryRG* in *East US*, and a gallery named *myGallery*.
```azurecli-interactive az group create --name myGalleryRG --location eastus
az sig image-version create \
## Create the VM
-Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --specialized parameter to indicate the the image is a specialized image.
+Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --specialized parameter to indicate that the image is a specialized image.
Use the image definition ID for `--image` to create the VM from the latest version of the image that is available. You can also create the VM from a specific version by supplying the image version ID for `--image`.
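A minimal sketch of that command (the subscription and resource names are illustrative placeholders):

```azurecli-interactive
# The image definition ID deploys the latest version; append /versions/<version number> to pin a specific one
az vm create \
   --resource-group myResourceGroup \
   --name myVM \
   --image "/subscriptions/<subscription-id>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \
   --specialized
```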
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
Title: Share a gallery using RBAC
-description: Learn how to share a gallery using role-based access control (RBAC).
+ Title: Share resources in an Azure Compute Gallery
+description: Learn how to share resources explicitly or to all Azure users using role-based access control or community galleries.
Previously updated : 08/31/2021 Last updated : 04/24/2022
ms.devlang: azurecli
-# Use RBAC to share gallery resources
+# Share gallery resources
+
+There are two main ways to share images in an Azure Compute Gallery:
+
+- Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level.
+- Community gallery lets you share your entire gallery publicly, to all Azure users.
+
+## RBAC
The Azure Compute Gallery, definitions, and versions are all resources, so they can be shared using the built-in native Azure RBAC controls. Using Azure RBAC, you can share these resources with other users, service principals, and groups. You can even share access with individuals outside of the tenant they were created in. Once a user has access to the image or application version, they can deploy a VM or a Virtual Machine Scale Set.
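As a sketch of what a gallery-level share might look like with the Azure CLI (the role, user, and scope here are illustrative placeholders):

```azurecli-interactive
az role assignment create \
   --role "Reader" \
   --assignee "user@contoso.com" \
   --scope "/subscriptions/<subscription-id>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery"
```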
We recommend sharing at the gallery level for the best experience and prevent ma
If the user is outside of your organization, they will get an email invitation to join the organization. The user needs to accept the invitation, then they will be able to see the gallery and all of the image definitions and versions in their list of resources.
-## Share a gallery
- ### [Portal](#tab/portal) If the user is outside of your organization, they will get an email invitation to join the organization. The user needs to accept the invitation, then they will be able to see the gallery and all of the definitions and versions in their list of resources.
New-AzRoleAssignment `
+<a name="community"></a>
+## Community gallery (preview)
+
+To share a gallery with all Azure users, you can [create a community gallery (preview)](create-gallery.md#community). Community galleries can be used by anyone with an Azure subscription. Someone creating a VM can browse images shared with the community using the portal, REST, or the Azure CLI.
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+>
+> During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery.
+
+To learn more, see [Community gallery (preview) overview](azure-compute-gallery.md#community) and [Create a community gallery](create-gallery.md#community).
++ ## Next steps Create an [image definition and an image version](image-version.md).
-You can also create Azure Compute Gallery resources using templates. There are several Azure Quickstart Templates available:
+You can also create Azure Compute Gallery resources using templates. There are several quickstart templates available:
- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/) - [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
Previously updated : 6/8/2021 Last updated : 04/24/2022 #Customer intent: As an IT administrator, I want to learn about how to create shared VM images to minimize the number of post-deployment configuration tasks.
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets -
-Azure Compute Gallery now includes the existing Shared Image Gallery service and the new [VM Applications](vm-applications.md) features and capabilities.
-
-An Azure Compute Gallery helps you build structure and organization around your Azure resources, like images and [applications](vm-applications.md). An Azure Compute Gallery provides:
-- Global replication.-- Versioning and grouping of resources for easier management.-- Highly available resources with Zone Redundant Storage (ZRS) accounts in regions that support Availability Zones. ZRS offers better resilience against zonal failures.-- Premium storage support (Premium_LRS).-- Sharing across subscriptions, and even between Active Directory (AD) tenants, using Azure RBAC.-- Scaling your deployments with resource replicas in each region.-
-With a gallery, you can share your resources to different users, service principals, or AD groups within your organization. Resources can be replicated to multiple regions, for quicker scaling of your deployments.
-
-For more information about storing applications in an Azure Compute Gallery, see [VM Applications](vm-applications.md)
-
-## Image management
An image is a copy of either a full VM (including any attached data disks) or just the OS disk, depending on how it is created. When you create a VM from the image, a copy of the VHDs in the image is used to create the disks for the new VM. The image remains in storage and can be used over and over again to create new VMs.
-If you have a large number of images that you need to maintain, and would like to make them available throughout your company, you can use an Azure Compute Gallery as a repository.
+If you have a large number of images that you need to maintain, and would like to make them available throughout your company, you can use an [Azure Compute Gallery](azure-compute-gallery.md) as a repository.
When you use a gallery to store images, multiple resource types are created:
The following parameters determine which types of image versions they can contai
The following are other parameters that can be set on your image definition so that you can more easily track your resources: - Description - use description to give more detailed information on why the image definition exists. For example, you might have an image definition for your front-end server that has the application pre-installed.-- Eula - can be used to point to an end-user license agreement specific to the image definition.
+- EULA - can be used to point to an end-user license agreement specific to the image definition.
- Privacy Statement and Release notes - store release notes and privacy statements in Azure storage and provide a URI for accessing them as part of the image definition.-- End-of-life date - establish a default end-of-life dates for all image versions in the image definition. End-of-life dates are informational; users will still be able to create VMs from images and versions past the end-of-life date.
+- End-of-life date - establish a default date after which the image shouldn't be used, for all image versions in the image definition. End-of-life dates are informational; users will still be able to create VMs from images and versions past the end-of-life date.
- Tag - you can add tags when you create your image definition. For more information about tags, see [Using tags to organize your resources](../azure-resource-manager/management/tag-resources.md) - Minimum and maximum vCPU and memory recommendations - if your image has vCPU and memory recommendations, you can attach that information to your image definition. - Disallowed disk types - you can provide information about the storage needs for your VM. For example, if the image isn't suited for standard HDD disks, you add them to the disallow list.
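Several of these values map to parameters on `az sig image-definition create`. A sketch with illustrative names (the publisher, offer, SKU, and OS values are the required ones):

```azurecli-interactive
az sig image-definition create \
   --resource-group myGalleryRG \
   --gallery-name myGallery \
   --gallery-image-definition myImageDefinition \
   --publisher myPublisher --offer myOffer --sku mySKU \
   --os-type Linux --os-state Generalized \
   --description "Front-end server image with the application pre-installed" \
   --eula https://www.contoso.com/eula \
   --end-of-life-date 2025-12-31
```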
Specialized VMs have not been through a process to remove machine specific infor
- VMs will have the **Computer name** of the VM the image was taken from. You should change the computer name to avoid collisions. - The `osProfile` is how some sensitive information is passed to the VM, using `secrets`. This may cause issues using KeyVault, WinRM and other functionality that uses `secrets` in the `osProfile`. In some cases, you can use managed service identities (MSI) to work around these limitations.
-## Regional Support
-
-All public regions can be target regions, but certain regions require that customers go through a request process in order to gain access. To request that a subscription is added to the allowlist for a region such as Australia Central or Australia Central 2, submit [an access request](/troubleshoot/azure/general/region-access-request-process)
-
-## Limits
-
-There are limits, per subscription, for deploying resources using Azure Compute Galleries:
-- 100 galleries, per subscription, per region-- 1,000 image definitions, per subscription, per region-- 10,000 image versions, per subscription, per region-- 10 image version replicas, per subscription, per region-- Any disk attached to the image must be less than or equal to 1TB in size-
-For more information, see [Check resource usage against limits](../networking/check-usage-against-limits.md) for examples on how to check your current usage.
-
-## Scaling
-Azure Compute Gallery allows you to specify the number of replicas you want Azure to keep of the images. This helps in multi-VM deployment scenarios as the VM deployments can be spread to different replicas reducing the chance of instance creation processing being throttled due to overloading of a single replica.
-
-With Azure Compute Gallery, you can now deploy up to a 1,000 VM instances in a virtual machine scale set (up from 600 with managed images). Image replicas provide for better deployment performance, reliability and consistency. You can set a different replica count in each target region, based on the scale needs for the region. Since each replica is a deep copy of your image, this helps scale your deployments linearly with each extra replica. While we understand no two images or regions are the same, here's our general guideline on how to use replicas in a region:
--- For non-Virtual Machine Scale Set deployments - For every 20 VMs that you create concurrently, we recommend you keep one replica. For example, if you are creating 120 VMs concurrently using the same image in a region, we suggest you keep at least 6 replicas of your image. -- For Virtual Machine Scale Set deployments - For each scale set you create concurrently, we recommend you keep one replica.-
-We always recommend you to overprovision the number of replicas due to factors like image size, content and OS type.
-
-![Graphic showing how you can scale images](./media/shared-image-galleries/scaling.png)
-
-## Make your images highly available
-
-[Azure Zone Redundant Storage (ZRS)](https://azure.microsoft.com/blog/azure-zone-redundant-storage-in-public-preview/) provides resilience against an Availability Zone failure in the region. With the general availability of Azure Compute Gallery, you can choose to store your images in ZRS accounts in regions with Availability Zones.
-
-You can also choose the account type for each of the target regions. The default storage account type is Standard_LRS, but you can choose Standard_ZRS for regions with Availability Zones. For more information on regional availability of ZRS, see [Data redundancy](../storage/common/storage-redundancy.md).
-
-![Graphic showing ZRS](./media/shared-image-galleries/zrs.png)
-
-## Replication
-Azure Compute Gallery also allows you to replicate your images to other Azure regions automatically. Each image version can be replicated to different regions depending on what makes sense for your organization. One example is to always replicate the latest image in multi-regions while all older versions are only available in 1 region. This can help save on storage costs for image versions.
-
-The regions an image version is replicated to can be updated after creation time. The time it takes to replicate to different regions depends on the amount of data being copied and the number of regions the version is replicated to. This can take a few hours in some cases. While the replication is happening, you can view the status of replication per region. Once the image replication is complete in a region, you can then deploy a VM or scale-set using that image version in the region.
-
-![Graphic showing how you can replicate images](./media/shared-image-galleries/replication.png)
-
-## Access
-
-As the Azure Compute Gallery, Image Definition, and Image version are all resources, they can be shared using the built-in native Azure RBAC controls. Using Azure RBAC you can share these resources to other users, service principals, and groups. You can even share access to individuals outside of the tenant they were created within. Once a user has access to the image version, they can deploy a VM or a Virtual Machine Scale Set. Here is the sharing matrix that helps understand what the user gets access to:
-
-| Shared with User | Azure Compute Gallery | Image Definition | Image version |
-|-|-|--|-|
-| Azure Compute Gallery | Yes | Yes | Yes |
-| Image Definition | No | Yes | Yes |
-
-We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
-
-Images can also be shared, at scale, even across tenants using a multi-tenant app registration. For more information about sharing images across tenants, see "Share gallery VM images across Azure tenants" using the [Azure CLI](./linux/share-images-across-tenants.md) or [PowerShell](./windows/share-images-across-tenants.md).
-
-## Billing
-There is no extra charge for using the Azure Compute Gallery service. You will be charged for the following resources:
-- Storage costs of storing each replica. The storage cost is charged as a snapshot and is based on the occupied size of the image version, the number of replicas of the image version and the number of regions the version is replicated to. -- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no additional charges. -
-For example, let's say you have an image of a 127 GB OS disk, that only occupies 10GB of storage, and one empty 32 GB data disk. The occupied size of each image would only be 10 GB. The image is replicated to 3 regions and each region has two replicas. There will be six total snapshots, each using 10GB. You will be charged the storage cost for each snapshot based on the occupied size of 10 GB. You will pay network egress charges for the first replica to be copied to the additional two regions. For more information on the pricing of snapshots in each region, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
- ## Updating resources
Image version:
- Exclude from latest - End of life date
+## Sharing
+
+You can [share images](share-gallery.md) to users and groups using the standard role-based access control (RBAC) or you can share an entire gallery of images to the public, using a [community gallery (preview)](azure-compute-gallery.md#community).
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community gallery is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To share images in the community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs and scale sets from images shared in the community gallery is open to all Azure users.
++
+## Shallow replication
+
+When you create an image version, you can set the replication mode to shallow for development and test. Shallow replication skips copying the image, so the image version is ready much faster. But, it also means you can't deploy a large number of VMs from that image version. This is similar to the way that the older managed images worked.
+
+Shallow replication can also be useful if you have very large images (up to 32TB) that aren't frequently deployed. Because the source image isn't copied, larger disks can be used. But, they also can't be used for deploying large numbers of VMs concurrently.
+
+To set an image for shallow replication, use `--replication-mode Shallow` with the Azure CLI.
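+
+For example, a minimal sketch (the gallery, image definition, and source resource ID are illustrative placeholders):
+
+```azurecli-interactive
+az sig image-version create \
+   --resource-group myGalleryRG \
+   --gallery-name myGallery \
+   --gallery-image-definition myImageDefinition \
+   --gallery-image-version 1.0.0 \
+   --managed-image "<resource ID of the source VM or managed image>" \
+   --replication-mode Shallow
+```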
+ ## SDK support The following SDKs support creating Azure Compute Galleries:
The following SDKs support creating Azure Compute Galleries:
## Templates
-You can create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
+You can create Azure Compute Gallery resource using templates. There are several quickstart templates available:
- [Create a gallery](https://azure.microsoft.com/resources/templates/sig-create/) - [Create an image definition in a gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
To list all the Azure Compute Gallery resources across subscriptions that you ha
1. Open the [Azure portal](https://portal.azure.com). 1. Scroll down the page and select **All resources**. 1. Select all the subscriptions under which you'd like to list all the resources.
-1. Look for resources of type **Azure Compute Gallery**, .
+1. Look for resources of the **Azure Compute Gallery** type.
To list all the Azure Compute Gallery resources, across subscriptions that you have permissions to, use the following command in the Azure CLI:
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
Title: List, update, and delete image resources
-description: List, update, and delete image resources in your Azure Compute Gallery.
+ Title: List, update, and delete resources
+description: List, update, and delete resources in your Azure Compute Gallery.
++ - Previously updated : 08/05/2021--- Last updated : 04/20/2022++
-# List, update, and delete image resources
+# List, update, and delete gallery resources
You can manage your Azure Compute Gallery (formerly known as Shared Image Gallery) resources using the Azure CLI or Azure PowerShell.
-## List information
+## List your gallery information
### [CLI](#tab/cli)
-Get the location, status and other information about the available image galleries using [az sig list](/cli/azure/sig#az-sig-list).
+Get the location, status and other information about your image galleries using [az sig list](/cli/azure/sig#az-sig-list).
```azurecli-interactive az sig list -o table ```
-List the image definitions in a gallery, including information about OS type and status, using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list).
++++
+**List the image definitions**
+
+List the image definitions in your gallery, including information about OS type and status, using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list).
+ ```azurecli-interactive az sig image-definition list --resource-group myGalleryRG --gallery-name myGallery -o table ```
-List the image versions in a gallery, using [az sig image-version list](/cli/azure/sig/image-version#az-sig-image-version-list).
+++
+**List image versions**
+
+List image versions in your gallery using [az sig image-version list](/cli/azure/sig/image-version#az_sig_image_version_list):
+ ```azurecli-interactive az sig image-version list --resource-group myGalleryRG --gallery-name myGallery --gallery-image-definition myImageDefinition -o table ```
-Get the ID of an image version using [az sig image-version show](/cli/azure/sig/image-version#az-sig-image-version-show).
++
+**Get a specific image version**
+
+Get the ID of a specific image version in your gallery using [az sig image-version show](/cli/azure/sig/image-version#az_sig_image_version_show).
```azurecli-interactive az sig image-version show \
az sig image-version show \
--gallery-name myGallery \ --gallery-image-definition myImageDefinition \ --gallery-image-version 1.0.0 \
- --query "id"
+ --query "id"
``` + ### [PowerShell](#tab/powershell) List all galleries by name.
Image version:
- Exclusion from latest - End of life date
-If you plan on adding replica regions, do not delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
+If you plan on adding replica regions, don't delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
Update the description of a gallery using [az sig update](/cli/azure/sig#az-sig-update).
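A sketch of that update (assuming the example gallery names used in this article):

```azurecli-interactive
az sig update \
   --gallery-name myGallery \
   --resource-group myGalleryRG \
   --set description="Updated description for my gallery."
```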
Image version:
- Exclusion from latest - End of life date
-If you plan on adding replica regions, do not delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
+If you plan on adding replica regions, don't delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
To update the description of a gallery, use [Update-AzGallery](/powershell/module/az.compute/update-azgallery).
Update-AzGalleryImageVersion `
You have to delete resources in reverse order, by deleting the image version first. After you delete all of the image versions, you can delete the image definition. After you delete all image definitions, you can delete the gallery.
+Before you can delete a community shared gallery, you need to use [az sig share reset](/cli/azure/sig/share#az-sig-share-reset) to stop sharing the gallery publicly.
+ ### [CLI](#tab/cli) Delete an image version using [az sig image-version delete](/cli/azure/sig/image-version#az-sig-image-version-delete).
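+
+A sketch of the full reverse-order teardown (assuming the example resource names used in this article):
+
+```azurecli-interactive
+az sig image-version delete \
+   --resource-group myGalleryRG \
+   --gallery-name myGallery \
+   --gallery-image-definition myImageDefinition \
+   --gallery-image-version 1.0.0
+
+az sig image-definition delete \
+   --resource-group myGalleryRG \
+   --gallery-name myGallery \
+   --gallery-image-definition myImageDefinition
+
+az sig delete --resource-group myGalleryRG --gallery-name myGallery
+```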
Remove-AzResourceGroup -Name $resourceGroup
+## Community galleries
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+To list your own galleries, and output the public names for your community galleries:
+
+```azurecli-interactive
+az sig list --query [*]."{Name:name,PublicName:sharingProfile.communityGalleryInfo.publicNames}"
+```
++
+> [!NOTE]
+> As an end user, to get the public name of a community gallery, you currently need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
++
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community).
+
+In this example, we list all of the images in the *ContosoImages* gallery in *West US*, showing each image's name, the unique ID that is needed to create a VM, the OS, and the OS state.
+
+```azurecli-interactive
+ az sig image-definition list-community \
+ --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+ --location westus \
+ --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
+
+List image versions shared in a community gallery using [az sig image-version list-community](/cli/azure/sig/image-version#az_sig_image_version_list_community):
+
+```azurecli-interactive
+az sig image-version list-community \
+ --location westus \
+ --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+ --gallery-image-definition myImageDefinition \
+ --query [*]."{Name:name,UniqueId:uniqueId}" \
+ -o table
+```
+ ## Next steps [Azure Image Builder (preview)](./image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](./linux/image-builder-gallery-update-image-version.md).
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Previously updated : 08/31/2021 Last updated : 04/26/2022
Create a VM from a [generalized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If you want to create a VM using a specialized image, see [Create a VM from a specialized image](vm-specialized-image-version.md).
+## Create a VM from your gallery
### [Portal](#tab/portal) Now you can create one or more new VMs. This example creates a VM named *myVM*, in the *myResourceGroup*, in the *East US* datacenter.
az sig image-definition list --resource-group $resourceGroup --gallery-name $gal
Create a VM using [az vm create](/cli/azure/vm#az-vm-create). To use the latest version of the image, set `--image` to the ID of the image definition.
-The example below is for creating a Linux VMsecured with SSH. For Windows or to secure a Linux VM with a password, remove `--generate-ssh-keys` to be prompted for a password. If you want to supply a password directly, replace `--generate-ssh-keys` with `--admin-password`. Replace resource names as needed in this example.
+The example below is for creating a Linux VM secured with SSH. For Windows or to secure a Linux VM with a password, remove `--generate-ssh-keys` to be prompted for a password. If you want to supply a password directly, replace `--generate-ssh-keys` with `--admin-password`. Replace resource names as needed in this example.
-```azurecli-interactive
+```azurecli-interactive
imgDef="/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" vmResourceGroup=myResourceGroup location=eastus vmName=myVM adminUsername=azureuser - az group create --name $vmResourceGroup --location $location az vm create\
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"location": "eastus", } ```+ Create a Linux VM. The `oSProfile` section contains some OS specific details. See the next code example for the Windows syntax. ```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
+
+## Create a VM from a community gallery image
+
+> [!IMPORTANT]
+> Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).
++
+### [CLI](#tab/cli2)
+
+To create a VM using an image shared to a community gallery, use the unique ID of the image for `--image`, which will be in the following format:
+
+```
+/CommunityGalleries/<community gallery name, like: ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f>/Images/<image name>/Versions/latest
+```
+
+As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
+
+In this example, we are creating a VM from a Linux image and creating SSH keys for authentication.
+
+```azurecli-interactive
+imgDef="/CommunityGalleries/ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f/Images/myLinuxImage/Versions/latest"
+vmResourceGroup=myResourceGroup
+location=eastus
+vmName=myVM
+adminUsername=azureuser
+
+az group create --name $vmResourceGroup --location $location
+
+az vm create\
+ --resource-group $vmResourceGroup \
+ --name $vmName \
+ --image $imgDef \
+ --admin-username $adminUsername \
+ --generate-ssh-keys
+```
+
+When using a community image, you'll be prompted to accept the legal terms. The message will look like this:
+
+```output
+To create the VM from community gallery image, you must accept the license agreement and privacy statement: http://contoso.com. (If you want to accept the legal terms by default, please use the option '--accept-term' when creating VM/VMSS) (Y/n):
+```
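+
+If you're automating deployments and want to accept the terms up front, add the flag mentioned in the prompt (a sketch, reusing the variables above):
+
+```azurecli-interactive
+az vm create \
+   --resource-group $vmResourceGroup \
+   --name $vmName \
+   --image $imgDef \
+   --admin-username $adminUsername \
+   --generate-ssh-keys \
+   --accept-term
+```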
+
+### [Portal](#tab/portal2)
+
+1. Type **virtual machines** in the search box.
+1. Under **Services**, select **Virtual machines**.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group or select one from the drop-down.
+1. Under **Instance details**, type a name for the **Virtual machine name**.
+1. For **Security type**, make sure *Standard* is selected.
+1. For your **Image**, select **See all images**. The **Select an image** page will open.
+ :::image type="content" source="media/shared-image-galleries/see-all-images.png" alt-text="Screenshot showing the link to select to see more image options.":::
+1. In the left menu, under **Other Items**, select **Community images (PREVIEW)**. The **Other Items | Community Images (PREVIEW)** page will open.
+ :::image type="content" source="media/shared-image-galleries/community.png" alt-text="Screenshot showing where to select community gallery images.":::
+1. Select an image from the list. Make sure that the **OS state** is *Generalized*. If you want to use a specialized image, see [Create a VM using a specialized image version](vm-specialized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in changes to match the image.
+1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
++
+### [REST](#tab/rest2)
+
+Get the ID of the image version. The value will be used in the VM deployment request.
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/Locations/{location}/CommunityGalleries/{CommunityGalleryPublicName}/Images/{galleryImageName}/Versions/{1.0.0}?api-version=2021-07-01
+
+```
+
+Response:
+
+```json
+"location": "West US",
+ "identifier": {
+ "uniqueId": "/CommunityGalleries/{PublicGalleryName}/Images/{imageName}/Versions/{verionsName}"
+ },
+ "name": "1.0.0"
+```
+
++
+Now you can deploy the VM. The example requires API version 2021-07-01 or later.
+
+```rest
+PUT
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines/{VMName}?api-version=2021-07-01
+{
+ "location": "{location}",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D1_v2"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "communityGalleryImageId":"/communityGalleries/{publicGalleryName}/images/{galleryImageName}/versions/1.0.0"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "myVMosdisk",
+ "createOption": "FromImage"
+ }
+ },
+ "osProfile": {
+ "adminUsername": "azureuser",
+ "computerName": "myVM",
+        "adminPassword": "{password}"
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+                "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/{rg}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}",
+ "properties": {
+ "primary": true
+ }
+ }
+ ]
+ }
+ }
+}
+
+```
+++ **Next steps** [Azure Image Builder (preview)](./image-builder-overview.md) can help automate image version creation, you can even use it to update and [create a new image version from an existing image version](./linux/image-builder-gallery-update-image-version.md).
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
Previously updated : 08/05/2021 Last updated : 04/26/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-Create a VM from a [specialized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If want to create a VM using a generalized image version, see [Create a VM from a generalized image version](vm-generalized-image-version.md).
+Create a VM from a [specialized image version](./shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery (formerly known as Shared Image Gallery). If you want to create a VM using a generalized image version, see [Create a VM from a generalized image version](vm-generalized-image-version.md).
> [!IMPORTANT] >
-> When you create a new VM from a specialized image, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
+> When you create a new VM from a specialized image, the new VM retains the computer name of the original VM. Other computer-specific information, like the CMID, is also kept. This duplicate information can cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
Replace resource names as needed in these examples.
+## Create a VM from your gallery
+ ### [Portal](#tab/portal) Now you can create one or more new VMs. This example creates a VM named *myVM*, in the *myResourceGroup*, in the *East US* datacenter.
az sig image-definition list \
--output tsv ```
-Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --specialized parameter to indicate the the image is a specialized image.
+Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the --specialized parameter to indicate that the image is a specialized image.
-Use the image definition ID for `--image` to create the VM from the latest version of the image that is available. You can also create the VM from a specific version by supplying the image version ID for `--image`.
+Use the image definition ID for `--image` to create the VM from the latest version of the image that is available. You can also create the VM from a specific version by supplying the image version ID for `--image`.
In this example, we are creating a VM from the latest version of the *myImageDefinition* image.
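
A minimal sketch of that CLI command might look like the following; the subscription ID, resource group, and gallery names are placeholders to replace with your own values:

```azurecli
az vm create \
   --resource-group myResourceGroup \
   --name myVM \
   --image "/subscriptions/<subscription-id>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \
   --specialized
```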
New-AzVM `
  -VM $vmConfig
```
++
+## Create a VM from a community gallery image
+
+> [!IMPORTANT]
+> Azure Compute Gallery - community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).
++
+### [CLI](#tab/cli2)
+
+To create a VM using an image shared to a community gallery, use the unique ID of the image for the `--image` parameter, which will be in the following format:
+
+```
+/CommunityGalleries/<community gallery name, like: ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f>/Images/<image name>/Versions/latest
+```
+
+As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
++
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImages* gallery in *West US*, showing each image's name, the unique ID that is needed to create a VM, the OS type, and the OS state.
+
+```azurecli-interactive
+ az sig image-definition list-community \
+ --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+ --location westus \
+ --query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table
+```
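+
+If you want to reuse one of those unique IDs, you can capture it in a shell variable. This is a sketch rather than part of the original walkthrough; the gallery and image names are the same placeholders used above:
+
+```azurecli
+# Capture the unique ID of a chosen image definition for later use
+imageId=$(az sig image-definition list-community \
+   --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
+   --location westus \
+   --query "[?name=='LinuxSpecializedVersions'].uniqueId" -o tsv)
+echo $imageId
+```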
+
+To create a VM from a generalized image in a community gallery, see [Create a VM from a generalized image version](vm-generalized-image-version.md).
+
+Create the VM using [az vm create](/cli/azure/vm#az-vm-create) using the `--specialized` parameter to indicate that the image is a specialized image.
+
+In this example, we are creating a VM from the latest version of the *myImageDefinition* image.
+
+```azurecli
+az group create --name myResourceGroup --location eastus
+az vm create --resource-group myResourceGroup \
+ --name myVM \
+ --image "/CommunityGalleries/ContosoImages-f61bb1d9-3c5a-4ad2-99b5-744030225de6/Images/LinuxSpecializedVersions/latest" \
+ --specialized
+```
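+
+To confirm which image the new VM was built from, you can inspect its image reference. This verification step is a suggestion, not part of the original article:
+
+```azurecli
+# Show the image reference the VM was created from
+az vm show \
+   --resource-group myResourceGroup \
+   --name myVM \
+   --query "storageProfile.imageReference" \
+   --output json
+```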
+
+When using a community image, you'll be prompted to accept the legal terms. The message will look like this:
+
+```output
+To create the VM from community gallery image, you must accept the license agreement and privacy statement: http://contoso.com. (If you want to accept the legal terms by default, please use the option '--accept-term' when creating VM/VMSS) (Y/n):
+```
+
+### [Portal](#tab/portal2)
+
+1. Type **virtual machines** in the search box.
+1. Under **Services**, select **Virtual machines**.
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group or select one from the drop-down.
+1. Under **Instance details**, type a name for the **Virtual machine name**.
+1. For **Security type**, make sure *Standard* is selected.
+1. For your **Image**, select **See all images**. The **Select an image** page will open.
+ :::image type="content" source="media/shared-image-galleries/see-all-images.png" alt-text="Screenshot showing the link to select to see more image options.":::
+1. In the left menu, under **Other Items**, select **Community images (PREVIEW)**. The **Other Items | Community Images (PREVIEW)** page will open.
+ :::image type="content" source="media/shared-image-galleries/community.png" alt-text="Screenshot showing where to select community gallery images.":::
+1. Select an image from the list. Make sure that the **OS state** is *Specialized*. If you want to use a generalized image instead, see [Create a VM using a generalized image version](vm-generalized-image-version.md). Depending on the image you choose, the **Region** the VM will be created in will change to match the image.
+1. Complete the rest of the options and then select the **Review + create** button at the bottom of the page.
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
+ **Next steps**
-You can also create Azure Compute Gallery resource using templates. There are several Azure Quickstart Templates available:
+You can also create an Azure Compute Gallery resource using templates. There are several quickstart templates available:
- [Create an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-create/) - [Create an Image Definition in an Azure Compute Gallery](https://azure.microsoft.com/resources/templates/sig-image-definition-create/)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 03/30/2022 Last updated : 04/26/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- April 26, 2022: Changes in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to add the Azure Identity Python module to the installation instructions for the Azure Fence Agent
- March 30, 2022: Adding information that Red Hat Gluster Storage is being phased out in [GlusterFS on Azure VMs on RHEL](./high-availability-guide-rhel-glusterfs.md)
- March 30, 2022: Correcting DNN support for older releases of SQL Server in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md)
- March 28, 2022: Formatting changes and reorganizing ILB configuration instructions in: [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md), [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md), [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md), [HA for SAP NW on Azure VMs on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md), [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md), [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md)
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 09/08/2021 Last updated : 04/26/2022
Be sure to assign the role for both cluster nodes.
>[!IMPORTANT]
> The installed version of the *fence-agents* package must be 4.4.0 or later to benefit from the faster failover times with the Azure fence agent, when a cluster node is fenced. If you're running an earlier version, we recommend that you update the package.
-1. **[A]** Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5.
+1. **[A]** Install the Azure Python SDK and the Azure Identity Python module.
+
+ Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5:
   <pre><code># You might need to activate the public cloud extension first
   SUSEConnect -p sle-module-public-cloud/12/x86_64
   sudo zypper install python-azure-mgmt-compute
+ sudo zypper install python-azure-identity
   </code></pre>

   Install the Azure Python SDK on SLES 15 or later:

   <pre><code># You might need to activate the public cloud extension first. In this example, the SUSEConnect command is for SLES 15 SP1
   SUSEConnect -p sle-module-public-cloud/15.1/x86_64
   sudo zypper install python3-azure-mgmt-compute
+ sudo zypper install python3-azure-identity
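+   # Optional, hypothetical check (not in the original instructions): confirm both Python modules import
+   sudo python3 -c "import azure.identity, azure.mgmt.compute"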
   </code></pre>

   >[!IMPORTANT]
virtual-network Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/resource-health.md
+
+ Title: Azure Virtual Network NAT Resource Health
+
+description: Understand how to use resource health for Virtual Network NAT.
+++
+# Customer intent: As an IT administrator, I want to understand how to use resource health to monitor Virtual Network NAT.
+ Last updated : 04/25/2022++
+# Azure Virtual Network NAT Resource Health
+
+This article provides guidance on how to use Azure Resource Health to monitor and troubleshoot connectivity issues with your NAT gateway resource. Resource health provides an automatic check to keep you informed on the current availability of your NAT gateway.
+
+## Resource health status
+
+[Azure Resource Health](/azure/service-health/overview) provides information about the health of your NAT gateway resource. You can use resource health and Azure Monitor notifications to keep you informed on the availability and health status of your NAT gateway resource. Resource health can help you quickly assess whether an issue is due to a problem in your Azure infrastructure or because of an Azure platform event. The resource health of your NAT gateway is evaluated by measuring the data-path availability of your NAT gateway endpoint.
+
+You can view your NAT gateway's health status on the **Resource Health** page, found under **Support + troubleshooting** for your NAT gateway resource.
+
+The health of your NAT gateway resource is displayed as one of the following statuses:
+
+| Resource health status | Description |
+|||
+| Available | Your NAT gateway resource is healthy and available. |
+| Degraded | Your NAT gateway resource has platform or user-initiated events impacting its health. The data-path availability metric has reported less than 80% but greater than 25% health for the last 15 minutes. |
+| Unavailable | Your NAT gateway resource is not healthy. The data-path availability metric has reported less than 25% health for the past 15 minutes. Your NAT gateway resource may be unavailable for outbound connectivity. |
+| Unknown | The health status for your NAT gateway resource hasn't been updated or hasn't received information for data-path availability for more than 5 minutes. This state should be transient and will reflect the correct status as soon as data is received. |
+
+For more information about Azure Resource Health, see [Resource Health overview](/azure/service-health/resource-health-overview).
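+
+If you prefer to check availability programmatically, the same status that the portal shows is exposed through the Resource Health REST API. The following `az rest` call is a sketch rather than part of this article; the resource names are placeholders, and the api-version may need adjusting:
+
+```azurecli
+# Query the current availability status of a NAT gateway
+az rest --method get \
+   --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/natGateways/<nat-gateway-name>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01"
+```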
+
+To view the health of your NAT gateway resource:
+
+1. From the NAT gateway resource page, under **Support + troubleshooting**, select **Resource health**.
+
+2. In the health history section, select the drop-down arrows next to dates to get more information on health history events of your NAT gateway resource. You can view up to 30 days of history in the health history section.
+
+3. Select **+ Add resource health alert** at the top of the page to set up an alert for a specific health status of your NAT gateway resource. (A CLI sketch for creating a similar alert follows these steps.)
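+
+As an alternative to the portal button in step 3, a resource health alert can be created as an activity log alert scoped to the NAT gateway. This is a sketch with placeholder resource IDs, not the article's own procedure:
+
+```azurecli
+# Create an activity log alert that fires on resource health events for the NAT gateway
+az monitor activity-log alert create \
+   --name nat-gateway-health-alert \
+   --resource-group <resource-group> \
+   --scope <nat-gateway-resource-id> \
+   --condition "category=ResourceHealth" \
+   --action-group <action-group-resource-id>
+```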
+
+## Next steps
+
+- Learn about [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)
+- Learn about [metrics and alerts for NAT gateway](/azure/virtual-network/nat-gateway/nat-metrics)
+- Learn about [troubleshooting NAT gateway resources](/azure/virtual-network/nat-gateway/troubleshoot-nat)
+- Learn about [Azure resource health](/azure/service-health/resource-health-overview)
virtual-wan Monitor Point To Site Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-point-to-site-connections.md
The output stored in the storage account is fetched from within the workbook by
| Name | Value|
|||
|"resourcegroup" | your resource group |
- | "sasuri"| your resource group|
- | "storageaccountname"| `@Microsoft.KeyVault(SecretUri=https://\<keyvaultname>.vault.azure.net/secrets/sasuri/\<version>)`-->update accordingly after keyvault is created in next section. |
- | "storagecontainer"| your storage container name|
+ | "sasuri"| `@Microsoft.KeyVault(SecretUri=https://\<keyvaultname>.vault.azure.net/secrets/sasuri/\<version>)`<br />--> update accordingly after keyvault is created in next section.|
|"subscription" |your subscription ID | |"tenantname" | your tenant ID | | "vpngw"|This name is something like \<guid>-eastus-ps2-gw. You can get this from the vWAN HUB User VPN settings. |
The output stored in the storage account is fetched from within the workbook by
    $tenantname = $env:appsetting_tenantname
    $subscription = $env:appsetting_subscription
    $resourceGroup = $env:appsetting_resourcegroup
- $storageAccountName = $env:appsetting_storageaccountname
- $vpnstatsfile = $env:appsetting_vpnstatsfile
    $vpngw = $env:appsetting_vpngw
    $sasuri = $env:appsetting_sasuri
+
+ Write-Host "Connecting to Managed Identity..."
    connect-azaccount -tenant $tenantname -identity -subscription $subscription
- Get-AzP2sVpnGatewayDetailedConnectionHealth -name $vpngw -ResourceGroupName $resourceGroup -OutputBlobSasUrl
- $sasuri
+
+ Write-Host "Executing File Update..."
+ Get-AzP2sVpnGatewayDetailedConnectionHealth -name $vpngw -ResourceGroupName $resourceGroup -OutputBlobSasUrl $sasuri
+
+ Write-Host "Function Execution Completed!"
```

1. Navigate back to the **Function App** page and select **App Service Editor** in the left panel under **Development Tools**. Then, select **Go -->**.
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
$exclusion2 = New-AzApplicationGatewayFirewallExclusionConfig `
   -SelectorMatchOperator "StartsWith" `
   -Selector "user"
```
-So if the URL `http://www.contoso.com/?user%281%29=fdafdasfda` is passed to the WAF, it won't evaluate the string **fdafdasfda**, but it will still evaluate the parameter name **user%281%29**.
+So if the URL `http://www.contoso.com/?user%3c%3e=joe` is passed to the WAF, it won't evaluate the string **joe**, but it will still evaluate the parameter name **user%3c%3e**.
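
To actually apply exclusions like `$exclusion2`, you pass them to the gateway's WAF configuration. The following PowerShell sketch assumes an existing gateway, that `$exclusion2` was built with a valid `-MatchVariable` (such as `RequestArgNames`), and an OWASP 3.2 rule set; the resource names are illustrative:

```azurepowershell
# Apply the exclusion to the application gateway's WAF configuration (illustrative values)
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $gw `
   -Enabled $true -FirewallMode "Prevention" `
   -RuleSetType "OWASP" -RuleSetVersion "3.2" `
   -Exclusion $exclusion2
Set-AzApplicationGateway -ApplicationGateway $gw
```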
## Next steps