Updates from: 04/20/2023 01:15:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 04/18/2023 Last updated : 04/19/2023
The following example describes the end-to-end user provisioning solution archit
The following key steps are indicated in the diagram:   1. **HR team** performs the transactions in the cloud HR app tenant.
-2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes that need to be processed for sync with Active Directory.
+2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes to process for sync with Active Directory.
3. **Azure AD provisioning service** invokes the Azure AD Connect provisioning agent with a request payload that contains Active Directory account create, update, enable, and disable operations. 4. **Azure AD Connect provisioning agent** uses a service account to manage Active Directory account data. 5. **Azure AD Connect** runs delta [sync](../hybrid/how-to-connect-sync-whatis.md) to pull updates in Active Directory.
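As a side note on step 5, the delta sync that Azure AD Connect runs can also be triggered manually while you validate the flow. A minimal sketch, assuming you're on the Azure AD Connect server where the ADSync module is installed:

```powershell
# Sketch only: run on the Azure AD Connect server (the ADSync module ships with Azure AD Connect).
Import-Module ADSync

# Trigger a delta synchronization cycle, the same cycle type the scheduler runs.
Start-ADSyncSyncCycle -PolicyType Delta
```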
For high availability, you can deploy more than one Azure AD Connect provisionin
## Design HR provisioning app deployment topology
-Depending on the number of Active Directory domains involved in the inbound user provisioning configuration, you may consider one of the following deployment topologies. Each topology diagram uses an example deployment scenario to highlight configuration aspects. Use the example that closely resembles your deployment requirement to determine the configuration that will meet your needs.
+Depending on the number of Active Directory domains involved in the inbound user provisioning configuration, you may consider one of the following deployment topologies. Each topology diagram uses an example deployment scenario to highlight configuration aspects. Use the example that closely resembles your deployment requirement to determine the configuration that meets your needs.
-### Deployment topology 1: Single app to provision all users from Cloud HR to single on-premises Active Directory domain
+### Deployment topology one: Single app to provision all users from Cloud HR to single on-premises Active Directory domain
-This is the most common deployment topology. Use this topology, if you need to provision all users from Cloud HR to a single AD domain and same provisioning rules apply to all users.
+Deployment topology one is the most common. Use this topology if you need to provision all users from Cloud HR to a single AD domain and the same provisioning rules apply to all users.
:::image type="content" source="media/plan-cloud-hr-provision/topology-1-single-app-with-single-ad-domain.png" alt-text="Screenshot of single app to provision users from Cloud HR to single AD domain" lightbox="media/plan-cloud-hr-provision/topology-1-single-app-with-single-ad-domain.png":::
This is the most common deployment topology. Use this topology, if you need to p
* When configuring the provisioning app, select the AD domain from the dropdown of registered domains. * If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
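The skip out of scope deletions flag mentioned above is set on the provisioning app through Microsoft Graph; the exact endpoint and payload are described in the linked article, so treat the following PowerShell sketch (using `Invoke-MgGraphRequest` and a placeholder service principal ID) as an assumption of that flow rather than a definitive call:

```powershell
# Sketch only: the /synchronization/secrets endpoint and payload shape are assumptions here;
# confirm them against the skip-out-of-scope-deletions article before using.
Connect-MgGraph -Scopes "Directory.ReadWrite.All"

$body = @{ value = @( @{ key = "SkipOutOfScopeDeletions"; value = "True" } ) } | ConvertTo-Json -Depth 3

# <servicePrincipalId> is the object ID of the HR2AD provisioning app's service principal (placeholder).
Invoke-MgGraphRequest -Method PUT `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/<servicePrincipalId>/synchronization/secrets" `
    -Body $body -ContentType "application/json"
```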
-### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
+### Deployment topology two: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
This topology supports business requirements where attribute mapping and provisioning logic differ based on user type (employee/contractor), user location or user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning based on division or country.
This topology supports business requirements where attribute mapping and provisi
**Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover. * Create an HR2AD provisioning app for each distinct user set that you want to provision.
-* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
-* To handle the scenario where managers references need to be resolved across distinct user sets (e.g. contractors reporting to managers who are employees), you can create a separate HR2AD provisioning app for updating only the *manager* attribute. Set the scope of this app to all users.
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define which users each app processes.
+* In the scenario where manager references need to be resolved across distinct user sets, create a separate HR2AD provisioning app. For example, contractors might report to managers who are employees. Use the separate app to update only the *manager* attribute. Set the scope of this app to all users.
* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations. > [!NOTE] > If you do not have a test AD domain and use a TEST OU container in AD, then you may use this topology to create two separate apps *HR2AD (Prod)* and *HR2AD (Test)*. Use the *HR2AD (Test)* app to test your attribute mapping changes before promoting it to the *HR2AD (Prod)* app.
-### Deployment topology 3: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility)
+### Deployment topology three: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility)
-Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* doesn't require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
+Use topology three to manage multiple independent child AD domains belonging to the same forest. Make sure that managers always exist in the same domain as the user. Also make sure that your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName*, and *mail* don't require a forest-wide lookup. Topology three offers the flexibility of delegating the administration of each provisioning job by domain boundary.
-For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
+For example: In the diagram, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
:::image type="content" source="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains" lightbox="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png":::
For example: In the diagram below, the provisioning apps are set up for each geo
* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
-### Deployment topology 4: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
+### Deployment topology four: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
-Use this topology to manage multiple independent child AD domains belonging to the same forest, if a user's manager may exist in the different domain and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* requires a forest-wide lookup.
+Use topology four to manage multiple independent child AD domains belonging to the same forest when a user's manager may exist in a different domain and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName*, and *mail* require a forest-wide lookup.
-For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
+For example: In the diagram, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
:::image type="content" source="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png":::
For example: In the diagram below, the provisioning apps are set up for each geo
Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there's no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform a forest-wide uniqueness check.
-For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
+For example: In the diagram, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
:::image type="content" source="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of single app to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png":::
Use this topology if your IT infrastructure has disconnected/disjoint AD forests
### Deployment topology 7: Separate apps to provision distinct users from multiple Cloud HR to disconnected on-premises Active Directory forests
-In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
+In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend this topology if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
:::image type="content" source="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png" alt-text="Screenshot of separate apps to provision users from multiple Cloud HR to disconnected AD forests" lightbox="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png":::
active-directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md
+
+ Title: Integrate with Azure Active Directory Application Proxy on an NDES server
+description: Guidance on deploying an Azure Active Directory Application Proxy to protect your NDES server.
+ Last updated : 04/19/2023
+# Integrate with Azure Active Directory Application Proxy on a Network Device Enrollment Service (NDES) server
+
+Azure Active Directory (AD) Application Proxy lets you publish applications inside your network. These applications are ones such as SharePoint sites, Microsoft Outlook Web App, and other web applications. It also provides secure access to users outside your network via Azure.
+
+If you're new to Azure AD Application Proxy and want to learn more, see [Remote access to on-premises applications through Azure AD Application Proxy](application-proxy.md).
+
+Azure AD Application Proxy is built on Azure. It gives you a massive amount of network bandwidth and server infrastructure for better protection against distributed denial-of-service (DDOS) attacks and superb availability. Furthermore, there's no need to open external firewall ports to your on-premises network and no DMZ server is required. All traffic is originated inbound. For a complete list of outbound ports, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](./application-proxy-add-on-premises-application.md#prepare-your-on-premises-environment).
+
+> Azure AD Application Proxy is a feature that is available only if you are using the Premium or Basic editions of Azure Active Directory. For more information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+> If you have Enterprise Mobility Suite (EMS) licenses, you are eligible to use this solution.
+> The Azure AD Application Proxy connector only installs on Windows Server 2012 R2 or later. This is also a requirement of the NDES server.
+
+## Install and register the connector on the NDES server
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain.
+1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy.
+1. In the left navigation panel, select **Azure Active Directory**.
+1. Under **Manage**, select **Application proxy**.
+1. Select **Download connector service**.
+
+ ![Download connector service to see the Terms of Service](./media/app-proxy-protect-ndes/application-proxy-download-connector-service.png)
+
+1. Read the Terms of Service. When you're ready, select **Accept terms & Download**.
+1. Copy the Azure AD Application Proxy connector setup file to your NDES server.
+ > You can install the connector on any server within your corporate network with access to NDES. You don't have to install it on the NDES server itself.
+1. Run the setup file, such as *AADApplicationProxyConnectorInstaller.exe*. Accept the software license terms.
+1. During the install, you're prompted to register the connector with the Application Proxy in your Azure AD directory.
+ * Provide the credentials for a global or application administrator in your Azure AD directory. The Azure AD global or application administrator credentials may be different from your Azure credentials in the portal.
+
+ > [!NOTE]
+ > The global or application administrator account used to register the connector must belong to the same directory where you enable the Application Proxy service.
+ >
+ > For example, if the Azure AD domain is *contoso.com*, the global/application administrator should be `admin@contoso.com` or another valid alias on that domain.
+
+ * If Internet Explorer Enhanced Security Configuration is turned on for the server where you install the connector, the registration screen might be blocked. To allow access, follow the instructions in the error message, or turn off Internet Explorer Enhanced Security during the install process.
+ * If connector registration fails, see [Troubleshoot Application Proxy](application-proxy-troubleshoot.md).
+1. At the end of the setup, a note is shown for environments with an outbound proxy. To configure the Azure AD Application Proxy connector to work through the outbound proxy, run the provided script, such as `C:\Program Files\Microsoft AAD App Proxy connector\ConfigureOutBoundProxy.ps1`.
+1. On the Application proxy page in the Azure portal, the new connector is listed with a status of *Active*, as shown in the following example:
+
+ ![The new Azure AD Application Proxy connector shown as active in the Azure portal](./media/app-proxy-protect-ndes/connected-app-proxy.png)
+
+ > [!NOTE]
+ > To provide high availability for applications authenticating through the Azure AD Application Proxy, you can install connectors on multiple VMs. Repeat the same steps listed in the previous section to install the connector on other servers joined to the Azure AD DS managed domain.
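After installation, you can also confirm from the server itself that the connector services are running. A small sketch; the display-name pattern is an assumption about how the connector services are named on your server, so adjust it to match what you see in the Services console:

```powershell
# Sketch only: the "Microsoft AAD Application Proxy*" display-name pattern is an assumption.
Get-Service -DisplayName "Microsoft AAD Application Proxy*" |
    Select-Object DisplayName, Status
```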
+
+1. After successful installation, go back to the Azure portal.
+
+1. Select **Enterprise applications**.
+
+   ![Screenshot of selecting Enterprise applications in the Azure portal](./media/app-proxy-protect-ndes/enterprise-applications.png)
+
+1. Select **+New Application**, and then select **On-premises application**.
+
+1. On the **Add your own on-premises application** page, configure the following fields:
+
+ * **Name**: Enter a name for the application.
+ * **Internal Url**: Enter the internal URL/FQDN of your NDES server on which you installed the connector.
+   * **Pre Authentication**: Select **Passthrough**. It's not possible to use any form of pre authentication. The protocol used for Certificate Requests (SCEP) doesn't provide such an option.
+ * Copy the provided **External URL** to your clipboard.
+
+1. Select **+Add** to save your application.
+
+1. Test whether you can access your NDES server via the Azure AD Application Proxy by pasting the external URL you copied earlier into a browser (a scripted check is also sketched at the end of this section). You should see a default IIS welcome page.
+
+1. As a final test, add the *mscep.dll* path to the existing URL you pasted in the previous step:
+
+ `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll`
+
+1. You should see an **HTTP Error 403 - Forbidden** response.
+
+1. Change the NDES URL that is provided to devices (via Microsoft Intune). This change can be made either in Microsoft Configuration Manager or the Microsoft Intune admin center.
+
+ * For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge.
+ * For Intune standalone, either edit or create a new SCEP policy and add the new URL.
+
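If you prefer a scripted check over the browser test in the steps above, a rough probe of the published NDES endpoint can confirm the expected 403 response. The URL below is a placeholder for your own app's external URL:

```powershell
# Placeholder URL: substitute the External URL copied from your Application Proxy app.
$ndesUrl = "https://contoso-ndes.msappproxy.net/certsrv/mscep/mscep.dll"

try {
    Invoke-WebRequest -Uri $ndesUrl -UseBasicParsing -ErrorAction Stop
}
catch {
    # A 403 (Forbidden) from NDES lands here; print the status code that came back.
    $_.Exception.Response.StatusCode.value__
}
```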
+## Next steps
+
+With the Azure AD Application Proxy integrated with NDES, publish applications for users to access. For more information, see [publish applications using Azure AD Application Proxy](./application-proxy-add-on-premises-application.md).
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Previously updated : 03/22/2023 Last updated : 04/19/2023
To get started with SSPR, complete the following tutorial:
> [!div class="nextstepaction"] > [Tutorial: Enable self-service password reset (SSPR)](tutorial-enable-sspr.md)-
-The following articles provide additional information regarding password reset through Azure AD:
-
-[Authentication]: ./media/concept-sspr-howitworks/manage-authentication-methods-for-password-reset.png "Azure AD authentication methods available and quantity required"
-[Registration]: ./media/concept-sspr-howitworks/configure-registration-options.png "Configure SSPR registration options in the Azure portal"
-[Writeback]: ./media/concept-sspr-howitworks/on-premises-integration.png "On-premises integration for SSPR in the Azure portal"
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
# How to enable Microsoft Authenticator Lite for Outlook mobile (preview)
->[!NOTE]
->Rollout has not yet completed across Outlook applications. If this feature is enabled in your tenant, your users may not yet be prompted for the experience. To minimize user disruption, we recommend enabling this feature when the rollout completes.
Microsoft Authenticator Lite is another surface for Azure Active Directory (Azure AD) users to complete multifactor authentication by using push notifications or time-based one-time passcodes (TOTP) on their Android or iOS device. With Authenticator Lite, users can satisfy a multifactor authentication requirement from the convenience of a familiar app. Authenticator Lite is currently enabled in [Outlook mobile](https://www.microsoft.com/microsoft-365/outlook-mobile-for-android-and-ios).
Users receive a notification in Outlook mobile to approve or deny sign-in, or th
| Operating system | Outlook version | |:-:|:-:|
- |Android | 4.2309.1 |
- |iOS | 4.2309.0 |
+ |Android | 4.2310.1 |
+ |iOS | 4.2312.1 |
## Enable Authenticator Lite
By default, Authenticator Lite is [Microsoft managed](concept-authentication-def
To enable Authenticator Lite in the Azure portal, complete the following steps:
- 1. In the Azure portal, click Security > Authentication methods > Microsoft Authenticator.
+ 1. In the Azure portal, click Azure Active Directory > Security > Authentication methods > Microsoft Authenticator.
+ In the Entra admin center, in the sidebar, select Azure Active Directory > Protect & Secure > Authentication methods > Microsoft Authenticator.
2. On the Enable and Target tab, click Yes and All users to enable the policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
If you're using the [MSAL library](/dotnet/api/microsoft.identity.client) direct
| Extension method | Description | | - | | | [AddInMemoryTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addinmemorytokencaches) | Creates a temporary cache in memory for token storage and retrieval. In-memory token caches are faster than other cache types, but their tokens aren't persisted between application restarts, and you can't control the cache size. In-memory caches are good for applications that don't require tokens to persist between app restarts. Use an in-memory token cache in apps that participate in machine-to-machine auth scenarios like services, daemons, and others that use [AcquireTokenForClient](/dotnet/api/microsoft.identity.client.acquiretokenforclientparameterbuilder) (the client credentials grant). In-memory token caches are also good for sample applications and during local app development. Microsoft.Identity.Web versions 1.19.0+ share an in-memory token cache across all application instances.
-| [AddSessionTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addsessiontokencaches) | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie becomes too large.
+| [AddSessionTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilderextension.addsessiontokencaches) | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie becomes too large.
| `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation. It enables you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
Certificates and secrets, also known as credentials, are a vital part of an appl
Consider the following guidance related to certificates and secrets: - Always use [certificate credentials](./active-directory-certificate-credentials.md) whenever possible and don't use password credentials, also known as *secrets*. While it's convenient to use password secrets as a credential, when possible use x509 certificates as the only credential type for getting tokens for an application.
+ - Configure [application authentication method policies](/graph/api/resources/applicationauthenticationmethodpolicy) to govern the use of secrets by limiting their lifetimes or blocking their use altogether.
 - Use Key Vault with [managed identities](../managed-identities-azure-resources/overview.md) to manage credentials for an application. - If an application is used only as a Public Client App (allows users to sign in using a public endpoint), make sure that there are no credentials specified on the application object. - Review the credentials used in applications for freshness of use and their expiration. An unused credential on an application can result in a security breach. Roll over credentials frequently and don't share credentials across applications. Don't have many credentials on one application.
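To support the credential review described above, a hedged Microsoft Graph PowerShell sketch (assuming the Microsoft.Graph.Applications module and read permission on applications) can list which app registrations still carry password secrets and when those secrets expire:

```powershell
# Sketch only: list applications that still have password credentials (secrets) and their expiry.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgApplication -All |
    Where-Object { $_.PasswordCredentials.Count -gt 0 } |
    ForEach-Object {
        foreach ($secret in $_.PasswordCredentials) {
            [pscustomobject]@{
                App        = $_.DisplayName
                SecretId   = $secret.KeyId
                ExpiresUtc = $secret.EndDateTime
            }
        }
    }
```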
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
Previously updated : 01/26/2023 Last updated : 03/17/2023
Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible. For a guide on creating a new compatible Logic App via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md).
+## Determine type of token security of your custom task extension
+
+Before configuring your Azure Logic App custom extension for use with Lifecycle Workflows, you must first determine its token security type. The two token security types are:
+
+- Normal
+- Proof of Possession (POP)
++
+To determine the security token type of your custom task extension, check the **Custom extensions (Preview)** page:
+++
+> [!NOTE]
+> New custom task extensions will only have the Proof of Possession (POP) token security type. Only task extensions created before the inclusion of the Proof of Possession token security type will have a type of Normal.
+ ## Configure existing Logic Apps for LCW use Making an Azure Logic app compatible to run with the **Custom Task Extension** requires the following steps: - Configure the logic app trigger-- Configure the callback action (only applicable to the callback scenario)-- Enable system assigned managed identity.-- Configure AuthZ policies.
+- Configure the callback action (Only applicable to the callback scenario.)
+- Enable system assigned managed identity (Always required for Normal security token type extensions. This is also the default for callback scenarios with custom task extensions. For more information on this, and other, custom task extension deployment scenarios, see: [Custom task extension deployment scenarios](lifecycle-workflow-extensibility.md#custom-task-extension-deployment-scenarios).)
+- Configure AuthZ policies
-To configure those you'll follow these steps:
+To configure these options, follow these steps:
1. Open the Azure Logic App you want to use with Lifecycle Workflow. Logic Apps may greet you with an introduction screen, which you can close with the X in the upper right corner.
To configure those you'll follow these steps:
1. Select Save.
-1. For Logic Apps authorization policy, we'll need the managed identities **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure portal** to find the required Application ID.
+## Configure authorization policy for custom task extension with POP security token type
+If the security token type is **Proof of Possession (POP)** for your custom task extension, you'd set the authorization policy by following these steps:
+
+1. For the Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure AD Portal** to find the required Application ID. A PowerShell lookup is also sketched at the end of this section.
1. Go back to the logic app you created, and select **Authorization**.
-1. Create two authorization policies based on the tables below:
+1. Create two authorization policies based on these tables:
- Policy name: AzureADLifecycleWorkflowsAuthPolicy
+ Policy name: POP-Policy
+
+ Policy type: (Preview) AADPOP
+
+ |Claim |Value |
+ |||
+ |Issuer | https://sts.windows.net/(Tenant ID)/ |
+ |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
+ |m | POST |
+ |u | management.azure.com |
+ |p | /subscriptions/(subscriptionId)/resourceGroups/(resourceGroupName)/providers/Microsoft.Logic/workflows/(LogicApp name) |
++
+1. Save the Authorization policy.
++
+> [!CAUTION]
+> Please pay attention to the details as minor differences can lead to problems later.
+- For Issuer, ensure you included the slash after your Tenant ID.
+- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same.
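As a side note on the Application ID lookup in step 1, a hedged Microsoft Graph PowerShell lookup (assuming the Microsoft.Graph.Applications module) resolves the managed identity's Application ID from its Object ID without searching the portal:

```powershell
# Sketch only: resolve the managed identity's Application ID (AppId) from its Object ID.
Connect-MgGraph -Scopes "Application.Read.All"

# <managedIdentityObjectId> is the Object ID shown on the Logic App's Identity blade (placeholder).
Get-MgServicePrincipal -ServicePrincipalId "<managedIdentityObjectId>" |
    Select-Object DisplayName, AppId
```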
+
+## Configure authorization policy for custom task extension with normal security token type
+
+If the security token type is **Normal** for your custom task extension, you'd set the authorization policy by following these steps:
+
+1. For the Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure AD Portal** to find the required Application ID.
+
+1. Go back to the logic app you created, and select **Authorization**.
+
+1. Create two authorization policies based on these tables:
+
+ Policy name: AzureADLifecycleWorkflowsAuthPolicy
+
+ Policy type: AAD
|Claim |Value | |||
To configure those you'll follow these steps:
|Audience | Application ID of your Logic Apps Managed Identity | |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
- Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App
+ Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App
+
+ Policy type: AAD
|Claim |Value | |||
To configure those you'll follow these steps:
|azp | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | 1. Save the Authorization policy.
-> [!NOTE]
-> Due to a current bug in the Logic Apps UI you may have to save the authorization policy after each claim before adding another.
> [!CAUTION] > Please pay attention to the details as minor differences can lead to problems later.
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
These triggers to Logic Apps are controlled in a tab within access package polic
1. The **Extension Configuration** tab allows you to decide if your extension has "launch and continue" or "launch and wait" behavior. With "Launch and continue" the linked policy action on the access package, such as a request, triggers the Logic App attached to the custom extension. After the Logic App is triggered, the entitlement management process associated with the access package will continue. For "Launch and wait", we'll pause the associated access package action until after the Logic App linked to the extension completes its task, and a resume action is sent by the admin to continue the process. If no response is sent back in the wait time period defined, this process would be considered a failure. This process is further described below in its own section [Configuring custom extensions that pause entitlement management processes](entitlement-management-logic-apps-integration.md#configuring-custom-extensions-that-pause-entitlement-management-processes).
-1. In the **Details** tab, choose whether you'd like to use an existing Logic App. Selecting Yes in the field "Create new logic app" (default) creates a new blank Logic App that is already linked to this custom extension. Regardless, you need to provide:
+1. In the **Details** tab, choose whether you'd like to use an existing consumption plan Logic App. Selecting Yes in the field "Create new logic app" (default) creates a new blank consumption plan Logic App that is already linked to this custom extension. Regardless, you need to provide:
1. An Azure subscription.
A new update to the custom extensions feature is the ability to pause the access
This pause process allows admins to have control of workflows they'd like to run before continuing with access lifecycle tasks in entitlement management. The only exception to this is if a timeout occurs. Launch and wait processes require a timeout of up to 14 days noted in minutes, hours, or days. If a resume response isn't sent back to entitlement management by the time the "timeout" period elapses, the entitlement management request workflow process pauses.
-The admin is responsible for configuring an automated process that is able to send the API **resume request** payload back to entitlement management, once the Logic App workflow has completed. To send back the resume request payload, follow the instructions here in the graph API documents. See information here on the [resume request](/graph/api/accesspackageassignmentrequest-resume)
+The admin is responsible for configuring an automated process that is able to send the API **resume request** payload back to entitlement management, once the Logic App workflow has completed. To send back the resume request payload, follow the instructions here in the graph API documents. See information here on the [resume request](/graph/api/accesspackageassignmentrequest-resume).
Specifically, when an access package policy has been enabled to call out a custom extension and the request processing is waiting for the callback from the customer, the customer can initiate a resume action. It's performed on an [accessPackageAssignmentRequest](/graph/api/resources/accesspackageassignmentrequest) object whose **requestStatus** is in a **WaitingForCallback** state.
The resume request can be sent back for the following stages:
microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestApproved microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestGranted
-Microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestRemoved
+microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestRemoved
`` The following flow diagram shows the entitlement management callout to Logic Apps workflow:
+The flow diagram shows:
+
+1. The user creates a custom endpoint able to receive the call from the Identity Service
+1. The identity service makes a test call to confirm the endpoint can be called by the Identity Service
+1. The User calls Graph API to request to add a user to an access package
+1. The Identity Service is added to the queue triggering the backend workflow
+1. Entitlement Management Service request processing calls the logic app with the request payload
+1. Workflow expects the accepted code
+1. The Entitlement Management Service waits for the blocking custom action to resume
+1. The customer system calls the request resume API to the identity service to resume processing the request
+1. The identity service adds the resume request message to the Entitlement Management Service queue resuming the backend workflow
+1. The Entitlement Management Service is resumed from the blocked state
+ An example of a resume request payload is: ``` http
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
This article shows you how to assign users and groups to an enterprise applicati
When you assign a group to an application, only users in the group will have access. The assignment doesn't cascade to nested groups.
-Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
+Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups and Microsoft 365 groups whose `SecurityEnabled` setting is set to `True` only. Nested group memberships aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
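To check whether a particular group qualifies under this rule, a quick Microsoft Graph PowerShell query (assuming the Microsoft.Graph.Groups module) can read its `SecurityEnabled` setting before you assign it:

```powershell
# Sketch only: confirm a group is security-enabled before assigning it to the application.
Connect-MgGraph -Scopes "Group.Read.All"

# <groupId> is the object ID of the group you plan to assign (placeholder).
Get-MgGroup -GroupId "<groupId>" -Property DisplayName, SecurityEnabled, GroupTypes |
    Select-Object DisplayName, SecurityEnabled, GroupTypes
```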
For greater control, certain types of enterprise applications can be configured to require user assignment. For more information on requiring user assignment for an app, see [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app).
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Previously updated : 10/12/2022 Last updated : 04/19/2023
To reduce the risk of malicious applications attempting to trick users into gran
To configure user consent, you need: - A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Global Administrator or Privileged Administrator role.
+- A Global Administrator role.
## Configure user consent settings
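As a scripted alternative to the portal, the user consent setting is backed by the tenant's authorization policy, which can be updated through Microsoft Graph PowerShell. The sketch below assigns one of the built-in permission grant policies; the policy ID shown is illustrative, so pick the built-in policy that matches your risk posture:

```powershell
# Sketch only: assign a built-in permission grant policy to govern user consent.
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
    PermissionGrantPoliciesAssigned = @("ManagePermissionGrantsForSelf.microsoft-user-default-low")
}
```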
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 3/15/2023 Last updated : 4/14/2023
Status code: 201
"type": "Microsoft.Authorization/RoleAssignmentScheduleRequests" } ````
-## Activate a role with PowerShell
-
-There is also an option to activate Privileged Identity Management using PowerShell. You may find more details as documented in the article [PowerShell for Azure AD roles PIM](powershell-for-azure-ad-roles.md).
-
-The following is a sample script for how to activate Azure resource roles using PowerShell.
-
-```powershell
-$managementgroupID = "<management group ID" # Tenant Root Group
-$guid = (New-Guid)
-$startTime = Get-Date -Format o
-$userObjectID = "<user object ID"
-$RoleDefinitionID = "b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor
-$scope = "/providers/Microsoft.Management/managementGroups/$managementgroupID"
-New-AzRoleAssignmentScheduleRequest -Name $guid -Scope $scope -ExpirationDuration PT8H -ExpirationType AfterDuration -PrincipalId $userObjectID -RequestType SelfActivate -RoleDefinitionId /providersproviders/Microsoft.Management/managementGroups/$managementgroupID/providers/Microsoft.Authorization/roleDefinitions/$roledefinitionId -ScheduleInfoStartDateTime $startTime -Justification work
-```
## View the status of your requests
active-directory Powershell For Azure Ad Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md
- Title: PowerShell for Azure AD roles in PIM
-description: Manage Azure AD roles using PowerShell cmdlets in Azure AD Privileged Identity Management (PIM).
-------- Previously updated : 10/07/2021------
-# PowerShell for Azure AD roles in Privileged Identity Management
-
-This article tells you how to use PowerShell cmdlets to manage Azure AD roles using Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. It also tells you how to get set up with the Azure AD PowerShell module.
-
-## Installation and Setup
-
-1. Install the Azure AD Preview module
-
- ```powershell
- Install-module AzureADPreview
- ```
-
-1. Ensure that you have the required role permissions before proceeding. If you are trying to perform management tasks like giving a role assignment or updating role setting, ensure that you have either the Global administrator or Privileged role administrator role. If you are just trying to activate your own assignment, no permissions beyond the default user permissions are required.
-
-1. Connect to Azure AD.
-
- ```powershell
- $AzureAdCred = Get-Credential
- Connect-AzureAD -Credential $AzureAdCred
- ```
-
-1. Find the Tenant ID for your Azure AD organization by going to **Azure Active Directory** > **Properties** > **Directory ID**. In the cmdlets section, use this ID whenever you need to supply the resourceId.
-
- ![Find the organization ID in the properties for the Azure AD organization](./media/powershell-for-azure-ad-roles/tenant-id-for-Azure-ad-org.png)
-
-> [!Note]
-> The following sections are simple examples that can help get you up and running. You can find more detailed documentation regarding the following cmdlets at [/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management). However, you must replace "azureResources" in the providerID parameter with "aadRoles". You will also need to remember to use the Tenant ID for your Azure AD organization as the resourceId parameter.
-
-## Retrieving role definitions
-
-Use the following cmdlet to get all built-in and custom Azure AD roles in your Azure AD organization. This important step gives you the mapping between the role name and the roleDefinitionId. The roleDefinitionId is used throughout these cmdlets in order to reference a specific role.
-
-The roleDefinitionId is specific to your Azure AD organization and is different from the roleDefinitionId returned by the role management API.
-
-```powershell
-Get-AzureADMSPrivilegedRoleDefinition -ProviderId aadRoles -ResourceId 926d99e7-117c-4a6a-8031-0cc481e9da26
-```
-
-Result:
-
-![Get all roles for the Azure AD organization](./media/powershell-for-azure-ad-roles/get-all-roles-result.png)
-
-## Retrieving role assignments
-
-Use the following cmdlet to retrieve all role assignments in your Azure AD organization.
-
-```powershell
-Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26"
-```
-
-Use the following cmdlet to retrieve all role assignments for a particular user. This list is also known as "My Roles" in the Azure portal. The only difference here is that you have added a filter for the subject ID. The subject ID in this context is the user ID or the group ID.
-
-```powershell
-Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26" -Filter "subjectId eq 'f7d1887c-7777-4ba3-ba3d-974488524a9d'"
-```
-
-Use the following cmdlet to retrieve all role assignments for a particular role. The roleDefinitionId here is the ID that is returned by the previous cmdlet.
-
-```powershell
-Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26" -Filter "roleDefinitionId eq '0bb54a22-a3df-4592-9dc7-9e1418f0f61c'"
-```
-
-The cmdlets result in a list of role assignment objects shown below. The subject ID is the user ID of the user to whom the role is assigned. The assignment state could either be active or eligible. If the user is active and there is an ID in the LinkedEligibleRoleAssignmentId field, that means the role is currently activated.
-
-Result:
-
-![Retrieve all role assignments for the Azure AD organization](./media/powershell-for-azure-ad-roles/get-all-role-assignments-result.png)
-
-## Assign a role
-
-Use the following cmdlet to create an eligible assignment.
-
-```powershell
-Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'ff690580-d1c6-42b1-8272-c029ded94dec' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'adminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "dsasdsas"
-```
-
-The schedule, which defines the start and end time of the assignment, is an object that can be created like the following example:
-
-```powershell
-$schedule = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedSchedule
-$schedule.Type = "Once"
-$schedule.StartDateTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
-$schedule.endDateTime = "2020-07-25T20:49:11.770Z"
-```
-> [!Note]
-> If the value of endDateTime is set to null, it indicates a permanent assignment.
-
-## Activate a role assignment
-
-Use the following cmdlet to activate an eligible assignment in a context of a regular user:
-
-```powershell
-Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'f55a9a68-f424-41b7-8bee-cee6a442d418' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'UserAdd' -AssignmentState 'Active' -Schedule $schedule -Reason "Business Justification for the role assignment"
-```
-
-If you need to activate an eligible assignment as administrator, for the `Type` parameter, specify `adminAdd`:
-
-```powershell
-Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'f55a9a68-f424-41b7-8bee-cee6a442d418' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'adminAdd' -AssignmentState 'Active' -Schedule $schedule -Reason "Business Justification for the role assignment"
-```
-
-This cmdlet is almost identical to the cmdlet for creating a role assignment. The key difference between the cmdlets is that for the -Type parameter, activation is "userAdd" instead of "adminAdd". The other difference is that the -AssignmentState parameter is "Active" instead of "Eligible."
-
-> [!Note]
-> There are two limiting scenarios for role activation through PowerShell.
-> 1. If you require ticket system / ticket number in your role setting, there is no way to supply those as a parameter. Thus, it would not be possible to activate the role beyond the Azure portal. This feature is being rolled out to PowerShell over the next few months.
-> 1. If you require multi-factor authentication for role activation, there is currently no way for PowerShell to challenge the user when they activate their role. Instead, users will need to trigger the MFA challenge when they connect to Azure AD by following [this blog post](http://www.anujchaudhary.com/2020/02/connect-to-azure-ad-powershell-with-mfa.html) from one of our engineers. If you are developing an app for PIM, one possible implementation is to challenge users and reconnect them to the module after they receive a "MfaRule" error.
-
-## Retrieving and updating role settings
-
-Use the following cmdlet to get all role settings in your Azure AD organization.
-
-```powershell
-Get-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Filter "ResourceId eq '926d99e7-117c-4a6a-8031-0cc481e9da26'"
-```
-
-There are four main objects in the setting. Only three of these objects are currently used by PIM. The UserMemberSettings are activation settings, AdminEligibleSettings are assignment settings for eligible assignments, and the AdminmemberSettings are assignment settings for active assignments.
-
-[![Get and update role settings.](media/powershell-for-azure-ad-roles/get-update-role-settings-result.png)](media/powershell-for-azure-ad-roles/get-update-role-settings-result.png#lightbox)
-
-To update the role setting, you must get the existing setting object for a particular role and make changes to it:
-
-```powershell
-Get-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Filter "ResourceId eq 'tenant id' and RoleDefinitionId eq 'role id'"
-$settinga = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedRuleSetting
-$settinga.RuleIdentifier = "JustificationRule"
-$settinga.Setting = '{"required":false}'
-```
-
-You can then go ahead and apply the setting to one of the objects for a particular role as shown below. The ID here is the role setting ID that can be retrieved from the result of the list role settings cmdlet.
-
-```powershell
-Set-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Id 'ff518d09-47f5-45a9-bb32-71916d9aeadf' -ResourceId '3f5887ed-dd6e-4821-8bde-c813ec508cf9' -RoleDefinitionId '2387ced3-4e95-4c36-a915-73d803f93702' -UserMemberSettings $settinga
-```
-
-## Next steps
--- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Leapsome Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| Name | Source Attribute | Namespace | | | | |
- | firstname | user.givenname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | lastname | user.surname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | title | user.jobtitle | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | picture | URL to the employee's picture | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | firstname | user.givenname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | lastname | user.surname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | title | user.jobtitle | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | picture | URL to the employee's picture | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
| | | > [!Note]
active-directory Textmagic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textmagic-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| Name | Source Attribute| Namespace | | | | |
- | company | user.companyname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | firstName | user.givenname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | lastName | user.surname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
- | phone | user.telephonenumber | http://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | company | user.companyname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | firstName | user.givenname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | lastName | user.surname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
+ | phone | user.telephonenumber | https://schemas.xmlsoap.org/ws/2005/05/identity/claims |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
active-directory Hipaa Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-access-controls.md
The following table has HIPAA guidance on the automatic logoff safeguard. Find M
| Recommendation | Action | | - | - | | Create group policy | Support for devices not migrated to Azure AD and managed by Intune, [Group Policy (GPO)](../../active-directory-domain-services/manage-group-policy.md) can enforce sign out, or lock screen time for devices on AD, or in hybrid environments. |
-| Assess device management requirements | [Microsoft IntTune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. |
+| Assess device management requirements | [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. |
| Device Conditional Access policy | Implement device lock by using a conditional access policy to restrict access to [compliant](../conditional-access/concept-conditional-access-grant.md) or hybrid Azure AD joined devices. Configure [policy settings](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device).</br>For unmanaged devices, configure the [Sign-In Frequency](../conditional-access/howto-conditional-access-session-lifetime.md) setting to force users to reauthenticate. | | Configure session time out for Microsoft 365 | Review the [session timeouts](/microsoft-365/admin/manage/idle-session-timeout-web-apps) for Microsoft 365 applications and services, to amend any prolonged timeouts. | | Configure session time out for Azure portal | Review the [session timeouts for Azure portal session](../../azure-portal/set-preferences.md); implementing a timeout due to inactivity helps protect resources from unauthorized access. |
active-directory Using Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-authenticator.md
Using the Authenticator for the first time presents a set of screens that you ha
When the Microsoft Authenticator app is installed and ready, you use the public end to end demo webapp to issue your first verifiable credential onto the Authenticator.
-1. Open [end to end demo](http://woodgroveemployee.azurewebsites.net/) in your browser
+1. Open [end to end demo](https://woodgroveemployee.azurewebsites.net/) in your browser
1. Enter your First Name and Last Name and press **Next** 1. Select **Verify with True Identity** 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID.
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Subscription - MySQLReservedCapacity (Consider Database for My
### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs
-We analyzed your Database for PostgreSQL usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgresSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+We analyzed your Database for PostgreSQL usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instances, you can pre-purchase PostgreSQL Database hourly usage and save over your on-demand costs. Reserved instances are a billing benefit and automatically apply to new or existing deployments. Savings estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Title: Release notes for Azure Advisor description: A description of what's new and changed in Azure Advisor Previously updated : 01/03/2022 Last updated : 04/18/2023 # What's new in Azure Advisor? Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## April 2023
+
+### VM/VMSS right-sizing recommendations with custom lookback period
+
+Customers can now improve the relevance of recommendations to make them more actionable, resulting in additional cost savings.
+The right-sizing recommendations help optimize costs by identifying idle or underutilized virtual machines based on their CPU, memory, and network activity over the default lookback period of seven days.
+Now, with this latest update, customers can adjust the default lookback period to get recommendations based on 14, 21, 30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications).
+
+To learn more, visit [Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances).
## May 2022
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
Cluster auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
-AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions].
+AKS follows a strict supportability versioning window. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions].
+
+## Customer versus AKS-initiated auto-upgrades
+
+Customers can configure cluster auto-upgrade as described in the following guidance. These upgrades occur on the cadence the customer specifies and are recommended so that clusters remain on supported Kubernetes versions.
+
+AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain within the AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform-supported cluster to a supported version is enabled by default.
+
+For example, Kubernetes v1.25 will upgrade to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance].
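As a minimal sketch of that last step, the following command adds a basic weekly maintenance window named `default`. The resource group, cluster name, day, and start hour are placeholder values, and the `az aks maintenanceconfiguration` command group must be available in your Azure CLI version.

```azurecli-interactive
# Allow planned maintenance (including auto-upgrades) to start on Mondays at hour 1
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name default \
    --weekday Monday \
    --start-hour 1
```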
## Cluster auto-upgrade limitations
-If you're using cluster auto-upgrade, you can no longer upgrade the control plane first and then upgrade the individual node pools. Cluster auto-upgrade will always upgrade the control plane and the node pools together. There is no ability of upgrading the control plane only, and trying to run the command `az aks upgrade --control-plane-only` will raise the error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
+If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. There's no way to upgrade the control plane only, and trying to run the command `az aks upgrade --control-plane-only` raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.
+If you use the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.
## Using cluster auto-upgrade
The following upgrade channels are available:
| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*| | `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*. | `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
-| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] will be disabled by default.|
+| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default.|
> [!NOTE] > Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
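As a hedged example of selecting one of these channels on an existing cluster, the following sketch uses `az aks update` with the `--auto-upgrade-channel` parameter; the resource group and cluster names are placeholders.

```azurecli-interactive
# Enable auto-upgrade on the stable channel for an existing cluster
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel stable
```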
The Azure portal also highlights all the deprecated APIs between your current ve
## Using auto-upgrade with Planned Maintenance
-If you're using Planned Maintenance and cluster auto-upgrade, your upgrade will start during your specified maintenance window.
+If you're using Planned Maintenance and cluster auto-upgrade, your upgrade starts during your specified maintenance window.
> [!NOTE] > To ensure proper functionality, use a maintenance window of four hours or more.
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
## Best practices for cluster auto-upgrade
-The following best practices will help maximize your success when using auto-upgrade:
+Use the following best practices to help maximize your success when using auto-upgrade:
- To keep your cluster always in a supported version (that is, within the N-2 rule), choose either the `stable` or `rapid` channel.
- If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always be running the most recent node images.
The following best practices will help maximize your success when using auto-upg
- Follow [PDB best practices][pdb-best-practices]. <!-- INTERNAL LINKS -->
-[supported-kubernetes-versions]: supported-kubernetes-versions.md
-[upgrade-aks-cluster]: upgrade-cluster.md
-[planned-maintenance]: planned-maintenance.md
+[supported-kubernetes-versions]: ./supported-kubernetes-versions.md
+[upgrade-aks-cluster]: ./upgrade-cluster.md
+[planned-maintenance]: ./planned-maintenance.md
[operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets [node-image-auto-upgrade]: auto-upgrade-node-image.md
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
In addition to an AKS cluster, you'll need an Azure key vault resource that stor
The Secrets Store CSI Driver allows for the following methods to access an Azure key vault: * An [Azure Active Directory pod identity][aad-pod-identity] (preview)
-* An [Azure Active Directory workload identity][aad-workload-identity] (preview)
+* An [Azure Active Directory workload identity][aad-workload-identity]
* A user-assigned or system-assigned managed identity Follow the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods] for your chosen method.
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Most clusters are deleted upon user request; in some cases, especially where cus
No, you're unable to restore your cluster after deleting it. When you delete your cluster, the associated resource group and all its resources will also be deleted. If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you have the **Owner** or **User Access Administrator** built-in role, you can lock Azure resources to protect them from accidental deletions and modifications. For more information, see [Lock your resources to protect your infrastructure][lock-azure-resources].
+## What is platform support, and what does it include?
+
+Platform support is a reduced support plan for unsupported "N-3" version clusters. Platform support only includes Azure infrastructure support. Platform support doesn't include anything related to Kubernetes functionality and components, cluster or node pool creation, hotfixes, bug fixes, security patches, retired components, and so on. See the [platform support policy][supported-kubernetes-versions] for additional restrictions.
+
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), an open-source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](./supported-kubernetes-versions.md#kubernetes-version-support-policy) while those versions are serviced upstream. Because no more patches are produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on Kubernetes upstream.
+
+## Will AKS automatically upgrade my unsupported clusters?
+
+AKS initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain within the AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform-supported cluster to a supported version is enabled by default.
+
+For example, Kubernetes v1.25 will be upgraded to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. See [auto-upgrade][auto-upgrade-cluster] for details on automatic upgrade channels.
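To see where your cluster stands before such an upgrade happens, a quick check along the following lines (the cluster and resource group names are placeholders) lists the current version and the upgrades available to it.

```azurecli-interactive
# Show the cluster's current Kubernetes version and available upgrade targets
az aks get-upgrades \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --output table
```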
+ ## If I have pod / deployments in state 'NodeLost' or 'Unknown' can I still upgrade my cluster? You can, but we don't recommend it. Upgrades should be performed when the state of the cluster is known and healthy.
The extension **does not** require any additional outbound access to any URLs, I
<!-- LINKS - internal --> [aks-upgrade]: ./upgrade-cluster.md
+[auto-upgrade-cluster]: ./auto-upgrade-cluster.md
+[planned-maintenance]: ./planned-maintenance.md
[aks-cluster-autoscale]: ./cluster-autoscaler.md [aks-advanced-networking]: ./configure-azure-cni.md [aks-rbac-aad]: ./azure-ad-integration-cli.md
The extension **does not** require any additional outbound access to any URLs, I
[multi-node-pools]: ./use-multiple-node-pools.md [availability-zones]: ./availability-zones.md [private-clusters]: ./private-clusters.md
+[supported-kubernetes-versions]: ./supported-kubernetes-versions.md
[bcdr-bestpractices]: ./operator-best-practices-multi-region.md#plan-for-multiregion-deployment [availability-zones]: ./availability-zones.md [az-regions]: ../availability-zones/az-region.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
When you upgrade your ingress controller, you must pass a parameter to the Helm
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \ --namespace $NAMESPACE \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
- --set controller.service.loadBalancerIP=$STATIC_IP
+ --set controller.service.loadBalancerIP=$STATIC_IP \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
``` ### [Azure PowerShell](#tab/azure-powershell)
When you upgrade your ingress controller, you must pass a parameter to the Helm
helm upgrade ingress-nginx ingress-nginx/ingress-nginx ` --namespace $Namespace ` --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
- --set controller.service.loadBalancerIP=$StaticIP
+ --set controller.service.loadBalancerIP=$StaticIP `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
```
NAMESPACE="ingress-basic"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \ --namespace $NAMESPACE \
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNSLABEL
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNSLABEL \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
``` ### [Azure PowerShell](#tab/azure-powershell)
$Namespace = "ingress-basic"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx ` --namespace $Namespace `
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
```
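After the upgrade completes, one way to confirm the annotations were applied is to inspect the controller's service object. This sketch assumes the release name `ingress-nginx` and the `ingress-basic` namespace used in this article.

```bash
# Inspect the annotations Helm applied to the ingress controller service
kubectl get service ingress-nginx-controller \
    --namespace ingress-basic \
    --output jsonpath='{.metadata.annotations}'
```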
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernete
description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. Previously updated : 01/11/2023 Last updated : 04/19/2023 # Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS)
This tutorial assumes a basic understanding of Kubernetes concepts. For more inf
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--- You have installed the latest version of the `aks-preview` extension, version 0.5.102 or later.--- The identity you are using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
+- The identity you're using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command. ## Create a resource group
-An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
+An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is:
* The storage location of your resource group metadata.
-* Where your resources will run in Azure if you don't specify another region during resource creation.
+* Where your resources run in Azure if you don't specify another region during resource creation.
The following example creates a resource group named *myResourceGroup* in the *eastus* location.
The following output example resembles successful creation of the resource group
} ```
-## Install the aks-preview Azure CLI extension
--
-To install the aks-preview extension, run the following command:
-
-```azurecli-interactive
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli-interactive
-az extension update --name aks-preview
-```
-
-## Register the 'EnableWorkloadIdentityPreview' feature flag
-
-Register the `EnableWorkloadIdentityPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+## Export environmental variables
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
+To help simplify the steps to configure the required identities, the steps below define
+environmental variables for reference on the cluster.
-When the status shows *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
+```bash
+export RESOURCE_GROUP="myResourceGroup"
+export LOCATION="westcentralus"
+export SERVICE_ACCOUNT_NAMESPACE="default"
+export SERVICE_ACCOUNT_NAME="workload-identity-sa"
+export SUBSCRIPTION="$(az account show --query id --output tsv)"
+export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
+export KEYVAULT_NAME="azwi-kv-tutorial"
+export KEYVAULT_SECRET_NAME="my-secret"
``` ## Create AKS cluster
az provider register --namespace Microsoft.ContainerService
Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*: ```azurecli-interactive
-az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
+az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
After a few minutes, the command completes and returns JSON-formatted informatio
> [!NOTE] > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups].
-To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster and `-g`, the resource group name:
+To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the argument `-n`, which is the name of the cluster:
```azurecli-interactive
-export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)"
-```
-
-## Export environmental variables
-
-To help simplify steps to configure creating Azure Key Vault and other identities required, the steps below define
-environmental variables for reference on the cluster.
-
-Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `KEYVAULT_SECRET_NAME`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `UAID`, and `FICID`.
-
-```bash
-# environment variables for the Azure Key Vault resource
-export KEYVAULT_NAME="azwi-kv-tutorial"
-export KEYVAULT_SECRET_NAME="my-secret"
-export RESOURCE_GROUP="resourceGroupName"
-export LOCATION="westcentralus"
-
-# environment variables for the Kubernetes Service account & federated identity credential
-export SERVICE_ACCOUNT_NAMESPACE="default"
-export SERVICE_ACCOUNT_NAME="workload-identity-sa"
-
-# environment variables for the Federated Identity
-export SUBSCRIPTION="{your subscription ID}"
-# user assigned identity name
-export UAID="fic-test-ua"
-# federated identity name
-export FICID="fic-test-fic-name"
+export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)"
``` ## Create an Azure Key Vault and secret
az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET
To add the Key Vault URL to the environment variable `KEYVAULT_URL`, you can run the Azure CLI [az keyvault show][az-keyvault-show] command. ```bash
-export KEYVAULT_URL="$(az keyvault show -g ${RESOURCE_GROUP} -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)"
+export KEYVAULT_URL="$(az keyvault show -g "${RESOURCE_GROUP}" -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)"
``` ## Create a managed identity and grant permissions to access the secret
az account set --subscription "${SUBSCRIPTION}"
``` ```azurecli-interactive
-az identity create --name "${UAID}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}"
+az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}"
``` Next, you need to set an access policy for the managed identity to access the Key Vault secret by running the following commands: ```azurecli-interactive
-export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)"
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
``` ```azurecli-interactive
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. ```azurecli-interactive
-az identity federated-credential create --name ${FICID} --identity-name ${UAID} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
+az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
``` > [!NOTE]
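If you want to confirm the credential was created before moving on, a hedged check like the following lists the federated credentials on the managed identity, reusing the environment variables defined earlier in this tutorial.

```azurecli-interactive
# List federated credentials configured on the user-assigned managed identity
az identity federated-credential list \
    --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
    --resource-group "${RESOURCE_GROUP}" \
    --output table
```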
The following output resembles successful creation of the pod:
pod/quick-start created ```
-To check whether all properties are injected properly by the webhook, use
+To check whether all properties are injected properly with the webhook, use
the [kubectl describe][kubelet-describe] command: ```bash
az group delete --name "${RESOURCE_GROUP}"
## Next steps In this tutorial, you deployed a Kubernetes cluster and then deployed a simple container application to
-test working with an Azure AD workload identity (preview).
+test working with an Azure AD workload identity.
This tutorial is for introductory purposes. For guidance on a creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
When you leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-aad.md), you can use Azure AD users, groups, or service principals as subjects in [Kubernetes role-based access control (Kubernetes RBAC)][kubernetes-rbac]. This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately.
-This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][azure-rbac-kubernetes-rbac].
+This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][kubernetes-rbac].
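As a quick orientation, this capability is typically turned on with the `--enable-azure-rbac` flag. The following is a minimal sketch for an existing Azure AD-enabled cluster, with placeholder resource group and cluster names; the full prerequisites are covered in the next section.

```azurecli-interactive
# Enable Azure RBAC for Kubernetes Authorization on an existing AKS cluster
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-azure-rbac
```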
## Before you begin
az group delete -n myResourceGroup
To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azure RBAC, see:
-* [Access and identity options for AKS](/concepts-identity.md)
+* [Access and identity options for AKS](./concepts-identity.md)
* [What is Azure RBAC?](../role-based-access-control/overview.md) * [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice)
To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur
[install-azure-cli]: /cli/azure/install-azure-cli [az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials
-[kubernetes-rbac]: /concepts-identity#azure-rbac-for-kubernetes-authorization
-[azure-rbac-kubernetes-rbac]: /concepts-identity#azure-rbac-for-kubernetes-authorization
+[kubernetes-rbac]: ./concepts-identity.md#azure-rbac-for-kubernetes-authorization
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
Title: Use Azure AD in Azure Kubernetes Service description: Learn how to use Azure AD in Azure Kubernetes Service (AKS) Previously updated : 03/02/2023 Last updated : 04/17/2023
In order to access the cluster, follow the steps in [access an Azure AD enabled
There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential.
+Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires the [kubelogin](https://github.com/Azure/kubelogin) binary to be in the execution PATH. If you use the Azure CLI, it prompts you to download kubelogin. For non-Azure AD clusters, or Azure AD clusters where the version of Kubernetes is older than 1.24, there's no change in behavior. The installed version of kubeconfig continues to work.
+
+An optional query parameter named `format` is available when retrieving the clusterUser credential to overwrite the default behavior change. You can set the value to `azure` to use the original kubeconfig format.
+
+Example:
+
+```azurecli-interactive
+az aks get-credentials --format azure
+```
+
+For Azure AD integrated clusters using a version of Kubernetes newer than 1.24, the kubelogin format is used automatically, and no conversion is needed. For Azure AD integrated clusters running a version older than 1.24, run the following commands to convert the kubeconfig format manually:
+
+```azurecli-interactive
+export KUBECONFIG=/path/to/kubeconfig
+kubelogin convert-kubeconfig
+```
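For the non-interactive scenarios mentioned earlier, kubelogin can also exchange a service principal credential. This is a sketch under the assumption that a service principal with cluster access already exists; the client ID and secret values are placeholders.

```bash
# Convert kubeconfig to use service principal (non-interactive) login
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubelogin convert-kubeconfig -l spn

# kubelogin reads the service principal credential from these environment variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<service-principal-app-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<service-principal-secret>

kubectl get nodes
```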
+ ## Disable local accounts When you deploy an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, `--admin` access still exists as a non-auditable backdoor option. You can disable local accounts using the parameter `disable-local-accounts`. The `properties.disableLocalAccounts` field has been added to the managed cluster API to indicate whether the feature is enabled or not on the cluster.
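A minimal sketch of that parameter on an existing cluster follows; the resource group and cluster names are placeholders, and the same flag can be passed to `az aks create` for new clusters.

```azurecli-interactive
# Disable the local admin account on an existing AKS cluster
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --disable-local-accounts
```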
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
touch osm.aks.bicep && touch osm.aks.parameters.json
Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file.
-```azurecli-interactive
+```bicep
// https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters @minLength(3) @maxLength(63)
Open the *osm.aks.parameters.json* file and copy the following example content t
> [!NOTE] > The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The specific parameter values in this example need the following parameters to be updated: `clusterName`, `clusterDNSPrefix`, `k8Version`, and `sshPubKey`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command.
-```azurecli-interactive
+```json
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0",
kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
Here's an example output of MeshConfig:
-```
+```yaml
apiVersion: config.openservicemesh.io/v1alpha1 kind: MeshConfig metadata:
Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM,
When you no longer need the Azure resources, use the Azure CLI to delete the deployment's test resource group:
-```
+```azurecli-interactive
az group delete --name osm-bicep-test ```
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/servicemesh-about.md
Title: About service meshes
description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore. Previously updated : 04/06/2023 Last updated : 04/18/2023
Before you select a service mesh, make sure you understand your requirements and
## Next steps
-Open Service Mesh (OSM) is a supported service mesh that runs Azure Kubernetes Service (AKS):
+Azure Kubernetes Service (AKS) offers officially supported add-ons for Istio and Open Service Mesh:
> [!div class="nextstepaction"]
+> [Learn more about Istio][istio-about]
> [Learn more about OSM][osm-about] There are also service meshes provided by open-source projects and third parties that are commonly used with AKS. These service meshes aren't covered by the [AKS support policy][aks-support-policy]. -- [Istio][istio] - [Linkerd][linkerd] - [Consul Connect][consul]
For more details on service mesh standardization efforts, see:
- [Service Mesh Performance (SMP)][smp] <!-- LINKS - external -->
-[istio]: https://istio.io/latest/docs/setup/install/
[linkerd]: https://linkerd.io/getting-started/ [consul]: https://learn.hashicorp.com/tutorials/consul/service-mesh-deploy [service-mesh-landscape]: https://layer5.io/service-mesh-landscape
For more details on service mesh standardization efforts, see:
<!-- LINKS - internal --> [osm-about]: ./open-service-mesh-about.md
+[istio-about]: ./istio-about.md
[aks-support-policy]: support-policies.md
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
loadBalancerIP: 40.121.183.52 type: LoadBalancer ports:
- - port: 80
+ - port: 80
selector: app: azure-load-balancer ```
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
> [!NOTE] > Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220201 or above. Use `az upgrade` to install the latest version of the CLI.
-With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
+AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
-When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](./auto-upgrade-cluster.md#using-cluster-auto-upgrade).
+When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` doesn't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` triggers an upgrade to the latest GA `1.15` patch.
-To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
+To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version.
``` {
AKS defines a generally available (GA) version as a version available in all reg
AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms].
+AKS provides platform support only for one GA minor version of Kubernetes after the regular supported versions. The platform support window of Kubernetes versions on AKS is known as "N-3". For more information, see [platform support policy](#platform-support-policy).
+ > [!NOTE] > AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions.
New minor version | Supported Version List
-- | - 1.17.a | 1.17.a, 1.17.b, 1.16.c, 1.16.d, 1.15.e, 1.15.f
-Where ".letter" is representative of patch versions.
-
-When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, if the current supported version list is:
+When a new minor version is introduced, the oldest supported minor version and patch releases are deprecated and removed. For example, the current supported version list is:
``` 1.17.a
New Supported Version List
1.17.*9*, 1.17.*8*, 1.16.*11*, 1.16.*10* ```
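To see which Kubernetes versions are currently available in your region, you can run a check along these lines (the region is a placeholder):

```azurecli-interactive
# List the Kubernetes versions AKS currently supports in a region
az aks get-versions --location eastus --output table
```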
+## Platform support policy
+
+Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform-related issues. Any issues related to Kubernetes functionality and components aren't supported.
+
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered to be in platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 is then auto-upgraded to v1.26.
+
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), an open-source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are serviced upstream. Because no more patches are produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on Kubernetes upstream.
+
+This table outlines support guidelines for Community Support compared to Platform support.
+
+| Support category | Community Support (N-2) | Platform Support (N-3) |
+||||
+| Upgrades from N-3 to a supported version| Supported | Supported|
+| Platform (Azure) availability | Supported | Supported|
+| Node pool scaling| Supported | Supported|
+| VM availability| Supported | Supported|
+| Storage and networking-related issues| Supported | Supported, except for bug fixes and retired components |
+| Start/stop | Supported | Supported|
+| Rotate certificates | Supported | Supported|
+| Infrastructure SLA| Supported | Supported|
+| Control Plane SLA| Supported | Supported|
+| Platform (AKS) SLA| Supported | Not supported|
+| Kubernetes components (including Add-ons) | Supported | Not supported|
+| Component updates | Supported | Not supported|
+| Component hotfixes | Supported | Not supported|
+| Applying bug fixes | Supported | Not supported|
+| Applying security patches | Supported | Not supported|
+| Kubernetes API support | Supported | Not supported|
+| Cluster or node pool creation| Supported | Not supported|
+| Node pool snapshot| Supported | Not supported|
+| Node image upgrade| Supported | Not supported|
+
+ > [!NOTE]
+ > The above table is subject to change and outlines common support scenarios. Any scenarios related to Kubernetes functionality and components will not be supported for N-3. For further support, see [Support and troubleshooting for AKS](./aks-support-help.md).
+ ### Supported `kubectl` versions You can use one minor version older or newer of `kubectl` relative to your *kube-apiserver* version, consistent with the [Kubernetes support policy for kubectl](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl).
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity. Previously updated : 04/18/2023-+ Last updated : 04/19/2023 # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity, see the following [Overview][workload-identity-overview] article. -- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
- The identity you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command.
+## Export environmental variables
+
+To help simplify the steps to configure the required identities, the steps below define
+environmental variables for reference on the cluster.
+
+Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
+
+```bash
+export RESOURCE_GROUP="myResourceGroup"
+export LOCATION="westcentralus"
+export SERVICE_ACCOUNT_NAMESPACE="default"
+export SERVICE_ACCOUNT_NAME="workload-identity-sa"
+export SUBSCRIPTION="$(az account show --query id --output tsv)"
+export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
+```
+ ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*: ```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-
-az aks create -g myResourceGroup -n myAKSCluster --enable-oidc-issuer --enable-workload-identity
+az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --enable-oidc-issuer --enable-workload-identity
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
After a few minutes, the command completes and returns JSON-formatted informatio
> [!NOTE] > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups].
-To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default values for the cluster name and the resource group name.
+To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the argument `-n`, which is the name of the cluster:
```bash
-export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)"
+export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)"
``` ## Create a managed identity
export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query
Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. ```azurecli
-export SUBSCRIPTION_ID="$(az account show --query id --output tsv)"
-export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
-export RG_NAME="myResourceGroup"
-export LOCATION="eastus"
+az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}"
+```
-az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}"
+Next, create a variable for the managed identity's client ID.
+
+```bash
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
``` ## Create Kubernetes service account
az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${R
Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name. ```azurecli
-az aks get-credentials -n myAKSCluster -g myResourceGroup
+az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}"
```
-Copy and paste the following multi-line input in the Azure CLI, and update the values for `SERVICE_ACCOUNT_NAME` and `SERVICE_ACCOUNT_NAMESPACE` with the Kubernetes service account name and its namespace.
+Copy and paste the following multi-line input into the Azure CLI.
```bash
-export SERVICE_ACCOUNT_NAME="workload-identity-sa"
-export SERVICE_ACCOUNT_NAMESPACE="my-namespace"
-export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)"
- cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount
Serviceaccount/workload-identity-sa created
Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. ```azurecli
-az identity federated-credential create --name myfederatedIdentity --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange
+az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange
``` > [!NOTE]
You can retrieve this information using the Azure CLI command: [az keyvault list
1. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands: ```azurecli
- export RG_NAME="myResourceGroup"
+ export RESOURCE_GROUP="myResourceGroup"
export USER_ASSIGNED_IDENTITY_NAME="myIdentity" export KEYVAULT_NAME="myKeyVault"
- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" ```
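To confirm the policy landed before testing the workload, one option is to list the vault's access policies and look for the managed identity's entry; this sketch reuses the variables set in the previous step.

```azurecli-interactive
# Show the access policies currently configured on the key vault
az keyvault show \
    --name "${KEYVAULT_NAME}" \
    --query "properties.accessPolicies" \
    --output json
```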
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 04/18/2023 Last updated : 04/19/2023
Workloads deployed on an Azure Kubernetes Services (AKS) cluster require Azure A
[Azure AD workload identity][azure-ad-workload-identity] uses [Service Account Token Volume Projection][service-account-token-volume-projection] enabling pods to use a Kubernetes identity (that is, a service account). A Kubernetes token is issued and [OIDC federation][oidc-federation] enables Kubernetes applications to access Azure resources securely with Azure AD based on annotated service accounts.
-Azure AD workload identity works especially well with the Azure Identity client library using the [Azure SDK][azure-sdk-download] and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources.
+Azure AD workload identity works especially well with the [Azure Identity client libraries](#azure-identity-client-libraries) and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) collection if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources.
This article helps you understand this new authentication feature, and reviews the options available to plan your project strategy and potential migration from Azure AD pod-managed identity.
This article helps you understand this new authentication feature, and reviews t
- The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-## Azure Identity SDK
+## Azure Identity client libraries
-The following client libraries are the **minimum** version required
+In the Azure Identity client libraries, choose one of the following approaches:
+
+- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.
+- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`.
+- Use `WorkloadIdentityCredential` directly.
+
+The following table provides the **minimum** package version required for each language's client library.
-| Language | Library | Minimum Version | Example |
-|--|--|-|-|
-| Go | [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go) | [sdk/azidentity/v1.3.0-beta.1](https://github.com/Azure/azure-sdk-for-go/releases/tag/sdk/azidentity/v1.3.0-beta.1)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
-| C# | [azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net) | [Azure.Identity_1.5.0](https://github.com/Azure/azure-sdk-for-net/releases/tag/Azure.Identity_1.5.0)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
-| JavaScript/TypeScript | [azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js) | [@azure/identity_2.0.0](https://github.com/Azure/azure-sdk-for-js/releases/tag/@azure/identity_2.0.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
-| Python | [azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | [azure-identity_1.7.0](https://github.com/Azure/azure-sdk-for-python/releases/tag/azure-identity_1.7.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
-| Java | [azure-sdk-for-java]() | [azure-identity_1.4.0](https://github.com/Azure/azure-sdk-for-java/releases/tag/azure-identity_1.4.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
+| Language | Library | Minimum Version | Example |
+||-|--||
+| .NET | [Azure.Identity](https://learn.microsoft.com/dotnet/api/overview/azure/identity-readme) | 1.9.0-beta.2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
+| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
+| Java | [azure-identity](https://learn.microsoft.com/java/api/overview/azure/identity-readme) | 1.9.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
+| JavaScript | [@azure/identity](https://learn.microsoft.com/javascript/api/overview/azure/identity-readme) | 3.2.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
+| Python | [azure-identity](https://learn.microsoft.com/python/api/overview/azure/identity-readme) | 1.13.0b2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
## Microsoft Authentication Library (MSAL)
The following client libraries are the **minimum** version required
| Language | Library | Image | Example | Has Windows | |--|--|-|-|-|
+| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
-| C# | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
-| JavaScript/TypeScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
-| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
+| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
+| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
## Limitations
The following table summarizes our migration or deployment recommendations for w
* See the tutorial [Use a workload identity with an application on Azure Kubernetes Service (AKS)][tutorial-use-workload-identity], which helps you deploy an Azure Kubernetes Service cluster and configure a sample application to use a workload identity. <!-- EXTERNAL LINKS -->
-[azure-sdk-download]: https://azure.microsoft.com/downloads/
[custom-resource-definition]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ [service-account-token-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection [oidc-federation]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Previously updated : 02/06/2023 Last updated : 04/17/2023
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| Direct management API | No | Yes | Yes | Yes | Yes | | Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes | | Static IP | No | Yes | Yes | Yes | Yes |
-| [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
-| [GraphQL APIs](graphql-api.md)<sup>5</sup> | Yes | Yes | Yes | Yes | Yes |
-| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes |
+| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
+| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
+| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
<sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/> <sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>
-<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Previously updated : 02/06/2023 Last updated : 02/22/2023
The following table compares features available in the managed gateway versus th
| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |
-| [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ |
-| [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ |
-| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
-
-<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier.
+| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ |
+| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
### Policies
Managed and self-hosted gateways support all available [policies](api-management
| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> | | | -- | -- | - | | [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
+| [GraphQL resolvers](api-management-policies.md#graphql-resolver-policies) and [GraphQL validation](api-management-policies.md#validation-policies)| ✔️ | ✔️ | ❌ |
| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ | | [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>
-| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
description: Learn how to enable user sign-in to the API Management developer po
Previously updated : 03/17/2023 Last updated : 04/18/2023
Now that you've enabled access for users in an Azure AD tenant, you can:
* Add Azure AD groups into API Management. * Control product visibility using Azure AD groups.
-Follow these steps to grant:
-* `User.Read` **delegated** permission for Microsoft Graph API.
-* `Directory.ReadAll` **application** permission for Microsoft Graph API.
-
-1. Update the first 3 lines of the following Azure CLI script to match your environment and run it.
-
- ```azurecli
- $subId = "Your Azure subscription ID" # Example: "1fb8fadf-03a3-4253-8993-65391f432d3a"
- $tenantId = "Your Azure AD Tenant or Organization ID" # Example: 0e054eb4-e5d0-43b8-ba1e-d7b5156f6da8"
- $appObjectID = "Application Object ID that has been registered in AAD" # Example: "2215b54a-df84-453f-b4db-ae079c0d2619"
- #Login and Set the Subscription
- az login
- az account set --subscription $subId
- #Assign the following permission: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll
- az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'}]}"
- ```
-
-1. Sign out and sign back in to the Azure portal.
1. Navigate to the App Registration page for the application you registered in [the previous section](#enable-user-sign-in-using-azure-adportal).
-1. Select **API Permissions**. You should see the permissions granted by the Azure CLI script in step 1.
+1. Select **API Permissions**.
+1. Add the following minimum **application** permissions for Microsoft Graph API (a CLI sketch that grants the same permissions follows these steps):
+ * `User.Read.All` application permission – so API Management can read the user's group membership to perform group synchronization at the time the user logs in.
+ * `Group.Read.All` application permission – so API Management can read the Azure AD groups when an administrator tries to add the group to API Management using the **Groups** blade in the portal.
1. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory. Now you can add external Azure AD groups from the **Groups** tab of your API Management instance.
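+
+If you prefer to grant these permissions from the command line instead of the portal, the following Azure CLI sketch is one way to do it. The application ID is a placeholder, and the GUIDs are the commonly published Microsoft Graph application role IDs for `User.Read.All` and `Group.Read.All` – verify them in your tenant before running.
+
+```azurecli
+# Placeholder: the application (client) ID of the app registration you created earlier
+appId="<application-client-id>"
+
+# Microsoft Graph application permissions: User.Read.All and Group.Read.All
+# (verify these role IDs in your tenant before use)
+az ad app permission add --id $appId --api 00000003-0000-0000-c000-000000000000 \
+    --api-permissions df021288-bdef-4463-88db-98f22de89214=Role 5b567255-7703-4780-807c-7be8301ae99b=Role
+
+# Grant admin consent for all users in the directory
+az ad app permission admin-consent --id $appId
+```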
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
When configuring a policy, you must first select the scope at which the policy a
For more information, see [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order).
+### GraphQL resolver policies
+
+In API Management, a [GraphQL resolver](configure-graphql-resolver.md) is configured using policies scoped to a specific operation type and field in a [GraphQL schema](graphql-apis-overview.md#resolvers).
+
+* Currently, API Management supports GraphQL resolvers that specify HTTP data sources. Configure a single [`http-data-source`](http-data-source-policy.md) policy with elements to specify a request to (and optionally response from) an HTTP data source.
+* You can't include a resolver policy in policy definitions at other scopes such as API, product, or all APIs. A resolver policy also doesn't inherit policies configured at other scopes.
+* The gateway evaluates a resolver-scoped policy *after* any configured `inbound` and `backend` policies in the policy execution pipeline.
+
+For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md).
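+
+As an illustration, a minimal resolver-scoped policy definition might look like the following sketch; the backend URL is a placeholder, and the optional `http-response` element is omitted here:
+
+```xml
+<http-data-source>
+    <http-request>
+        <set-method>GET</set-method>
+        <!-- Placeholder backend endpoint that returns the field data -->
+        <set-url>https://example.com/api/users</set-url>
+    </http-request>
+</http-data-source>
+```
+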
+ ## Examples ### Apply policies specified at different scopes
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. - [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
-## GraphQL API policies
-- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. -- [Set GraphQL resolver](set-graphql-resolver-policy.md) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
+## GraphQL resolver policies
+- [HTTP data source for resolver](http-data-source-policy.md) - Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema.
+- [Publish event to GraphQL subscription](publish-event-policy.md) - Publishes an event to one or more subscriptions specified in a GraphQL API schema. Used in the `http-response` element of the `http-data-source` policy.
## Transformation policies - [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML.
More information about policies:
## Validation policies - [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.
+- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API.
- [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema. - [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
The `context` variable is implicitly available in every policy [expression](api-
|Context Variable|Allowed methods, properties, and parameter values| |-|-|
-|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Product`](#ref-context-product)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`|
+|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`|
|<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` <br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace) | |<a id="ref-context-deployment"></a>`context.Deployment`|[`Gateway`](#ref-context-gateway)<br /><br /> `GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`| |<a id="ref-context-gateway"></a>`context.Deployment.Gateway`|`Id`: `string` (returns 'managed' for managed gateways)<br /><br /> `InstanceId`: `string` (returns 'managed' for managed gateways)<br /><br /> `IsManaged`: `bool`|
+|<a id="ref-context-graphql"></a>`context.GraphQL`|`GraphQLArguments`: `IGraphQLDataObject`<br /><br /> `Parent`: `IGraphQLDataObject`<br/><br/>[Examples](configure-graphql-resolver.md#graphql-context)|
|<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`| |<a id="ref-context-product"></a>`context.Product`|`Apis`: `IEnumerable<`[`IApi`](#ref-iapi)`>`<br /><br /> `ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`<br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace)|
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`| |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)|
+|<a id="ref-igraphqldataobject"></a>`IGraphQLDataObject`|TBD<br /><br />|
|<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`| |<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| |<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).|
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
documentationcenter: ''
Previously updated : 08/26/2022 Last updated : 01/10/2023
After 31 August 2024, any instance hosted on the `stv1` platform won't be suppor
**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.**
-If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) which provides all the details to ensure a successful migration.
+If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../migrate-stv1-to-stv2.md) which provides all the details to ensure a successful migration.
## Help and support
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
Title: Azure API Management compute platform
-description: Learn about the compute platform used to host your API Management service instance
+description: Learn about the compute platform used to host your API Management service instance. Instances in the dedicated service tiers of API Management are hosted on the stv1 or stv2 compute platform.
Previously updated : 03/16/2022 Last updated : 04/17/2023
As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many det
To enhance service capabilities, we're upgrading the API Management compute platform version - the Azure compute resources that host the service - for instances in several [service tiers](api-management-features.md). This article gives you context about the upgrade and the major versions of API Management's compute platform: `stv1` and `stv2`.
-We've minimized impacts of this upgrade on your operation of your API Management instance. Upgrades are managed by the platform, and new instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform. However, for existing instances hosted on the `stv1` platform, you have options to trigger migration to the `stv2` platform.
+Most new instances created in service tiers other than the Consumption tier are hosted on the `stv2` platform. However, for existing instances hosted on the `stv1` platform, you have options to migrate to the `stv2` platform.
## What are the compute platforms for API Management?
The following table summarizes the compute platforms currently used for instance
| Version | Description | Architecture | Tiers | | -| -| -- | - |
-| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium<sup>1</sup> |
+| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports added resiliency and security features. See [What are the benefits of the `stv2` platform?](#what-are-the-benefits-of-the-stv2-platform) in this article. | Developer, Basic, Standard, Premium<sup>1</sup> |
| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | | `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption |
-<sup>1</sup> Newly created instances in these tiers, created using the Azure portal or specifying API version 2021-01-01-preview or later. Includes some existing instances in Developer and Premium tiers configured with virtual networks or availability zones.
+<sup>1</sup> Newly created instances in these tiers and some existing instances in Developer and Premium tiers configured with virtual networks or availability zones.
> [!NOTE] > Currently, the `stv2` platform isn't available in the US Government cloud or in the following Azure regions: China East, China East 2, China North, China North 2. ## How do I know which platform hosts my API Management instance?
-Starting with API version `2021-04-01-preview`, the API Management instance exposes a read-only `platformVersion` property that shows this platform information.
+Starting with API version `2021-04-01-preview`, the API Management instance exposes a read-only `platformVersion` property with this platform information.
-You can find this information using the portal or the API Management [REST API](/rest/api/apimanagement/current-ga/api-management-service/get).
+You can find the platform version of your instance using the portal, the API Management [REST API](/rest/api/apimanagement/current-ga/api-management-service/get), or other Azure tools.
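+
+For example, the Azure CLI can return the property directly. This is a sketch, assuming your CLI version surfaces `platformVersion` in the `az apim show` output; the instance and resource group names are placeholders:
+
+```azurecli
+# Placeholder names; prints stv1, stv2, or mtv1 for the instance
+az apim show --name contoso-apim --resource-group contoso-rg --query platformVersion --output tsv
+```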
-To find the `platformVersion` property in the portal:
+To find the platform version in the portal:
-1. Go to your API Management instance.
-1. On the **Overview** page, select **JSON view**.
-1. In **API version**, select a current version such as `2021-08-01` or later.
-1. In the JSON view, scroll down to find the `platformVersion` property.
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the **Overview** page, under **Essentials**, the **Platform Version** is displayed.
- :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="platformVersion property in JSON view":::
+ :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="Screenshot of the API Management platform version in the portal.":::
-## How do I migrate to the `stv2` platform?
-
-The following table summarizes migration options for instances in the different API Management service tiers that are currently hosted on the `stv1` platform. See the linked documentation for detailed steps.
-
-> [!NOTE]
-> Check the [`platformVersion` property](#how-do-i-know-which-platform-hosts-my-api-management-instance) before starting migration, and after your configuration change.
-
-|Tier |Migration options |
-|||
-|Premium | 1. Enable [zone redundancy](../reliability/migrate-api-mgt.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) |
-|Developer | 1. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/>-or-<br/> 2. Update existing [VNet configuration](#update-vnet-configuration) |
-| Standard | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier.<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> |
-| Basic | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> |
-| Consumption | Not applicable |
-
-<sup>1</sup> Use Azure portal or specify API version 2021-01-01-preview or later.
-
-<sup>2</sup> Migrate configurations with the following mechanisms: [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md), [Migration script for the developer portal](automate-portal-deployments.md), [APIOps with Azure API Management](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops).
-
-## Update VNet configuration
+## What are the benefits of the `stv2` platform?
-If you have an existing Developer or Premium tier instance that's connected to a virtual network and hosted on the `stv1` platform, trigger migration to the `stv2` platform by updating the VNet configuration.
+The `stv2` platform infrastructure supports several resiliency and security features of API Management that aren't available on the `stv1` platform, including:
-### Prerequisites
+* [Availability zones](zone-redundancy.md)
+* [Private endpoints](private-endpoint.md)
+* [Protection with Azure DDoS](protect-with-ddos-protection.md)
-* A new or existing virtual network and subnet in the same region and subscription as your API Management instance. The subnet must be different from the one currently used for the instance hosted on the `stv1` platform, and a network security group must be attached.
-* A new or existing Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance.
-
-To update the existing external or internal VNet configuration using the portal:
-
-1. Navigate to your API Management instance.
-1. In the left menu, select **Network** > **Virtual network**.
-1. Select the network connection in the location you want to update.
-1. Select the virtual network, subnet, and IP address resources you want to configure, and select **Apply**.
-1. Continue configuring VNet settings for the remaining locations of your API Management instance.
-1. In the top navigation bar, select **Save**, then select **Apply network configuration**.
-
-The virtual network configuration is updated, and the instance is migrated to the `stv2` platform. Confirm migration by checking the [`platformVersion` property](#how-do-i-know-which-platform-hosts-my-api-management-instance).
+## How do I migrate to the `stv2` platform?
-> [!NOTE]
-> * Updating the VNet configuration takes from 15 to 45 minutes to complete.
-> * The VIP address(es) of your API Management instance will change.
+> [!IMPORTANT]
+> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure proper operation of your API Management instance, you should migrate any instance hosted on the `stv1` platform to `stv2` before that date.
+Migration steps depend on the features enabled in your API Management instance. If the instance isn't injected into a VNet, you can use a migration API. For VNet-injected instances, follow the manual migration steps. For details, see the [migration guide](migrate-stv1-to-stv2.md).
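+
+As a rough sketch, calling the migration API for an instance that isn't VNet-injected looks like the following `az rest` call. The resource path segments are placeholders, and the `migrateToStv2` action name and the API version to use should be confirmed in the migration guide before running:
+
+```azurecli
+# Placeholders throughout; confirm the action and API version in the migration guide
+az rest --method post \
+    --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance-name>/migrateToStv2?api-version=<api-version>"
+```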
## Next steps
-* Learn more about using a [virtual network](virtual-network-concepts.md) with API Management.
-* Learn more about enabling [availability zones](../reliability/migrate-api-mgt.md).
-
+* [Migrate an API Management instance to the stv2 platform](migrate-stv1-to-stv2.md).
+* Learn more about [upcoming breaking changes](breaking-changes/overview.md) in API Management.
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
There are several API Management endpoints to which you can assign a custom doma
| **SCM** | Default is: `<apim-service-name>.scm.azure-api.net` | ### Considerations+ * You can update any of the endpoints supported in your service tier. Typically, customers update **Gateway** (this URL is used to call the APIs exposed through API Management) and **Developer portal** (the developer portal URL). * The default **Gateway** endpoint also is available after you configure a custom Gateway domain name. For other API Management endpoints (such as **Developer portal**) that you configure with a custom domain name, the default endpoint is no longer available. * Only API Management instance owners can use **Management** and **SCM** endpoints internally. These endpoints are less frequently assigned a custom domain name. * The **Premium** and **Developer** tiers support setting multiple hostnames for the **Gateway** endpoint.
-* Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier.
+* Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier. A specific subdomain certificate (for example, `api.contoso.com`) takes precedence over a wildcard certificate (`*.contoso.com`) for requests to `api.contoso.com`.
## Domain certificate options
API Management offers a free, managed TLS certificate for your domain, if you do
* Currently available only in the Azure cloud * Does not support root domain names (for example, `contoso.com`). Requires a fully qualified name such as `api.contoso.com`. * Can only be configured when updating an existing API Management instance, not when creating an instance- + ## Set a custom domain name - portal Choose the steps according to the [domain certificate](#domain-certificate-options) you want to use. # [Custom](#tab/custom)+ 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/). 1. In the left navigation, select **Custom domains**. 1. Select **+Add**, or select an existing [endpoint](#endpoints-for-custom-domains) that you want to update.
Choose the steps according to the [domain certificate](#domain-certificate-optio
:::image type="content" source="media/configure-custom-domain/gateway-domain-free-certifcate.png" alt-text="Configure gateway domain with free certificate"::: 1. Select **Add**, or select **Update** for an existing endpoint. 1. Select **Save**.-
-
+ > [!NOTE] > The process of assigning the certificate may take 15 minutes or more depending on size of deployment. Developer tier has downtime, while Basic and higher tiers do not.
You can also get a domain ownership identifier by calling the [Get Domain Owners
## Next steps [Upgrade and scale your service](upgrade-and-scale.md)+
api-management Configure Graphql Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md
+
+ Title: Configure GraphQL resolver in Azure API Management
+description: Configure a GraphQL resolver in Azure API Management for a field in an object type specified in a GraphQL schema
+++++ Last updated : 02/22/2023+++
+# Configure a GraphQL resolver
+
+Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently, API Management supports resolvers that use HTTP-based data sources (REST or SOAP APIs).
+
+* A resolver is a resource containing a policy definition that's invoked only when a matching object type and field is executed.
+* Each resolver resolves data for a single field. To resolve data for multiple fields, configure a separate resolver for each.
+* Resolver-scoped policies are evaluated *after* any `inbound` and `backend` policies in the policy execution pipeline. They don't inherit policies from other scopes. For more information, see [Policies in API Management](api-management-howto-policies.md).
++
+> [!IMPORTANT]
+> * If you use the preview `set-graphql-resolver` policy in policy definitions, you should migrate to the managed resolvers described in this article.
+> * After you configure a managed resolver for a GraphQL field, the gateway will skip the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance.
+
+## Prerequisites
+
+- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
+- Import a [pass-through](graphql-api.md) or [synthetic](graphql-schema-resolve-api.md) GraphQL API.
+
+## Create a resolver
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+
+1. In the left menu, select **APIs** and then the name of your GraphQL API.
+1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver.
+ 1. Select a field, and then in the left margin, hover the pointer.
+ 1. Select **+ Add Resolver**.
+
+ :::image type="content" source="media/configure-graphql-resolver/add-resolver.png" alt-text="Screenshot of adding a resolver from a field in GraphQL schema in the portal.":::
+1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections.
+1. In the **Resolver policy** editor, update the [`http-data-source`](http-data-source-policy.md) policy with child elements for your scenario.
+ 1. Update the required `http-request` element with policies to transform the GraphQL operation to an HTTP request.
+ 1. Optionally add an `http-response` element, and add child policies to transform the HTTP response of the resolver. If the `http-response` element isn't specified, the response is returned as a raw string.
+ 1. Select **Create**.
+
+ :::image type="content" source="media/configure-graphql-resolver/configure-resolver-policy.png" alt-text="Screenshot of resolver policy editor in the portal." lightbox="media/configure-graphql-resolver/configure-resolver-policy.png":::
+
+1. The resolver is attached to the field. Go to the **Resolvers** tab to list and manage the resolvers configured for the API.
+
+ :::image type="content" source="media/configure-graphql-resolver/list-resolvers.png" alt-text="Screenshot of the resolvers list for GraphQL API in the portal." lightbox="media/configure-graphql-resolver/list-resolvers.png":::
+
+ > [!TIP]
+ > The **Linked** column indicates whether or not the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked.
+++
+## GraphQL context
+
+* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request:
+ * `context.GraphQL` properties are set to the arguments (`Arguments`) and parent object (`Parent`) for the current resolver execution.
+ * The HTTP request context contains arguments that are passed in the GraphQL query as its body.
+ * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request.
+The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with a GraphQL resolver.
+
+### context.GraphQL.Parent
+
+`context.GraphQL.Parent` is set to the parent object for the current resolver execution. Consider the following partial schema:
+
+``` graphql
+type Comment {
+    id: ID!
+    owner: String!
+    content: String!
+}
+
+type Blog {
+    id: ID!
+    title: String!
+    content: String!
+    comments: [Comment]!
+    comment(id: ID!): Comment
+}
+
+type Query {
+    getBlogs: [Blog]!
+    getBlog(id: ID!): Blog
+}
+```
+
+Also, consider a GraphQL query for all the information for a specific blog:
+
+``` graphql
+query {
+ getBlog(id: 1) {
+ title
+ content
+ comments {
+ id
+ owner
+ content
+ }
+ }
+}
+```
+
+If you set a resolver for the `comments` field in the `Blog` type, you'll want to understand which blog ID to use. You can get the ID of the blog using `context.GraphQL.Parent["id"]` as shown in the following resolver:
+
+``` xml
+<http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+    <set-url>@($"https://data.contoso.com/api/blog/{context.GraphQL.Parent["id"]}")</set-url>
+ </http-request>
+</http-data-source>
+```
+
+### context.GraphQL.Arguments
+
+The arguments for a parameterized GraphQL query are added to `context.GraphQL.Arguments`. For example, consider the following two queries:
+
+``` graphql
+query($id: Int) {
+ getComment(id: $id) {
+ content
+ }
+}
+
+query {
+ getComment(id: 2) {
+ content
+ }
+}
+```
+
+These queries are two ways of calling the `getComment` resolver. Depending on which form the client uses, it sends one of the following JSON payloads:
+
+``` json
+{
+ "query": "query($id: Int) { getComment(id: $id) { content } }",
+ "variables": { "id": 2 }
+}
+
+{
+ "query": "query { getComment(id: 2) { content } }"
+}
+```
+
+You can define the resolver as follows:
+
+``` xml
+<http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>@($"https://data.contoso.com/api/comment/{context.GraphQL.Arguments["id"]}")</set-url>
+ </http-request>
+</http-data-source>
+```
+
+## Next steps
+
+For more resolver examples, see:
++
+* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+
+* [Samples APIs for Azure API Management](https://github.com/Azure-Samples/api-management-sample-apis)
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
Title: Import a GraphQL API to Azure API Management | Microsoft Docs
+ Title: Add a GraphQL API to Azure API Management | Microsoft Docs
description: Learn how to add an existing GraphQL service as an API in Azure API Management using the Azure portal, Azure CLI, or Azure PowerShell. Manage the API and enable queries to pass through to the GraphQL endpoint. Previously updated : 10/27/2022 Last updated : 04/10/2023
In this article, you'll: > [!div class="checklist"]
-> * Learn more about the benefits of using GraphQL APIs.
-> * Add a GraphQL API to your API Management instance.
+> * Add a pass-through GraphQL API to your API Management instance.
> * Test your GraphQL API.
-> * Learn the limitations of your GraphQL API in API Management.
If you want to import a GraphQL schema and set up field resolvers using REST or SOAP API endpoints, see [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md).
If you want to import a GraphQL schema and set up field resolvers using REST or
1. In the dialog box, select **Full** and complete the required form fields.
- :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API.":::
+ :::image type="content" source="media/graphql-api/create-from-graphql-endpoint.png" alt-text="Screenshot of fields for creating a GraphQL API.":::
| Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |
- | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "Star Wars" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. |
+ | **GraphQL type** | Select **Pass-through GraphQL** to import from an existing GraphQL API endpoint. |
+ | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "swapi" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. |
| **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). | | **Description** | Add a description of your API. |
- | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. |
+ | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. |
| **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |
- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. |
| **Version this API?** | Select to apply a versioning scheme to your GraphQL API. | 1. Select **Create**.
-1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section.
+1. After the API is created, browse or modify the schema on the **Design** tab.
:::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal."::: #### [Azure CLI](#tab/cli)
After importing the API, if needed, you can update the settings by using the [Se
[!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)]
+### Test a subscription
+If your GraphQL API supports a subscription, you can test it in the test console.
+
+1. Ensure that your API allows a WebSocket URL scheme (**WS** or **WSS**) that's appropriate for your API. You can enable this setting on the **Settings** tab.
+1. Set up a subscription query in the query editor, and then select **Connect** to establish a WebSocket connection to the backend service. A sample subscription query appears after these steps.
+
+ :::image type="content" source="media/graphql-api/test-graphql-subscription.png" alt-text="Screenshot of a subscription query in the query editor.":::
+1. Review connection details in the **Subscription** pane.
+
+ :::image type="content" source="media/graphql-api/graphql-websocket-connection.png" alt-text="Screenshot of Websocket connection in the portal.":::
+
+1. Subscribed events appear in the **Subscription** pane. The WebSocket connection is maintained until you disconnect it or you connect to a new WebSocket subscription.
+
+ :::image type="content" source="media/graphql-api/graphql-subscription-event.png" alt-text="Screenshot of GraphQL subscription events in the portal.":::
+
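+For reference, a subscription operation is written like any other GraphQL operation. The following sketch assumes a hypothetical `onCommentAdded` subscription field; substitute a subscription defined in your own schema:
+
+``` graphql
+# Hypothetical subscription field - replace with one defined in your schema
+subscription {
+  onCommentAdded(blogId: 1) {
+    id
+    content
+  }
+}
+```
+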
+## Secure your GraphQL API
+
+Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
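+
+For example, the validation policy can cap query size and depth and restrict access to specific query paths. The following is a minimal sketch with illustrative limits and paths; adjust them to your own schema:
+
+```xml
+<!-- Illustrative limits and query paths; tune for your schema -->
+<validate-graphql-request max-size="10240" max-depth="6">
+    <authorize>
+        <!-- Allow the users query, reject introspection queries -->
+        <rule path="/users" action="allow" />
+        <rule path="/__*" action="reject" />
+    </authorize>
+</validate-graphql-request>
+```
+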
+ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
+
+ Title: Support for GraphQL APIs - Azure API Management
+description: Learn about GraphQL and how Azure API Management helps you manage GraphQL APIs.
+++++ Last updated : 02/26/2023+++
+# Overview of GraphQL APIs in Azure API Management
+
+You can use API Management to manage GraphQL APIs - APIs based on the GraphQL query language. GraphQL provides a complete and understandable description of the data in an API, giving clients the power to efficiently retrieve exactly the data they need. [Learn more about GraphQL](https://graphql.org/learn/)
+
+API Management helps you import, manage, protect, test, publish, and monitor GraphQL APIs. You can choose one of two API models:
++
+|Pass-through GraphQL |Synthetic GraphQL |
+|||
+| ▪️ Pass-through API to existing GraphQL service endpoint<br><br/>▪️ Support for GraphQL queries, mutations, and subscriptions | ▪️ API based on a custom GraphQL schema<br></br>▪️ Support for GraphQL queries, mutations, and subscriptions<br/><br/>▪️ Configure custom resolvers, for example, to HTTP data sources<br/><br/>▪️ Develop GraphQL schemas and GraphQL-based clients while consuming data from legacy APIs |
+
+## Availability
+
+* GraphQL APIs are supported in all API Management service tiers
+* Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway
+* GraphQL subscription support in synthetic GraphQL APIs is currently in preview
+
+## What is GraphQL?
+
+GraphQL is an open-source, industry-standard query language for APIs. Unlike REST-style APIs designed around actions over resources, GraphQL APIs support a broader set of use cases and focus on data types, schemas, and queries.
+
+The GraphQL specification explicitly solves common issues experienced by client web apps that rely on REST APIs:
+
+* It can take a large number of requests to fulfill the data needs for a single page
+* REST APIs often return more data than is needed by the page being rendered
+* The client app needs to poll to get new information
+
+Using a GraphQL API, the client app can specify the data it needs to render a page in a query document that is sent as a single request to a GraphQL service. A client app can also subscribe to data updates pushed from the GraphQL service in real time.
+
+## Schema and operation types
+
+In API Management, add a GraphQL API from a GraphQL schema, either retrieved from a backend GraphQL API endpoint or uploaded by you. A GraphQL schema describes:
+
+* Data object types and fields that clients can request from a GraphQL API
+* Operation types allowed on the data, such as queries
+
+For example, a basic GraphQL schema for user data and a query for all users might look like:
+
+``` graphql
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
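+For example, a client that only needs user names could send the following query against this schema; the response contains exactly the requested fields:
+
+``` graphql
+# Requests only the name field for each user
+query {
+  users {
+    name
+  }
+}
+```
+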
+API Management supports the following operation types in GraphQL schemas. For more information about these operation types, see the [GraphQL specification](https://spec.graphql.org/October2021/#sec-Subscription-Operation-Definitions).
+
+* **Query** - Fetches data, similar to a `GET` operation in REST
+* **Mutation** - Modifies server-side data, similar to a `PUT` or `PATCH` operation in REST
+* **Subscription** - Enables notifying subscribed clients in real time about changes to data on the GraphQL service
+
+ For example, when data is modified via a GraphQL mutation, subscribed clients could be automatically notified about the change.
+
+> [!IMPORTANT]
+> API Management supports subscriptions implemented using the [graphql-ws](https://github.com/enisdenjo/graphql-ws) WebSocket protocol. Queries and mutations aren't supported over WebSocket.
+>
+
+## Resolvers
+
+*Resolvers* take care of mapping the GraphQL schema to backend data, producing the data for each field in an object type. The data source could be an API, a database, or another service. For example, a resolver function would be responsible for returning data for the `users` query in the preceding example.
+
+In API Management, you can create a *custom resolver* to map a field in an object type to a backend data source. You configure resolvers for fields in synthetic GraphQL API schemas, but you can also configure them to override the default field resolvers used by pass-through GraphQL APIs.
+
+API Management currently supports HTTP-based resolvers to return the data for fields in a GraphQL schema. To use an HTTP-based resolver, configure a [`http-data-source`](http-data-source-policy.md) policy that transforms the API request (and optionally the response) into an HTTP request/response.
+
+For example, a resolver for the preceding `users` query might map to a `GET` operation in a backend REST API:
+
+```xml
+<http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://myapi.contoso.com/api/users</set-url>
+ </http-request>
+</http-data-source>
+```
+
+For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md).
+
+## Manage GraphQL APIs
+
+* Secure GraphQL APIs by applying both existing access control policies and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
+* Explore the GraphQL schema and run test queries against the GraphQL APIs in the Azure and developer portals.
++
+## Next steps
+
+- [Import a GraphQL API](graphql-api.md)
+- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md)
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
Title: Import GraphQL schema and set up field resolvers | Microsoft Docs
+ Title: Add a synthetic GraphQL API to Azure API Management | Microsoft Docs
-description: Import a GraphQL schema to API Management and configure a policy to resolve a GraphQL query using an HTTP-based data source.
+description: Add a synthetic GraphQL API by importing a GraphQL schema to API Management and configuring field resolvers that use HTTP-based data sources.
Previously updated : 05/17/2022 Last updated : 02/21/2023
-# Import a GraphQL schema and set up field resolvers
+# Add a synthetic GraphQL API and set up field resolvers
[!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] - In this article, you'll: > [!div class="checklist"] > * Import a GraphQL schema to your API Management instance
-> * Set up a resolver for a GraphQL query using an existing HTTP endpoints
+> * Set up a resolver for a GraphQL query using an existing HTTP endpoint
> * Test your GraphQL API If you want to expose an existing GraphQL endpoint as an API, see [Import a GraphQL API](graphql-api.md).
If you want to expose an existing GraphQL endpoint as an API, see [Import a Grap
## Add a GraphQL schema 1. From the side navigation menu, under the **APIs** section, select **APIs**.
-1. Under **Define a new API**, select the **Synthetic GraphQL** icon.
+1. Under **Define a new API**, select the **GraphQL** icon.
- :::image type="content" source="media/graphql-schema-resolve-api/import-graphql-api.png" alt-text="Screenshot of selecting Synthetic GraphQL icon from list of APIs.":::
+ :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs.":::
1. In the dialog box, select **Full** and complete the required form fields. :::image type="content" source="media/graphql-schema-resolve-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API.":::
- | Field | Description |
+ | Field | Description |
|-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |
- | **Fallback GraphQL endpoint** | For this scenario, optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. |
- | **Upload schema file** | Select to browse and upload a valid GraphQL schema file with the `.graphql` extension. |
- | Description | Add a description of your API. |
- | URL scheme | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. |
+ | **GraphQL type** | Select **Synthetic GraphQL** to import from a GraphQL schema file. |
+ | **Fallback GraphQL endpoint** | Optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. |
+ | **Description** | Add a description of your API. |
+ | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. |
| **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |
- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. |
| **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |+ 1. Select **Create**.
-1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section.
+1. After the API is created, browse or modify the schema on the **Design** tab.
## Configure resolver
-Configure the [set-graphql-resolver](set-graphql-resolver-policy.md) policy to map a field in the schema to an existing HTTP endpoint.
+Configure a resolver to map a field in the schema to an existing HTTP endpoint.
+
+<!-- Add link to resolver how-to article for details -->
Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query.
type User {
``` 1. From the side navigation menu, under the **APIs** section, select **APIs** > your GraphQL API.
-1. On the **Design** tab of your GraphQL API, select **All operations**.
-1. In the **Backend** processing section, select **+ Add policy**.
-1. Configure the `set-graphql-resolver` policy to resolve the *users* query using an HTTP data source.
+1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver.
+ 1. Select a field, and then in the left margin, hover the pointer.
+ 1. Select **+ Add Resolver**
+
+ :::image type="content" source="media/graphql-schema-resolve-api/add-resolver.png" alt-text="Screenshot of adding a GraphQL resolver in the portal.":::
+
+1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections.
- For example, the following `set-graphql-resolver` policy retrieves the *users* field by using a `GET` call on an existing HTTP data source.
+1. In the **Resolver policy** editor, update the `<http-data-source>` element with child elements for your scenario. For example, the following resolver retrieves the *users* field by using a `GET` call on an existing HTTP data source.
+
```xml
- <set-graphql-resolver parent-type="Query" field="users">
<http-data-source> <http-request> <set-method>GET</set-method> <set-url>https://myapi.contoso.com/users</set-url> </http-request> </http-data-source>
- </set-graphql-resolver>
```
-1. To resolve data for other fields in the schema, repeat the preceding step.
-1. Select **Save**.
+
+ :::image type="content" source="media/graphql-schema-resolve-api/configure-resolver-policy.png" alt-text="Screenshot of configuring resolver policy in the portal.":::
+1. Select **Create**.
+1. To resolve data for another field in the schema, repeat the preceding steps to create a resolver.
[!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)]
+## Secure your GraphQL API
+
+Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
++ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps
api-management Http Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md
+
+ Title: Azure API Management policy reference - http-data-source | Microsoft Docs
+description: Reference for the http-data-source resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 03/07/2023+++
+# HTTP data source for a resolver
+
+The `http-data-source` resolver policy configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management.
++
+## Policy statement
+
+```xml
+<http-data-source>
+ <http-request>
+ <get-authorization-context>...get-authorization-context policy configuration...</get-authorization-context>
+ <set-backend-service>...set-backend-service policy configuration...</set-backend-service>
+ <set-method>...set-method policy configuration...</set-method>
+ <set-url>URL</set-url>
+ <include-fragment>...include-fragment policy configuration...</include-fragment>
+ <set-header>...set-header policy configuration...</set-header>
+ <set-body>...set-body policy configuration...</set-body>
+ <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate>
+ </http-request>
+ <backend>
+ <forward-request>...forward-request policy configuration...</forward-request>
+ <http-response>
+ <set-body>...set-body policy configuration...</set-body>
+ <xml-to-json>...xml-to-json policy configuration...</xml-to-json>
+ <find-and-replace>...find-and-replace policy configuration...</find-and-replace>
+ <publish-event>...publish-event policy configuration...</publish-event>
+ <include-fragment>...include-fragment policy configuration...</include-fragment>
+ </http-response>
+</http-data-source>
+```
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. | Yes |
+| backend | Optionally forwards the resolver's HTTP request to a backend service, if specified. | No |
+| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. | No |
+
+### http-request elements
+
+> [!NOTE]
+> Except where noted, each child element may be specified at most once. Specify elements in the order listed.
++
+|Element|Description|Required|
+|-|--|--|
+| [get-authorization-context](get-authorization-context-policy.md) | Gets an authorization context for the resolver's HTTP request. | No |
+| [set-backend-service](set-backend-service-policy.md) | Redirects the resolver's HTTP request to the specified backend. | No |
+| [include-fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. If there are multiple fragments, then add additional `include-fragment` elements. | No |
+| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes |
+| set-url | Sets the URL of the resolver's HTTP request. | Yes |
+| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. If there are multiple headers, then add additional `set-header` elements. | No |
+| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No |
+| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. | No |
+
+### backend element
+
+| Element|Description|Required|
+|-|--|--|
+| [forward-request](forward-request-policy.md) | Forwards the resolver's HTTP request to a configured backend service. | No |
+
+### http-response elements
+
+> [!NOTE]
+> Except where noted, each child element may be specified at most once. Specify elements in the order listed.
+
+|Name|Description|Required|
+|-|--|--|
+| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No |
+| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No |
+| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No |
+| [publish-event](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in the GraphQL API schema. | No |
+| [include-fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. If there are multiple fragments, then add additional `include-fragment` elements. | No |
+
+## Usage
+
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+
+### Usage notes
+
+* This policy is invoked only when resolving a single field in a matching GraphQL query, mutation, or subscription.
+
+## Examples
+
+### Resolver for GraphQL query
+
+The following example resolves a query by making an HTTP `GET` call to a backend data source.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/get/users</set-url>
+ </http-request>
+</http-data-source>
+```
+
+### Resolver for a GraphQL query that returns a list, using a liquid template
+
+The following example uses a liquid template, supported for use in the [set-body](set-body-policy.md) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response.
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<http-data-source>
+ <http-request>
+ <set-method>GET</set-method>
+ <set-url>https://data.contoso.com/users</set-url>
+ </http-request>
+ <http-response>
+ <set-body template="liquid">
+ [
+ {% JSONArrayFor elem in body %}
+ {
+ "name": "{{elem.username}}"
+ }
+ {% endJSONArrayFor %}
+ ]
+ </set-body>
+ </http-response>
+</http-data-source>
+```
+
+### Resolver for GraphQL mutation
+
+The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request takes the `name` argument passed in the GraphQL mutation and uses it to build the request body. The body that's sent looks like the following JSON:
+
+``` json
+{
+ "name": "the-provided-name"
+}
+```
+
+#### Example schema
+
+```
+type Query {
+ users: [User]
+}
+
+type Mutation {
+ makeUser(name: String!): User
+}
+
+type User {
+ id: String!
+ name: String!
+}
+```
+
+#### Example policy
+
+```xml
+<http-data-source>
+ <http-request>
+ <set-method>POST</set-method>
+ <set-url> https://data.contoso.com/user/create </set-url>
+ <set-header name="Content-Type" exists-action="override">
+ <value>application/json</value>
+ </set-header>
+ <set-body>@{
+ var args = context.Request.Body.As<JObject>(true)["arguments"];
+ JObject jsonObject = new JObject();
+            jsonObject.Add("name", args["name"]);
+ return jsonObject.ToString();
+ }</set-body>
+ </http-request>
+</http-data-source>
+```
+
+## Related policies
+
+* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
+
+ Title: Migrate Azure API Management instance to stv2 platform | Microsoft Docs
+description: Follow the steps in this article to migrate your Azure API Management instance from the stv1 compute platform to the stv2 compute platform. Migration steps depend on whether the instance is deployed (injected) in a VNet.
++++ Last updated : 04/17/2023++++
+# Migrate an API Management instance hosted on the stv1 platform to stv2
+
+You can migrate an API Management instance hosted on the `stv1` compute platform to the `stv2` platform. This article provides migration steps for two scenarios, depending on whether or not your API Management instance is currently deployed (injected) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet.
+
+* **Non-VNet-injected API Management instance** - Use the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API
+
+* **VNet-injected API Management instance** - Manually update the VNet configuration settings
+
+For more information about the `stv1` and `stv2` platforms and the benefits of using the `stv2` platform, see [Compute platform for API Management](compute-infrastructure.md).
+
+> [!IMPORTANT]
+> * Migration is a long-running operation. Your instance will experience downtime during the last 10-15 minutes of migration. Plan your migration accordingly.
+> * The VIP address(es) of your API Management instance will change.
+> * Migration to `stv2` is not reversible.
+
+> [!IMPORTANT]
+> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure proper operation of your API Management instance, you should migrate any instance hosted on the `stv1` platform to `stv2` before that date.
++
+## Prerequisites
+
+* An API Management instance hosted on the `stv1` compute platform. To confirm that your instance is hosted on the `stv1` platform, see [How do I know which platform hosts my API Management instance?](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance).
++
+## Scenario 1: Migrate API Management instance, not injected in a VNet
+
+For an API Management instance that's not deployed in a VNet, invoke the Migrate to `stv2` REST API. For example, run the following Azure CLI commands, setting variables where indicated with the name of your API Management instance and the name of the resource group in which it was created.
+
+> [!NOTE]
+> The Migrate to `stv2` REST API is available starting in API Management REST API version `2022-04-01-preview`.
++
+```azurecli
+# Verify currently selected subscription
+az account show
+
+# View other available subscriptions
+az account list --output table
+
+# Set correct subscription, if needed
+az account set --subscription {your subscription ID}
+
+# Update these variables with the name and resource group of your API Management instance
+APIM_NAME={name of your API Management instance}
+RG_NAME={name of your resource group}
+
+# Get resource ID of API Management instance
+APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
+
+# Call REST API to migrate to stv2
+az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2022-08-01"
+```
+
+## Scenario 2: Migrate a network-injected API Management instance
+
+Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration (see the following section). You can also trigger migration to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md).
+
+### Update VNet configuration
+
+Update the configuration of the VNet in each location (region) where the API Management instance is deployed.
+
+#### Prerequisites
+
+* A new subnet in the current virtual network. (Alternatively, set up a subnet in a different virtual network in the same region and subscription as your API Management instance). A network security group must be attached to the subnet.
+
+* A Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance.
+
+For details, see [Prerequisites for network connections](api-management-using-with-vnet.md#prerequisites).
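+
+If you prefer to create the prerequisite resources from the command line, the following Azure CLI sketch shows one way to do it. The resource names, address prefix, and DNS label are placeholders, and the NSG rules that API Management requires aren't shown; adjust everything to your environment.
+
+```azurecli
+# Placeholders: reuse the resource group and region of your API Management instance
+RG_NAME={name of your resource group}
+LOCATION={region of your API Management instance}
+
+# Network security group to attach to the new subnet (add the rules API Management requires)
+az network nsg create --resource-group $RG_NAME --name apim-stv2-nsg --location $LOCATION
+
+# New subnet in the existing virtual network, with the NSG attached
+az network vnet subnet create \
+  --resource-group $RG_NAME \
+  --vnet-name my-apim-vnet \
+  --name apim-stv2-subnet \
+  --address-prefixes 10.0.2.0/24 \
+  --network-security-group apim-stv2-nsg
+
+# Standard SKU public IPv4 address with a DNS label
+az network public-ip create \
+  --resource-group $RG_NAME \
+  --name apim-stv2-pip \
+  --location $LOCATION \
+  --sku Standard \
+  --allocation-method Static \
+  --dns-name my-apim-stv2-ip
+```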
+
+#### Update VNet configuration
+
+To update the existing external or internal VNet configuration:
+
+1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **Network** > **Virtual network**.
+1. Select the network connection in the location you want to update.
+1. Select the virtual network, subnet, and IP address resources you want to configure, and select **Apply**.
+1. Continue configuring VNet settings for the remaining locations of your API Management instance.
+1. In the top navigation bar, select **Save**, then select **Apply network configuration**.
+
+The virtual network configuration is updated, and the instance is migrated to the `stv2` platform.
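+
+If you prefer to script the update instead of using the portal, one option is to patch the service resource directly. The following is a sketch only: it assumes the `2022-08-01` API version and a single location, the subnet and public IP resource IDs are placeholders, and instances deployed to multiple locations also need their `additionalLocations` entries updated.
+
+```azurecli
+# Placeholders: resource IDs of your API Management instance, the new subnet, and the public IP
+APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
+SUBNET_ID={resource ID of the new subnet}
+PUBLIC_IP_ID={resource ID of the Standard SKU public IP}
+
+# Patch the network configuration; applying the change triggers migration to stv2
+az rest --method patch --uri "$APIM_RESOURCE_ID?api-version=2022-08-01" --body "{
+  \"properties\": {
+    \"virtualNetworkConfiguration\": { \"subnetResourceId\": \"$SUBNET_ID\" },
+    \"publicIpAddressId\": \"$PUBLIC_IP_ID\"
+  }
+}"
+```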
+
+## Verify migration
+
+To verify that the migration was successful, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
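+
+To check the value from the command line, the following sketch assumes the `2022-08-01` API version, which returns the `platformVersion` property on the service resource:
+
+```azurecli
+APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
+
+# Returns stv2 after a successful migration
+az rest --method get --uri "$APIM_RESOURCE_ID?api-version=2022-08-01" --query "properties.platformVersion" --output tsv
+```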
+
+## Next steps
+
+* Learn about [stv1 platform retirement](breaking-changes/stv1-platform-retirement-august-2024.md).
+* For instances deployed in a VNet, see the [Virtual network configuration reference](virtual-network-reference.md).
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
+
+ Title: Azure API Management policy reference - publish-event | Microsoft Docs
+description: Reference for the publish-event policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 02/23/2023+++
+# Publish event to GraphQL subscription
+
+The `publish-event` policy publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy using an [http-data-source](http-data-source-policy.md) GraphQL resolver for a related field in the schema for another operation type such as a mutation. At runtime, the event is published to connected GraphQL clients. Learn more about [GraphQL APIs in API Management](graphql-apis-overview.md).
++
+<!--Link to resolver configuration article -->
+
+## Policy statement
+
+```xml
+<http-data-source>
+ <http-request>
+ [...]
+ </http-request>
+ <http-response>
+ [...]
+ <publish-event>
+ <targets>
+ <graphql-subscription id="subscription field" />
+ </targets>
+ </publish-event>
+ </http-response>
+</http-data-source>
+```
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+| targets | One or more subscriptions in the GraphQL schema, specified in `graphql-subscription` subelements, to which the event is published. | Yes |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) `http-response` element in `http-data-source` resolver
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver only
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+
+### Usage notes
+
+* This policy is invoked only when a related GraphQL query or mutation is executed.
+
+## Example
+
+The following example policy definition is configured in a resolver for the `createUser` mutation. It publishes an event to the `onUserCreated` subscription.
+
+### Example schema
+
+```
+type User {
+ id: Int!
+ name: String!
+}
++
+type Mutation {
+ createUser(id: Int!, name: String!): User
+}
+
+type Subscription {
+ onUserCreated: User!
+}
+```
+
+### Example policy
+
+```xml
+<http-data-source>
+ <http-request>
+ <set-method>POST</set-method>
+ <set-url>https://contoso.com/api/user</set-url>
+ <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body>
+ </http-request>
+ <http-response>
+ <publish-event>
+ <targets>
+ <graphql-subscription id="onUserCreated" />
+ </targets>
+ </publish-event>
+ </http-response>
+</http-data-source>
+```
+
+## Related policies
+
+* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+
api-management Set Graphql Resolver Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md
Title: Azure API Management policy reference - set-graphql-resolver | Microsoft Docs
-description: Reference for the set-graphql-resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+description: Reference for the set-graphql-resolver policy in Azure API Management. Provides policy usage, settings, and examples. This policy is retired.
- Previously updated : 12/07/2022+ Last updated : 03/07/2023
-# Set GraphQL resolver
+# Set GraphQL resolver (retired)
-The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API).
+> [!IMPORTANT]
+> * The `set-graphql-resolver` policy is retired. Customers using the `set-graphql-resolver` policy must migrate to the [managed resolvers](configure-graphql-resolver.md) for GraphQL APIs, which provide enhanced functionality.
+> * After you configure a managed resolver for a GraphQL field, the gateway skips the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance.
+The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API).
## Policy statement
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
<authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate> </http-request> <http-response>
- <json-to-xml>...json-to-xml policy configuration...</json-to-xml>
+ <set-body>...set-body policy configuration...</set-body>
<xml-to-json>...xml-to-json policy configuration...</xml-to-json> <find-and-replace>...find-and-replace policy configuration...</find-and-replace> </http-response>
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
|Name|Description|Required| |-|--|--| | http-data-source | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |
-| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes |
-| set-method| Method of the resolver's HTTP request, configured using the [set-method](set-method-policy.md) policy. | Yes |
-| set-url | URL of the resolver's HTTP request. | Yes |
-| set-header | Header set in the resolver's HTTP request, configured using the [set-header](set-header-policy.md) policy. | No |
-| set-body | Body set in the resolver's HTTP request, configured using the [set-body](set-body-policy.md) policy. | No |
-| authentication-certificate | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](authentication-certificate-policy.md) policy. | No |
-| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. |
-| json-to-xml | Transforms the resolver's HTTP response using the [json-to-xml](json-to-xml-policy.md) policy. | No |
-| xml-to-json | Transforms the resolver's HTTP response using the [xml-to-json](xml-to-json-policy.md) policy. | No |
-| find-and-replace | Transforms the resolver's HTTP response using the [find-and-replace](find-and-replace-policy.md) policy. | No |
+| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. | Yes |
+| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. | No |
+
+### http-request elements
+
+> [!NOTE]
+> Except where noted, each child element may be specified at most once. Specify elements in the order listed.
+
+|Element|Description|Required|
+|-|--|--|
+| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes |
+| set-url | Sets the URL of the resolver's HTTP request. | Yes |
+| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. If there are multiple headers, then add additional `set-header` elements. | No |
+| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No |
+| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. | No |
+
+### http-response elements
+
+> [!NOTE]
+> Each child element may be specified at most once. Specify elements in the order listed.
+
+|Name|Description|Required|
+|-|--|--|
+| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No |
+| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No |
+| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No |
## Usage
The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in
* This policy is invoked only when a matching GraphQL query is executed. * The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition. - ## GraphQL context * The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request:
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
If you've already installed Visual Studio 2022:
### [.NET 6.0](#tab/net60) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- A GitHub account [Create an account for free](http://github.com/).
+- A GitHub account. [Create an account for free](https://github.com/).
### [.NET Framework 4.8](#tab/netframework48) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- A GitHub account [Create an account for free](http://github.com/).
+- A GitHub account. [Create an account for free](https://github.com/).
:::zone-end
application-gateway Application Gateway Private Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md
AGIC v1.7 must be used to introduce support for private frontend IP only.
If Application Gateway has a backend target or key vault reference to a private endpoint located in a VNet that is accessible via global VNet peering, traffic is dropped, resulting in an unhealthy status.
+### Network watcher integration
+
+Connection Troubleshoot and NSG Diagnostics will return an error when running check and diagnostic tests.
+ ### Coexisting v2 Application Gateways created prior to enablement of enhanced network control If a subnet shares Application Gateway v2 deployments that were created both prior to and after enablement of the enhanced network control functionality, Network Security Group (NSG) and Route Table functionality is limited to the prior gateway deployment. Application gateways provisioned prior to enablement of the new functionality must either be reprovisioned, or newly created gateways must use a different subnet to enable enhanced network security group and route table features.
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
An Azure PowerShell script is available that does the following:
* [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet. * To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your v1 gateway. * If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2.
-* v2 doesn't support IPv6, so IPv6 enabled v1 gateways aren't migrated. If you run the script, it may not complete.
-* If the v1 gateway has only a private IP address, the script creates a public IP address and a private IP address for the new v2 gateway. v2 gateways currently don't support only private IP addresses.
+* If the v1 gateway has only a private IP address, the script generates both a private and a public IP address for the new v2 gateway. Private IP-only v2 gateways are currently in public preview. Once they become generally available, customers can use the script to migrate a private IP-only v1 gateway to a private IP-only v2 gateway.
* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1. * NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run.
Here are a few scenarios where your current application gateway (Standard) may r
Update your clients to use the IP address(es) associated with the newly created v2 application gateway. We recommend that you don't use IP addresses directly. Consider using the DNS name label (for example, yourgateway.eastus.cloudapp.azure.com) associated with your application gateway that you can CNAME to your own custom DNS zone (for example, contoso.com).
+## ApplicationGateway V2 pricing
+
+The pricing models differ for the Application Gateway v1 and v2 SKUs. Review the [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) page before migrating from v1 to v2.
+ ## Common questions ### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
Previously updated : 04/03/2023 Last updated : 04/19/2023
This section describes features and limitations of the v2 SKU that differ from t
|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.| |FIPS mode|Currently not supported.| |Private frontend configuration only mode|Currently in public preview [Learn more](application-gateway-private-deployment.md).|
-|Azure Network Watcher integration|Not supported.|
|Microsoft Defender for Cloud integration|Not yet available. ## Migrate from v1 to v2
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Previously updated : 10/20/2022 Last updated : 04/17/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Here, you'll learn how to create a Form Recognizer resource in the Azure portal.
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. In this article, learn how to create a Form Recognizer resource in the Azure portal.
## Visit the Azure portal
Let's get started:
1. Next, you're going to fill out the **Create Form Recognizer** fields with the following values: * **Subscription**. Select your current subscription.
- * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group.
+ * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that contains your resource. You can create a new group or add it to a pre-existing group.
* **Region**. Select your local region. * **Name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameFormRecognizer*. * **Pricing tier**. The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Let's get started:
1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
-1. Copy the key and endpoint values from your Form Recognizer resource paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Form Recognizer API.
1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button, on the left navigation bar, and retrieve them there.
applied-ai-services Project Share Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/project-share-custom-classifier.md
+
+ Title: "Share custom model projects using Form Recognizer Studio"
+
+description: Learn how to share custom model projects using Form Recognizer Studio.
+++++ Last updated : 04/17/2023+
+monikerRange: 'form-recog-3.0.0'
+recommendations: false
++
+# Share custom model projects using Form Recognizer Studio
+
+Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. Form Recognizer Studio supports project sharing for custom extraction models. Projects can be shared easily via a project token, and the same token can be used to import a project.
+
+## Prerequisite
+
+To share and import your custom extraction projects seamlessly, both users (the user who shares and the user who imports) need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). Both users also need to configure permissions to grant access to the Form Recognizer and storage resources.
+
+## Granted access and permissions
+
+ > [!IMPORTANT]
+ > Custom model projects can be imported only if you have access to the storage account that's associated with the project you're trying to import. Check your storage account permissions before you share or import projects with others.
+
+### Managed identity
+
+Enable a system-assigned managed identity for your Form Recognizer resource. A system-assigned managed identity is enabled directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+For more information, *see* [Enable a system-assigned managed identity](../managed-identities.md#enable-a-system-assigned-managed-identity).
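+
+If you prefer the Azure CLI over the portal, the following is a minimal sketch; the resource and resource group names are placeholders, and it assumes a recent Azure CLI version:
+
+```azurecli
+# Enable a system-assigned managed identity on the Form Recognizer resource
+az cognitiveservices account identity assign \
+  --name my-form-recognizer \
+  --resource-group my-resource-group
+```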
+
+### Role-based access control (RBAC)
+
+Grant your Form Recognizer managed identity access to your storage account using Azure role-based access control (Azure RBAC). The [Storage Blob Data Contributor](../../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role grants read, write, and delete permissions for Azure Storage containers and blobs.
+
+For more information, *see* [Grant access to your storage account](../managed-identities.md#grant-access-to-your-storage-account).
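+
+As a CLI alternative, the following sketch assigns the role to the resource's system-assigned identity. The names are placeholders, and the identity must already be enabled:
+
+```azurecli
+# Object ID of the Form Recognizer resource's system-assigned identity
+PRINCIPAL_ID=$(az cognitiveservices account show \
+  --name my-form-recognizer \
+  --resource-group my-resource-group \
+  --query identity.principalId --output tsv)
+
+# Resource ID of the storage account that holds the project's training data
+STORAGE_ID=$(az storage account show \
+  --name mystorageaccount \
+  --resource-group my-resource-group \
+  --query id --output tsv)
+
+# Grant the identity Storage Blob Data Contributor on the storage account
+az role assignment create \
+  --assignee-object-id $PRINCIPAL_ID \
+  --assignee-principal-type ServicePrincipal \
+  --role "Storage Blob Data Contributor" \
+  --scope $STORAGE_ID
+```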
+
+### Configure cross origin resource sharing (CORS)
+
+CORS needs to be configured in your Azure storage account for it to be accessible to the Form Recognizer Studio. You can update the CORS setting in the Azure portal.
+
+For more information, *see* [Configure CORS](../quickstarts/try-form-recognizer-studio.md#configure-cors).
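+
+If you'd rather script the CORS rule, the following Azure CLI sketch shows one possible configuration. The account name is a placeholder, and the origin, methods, and max age shown are assumptions; use the values described in the linked article:
+
+```azurecli
+# Allow the Form Recognizer Studio origin on the blob service
+az storage cors add \
+  --account-name mystorageaccount \
+  --services b \
+  --origins https://formrecognizer.appliedai.azure.com \
+  --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
+  --allowed-headers "*" \
+  --exposed-headers "*" \
+  --max-age 120
+```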
+
+### Virtual networks and firewalls
+
+If your storage account has virtual network or firewall restrictions enabled, the project can't be shared. To share the project, ensure that those restrictions are turned off.
+
+A workaround is to manually create a project using the same settings as the project being shared.
+
+### User sharing requirements
+
+Users sharing a project need permission to call [**`ListAccountSAS`**](/rest/api/storagerp/storage-accounts/list-account-sas) to configure the storage account CORS settings, and [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token with *read*, *write*, and *list* access to the container's files, in addition to blob storage data *update* permissions.
+
+### User importing requirements
+
+Users who import a project need permission to call [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token with *read*, *write*, and *list* access to the container's files, in addition to blob storage data *update* permissions.
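+
+For illustration only, the following sketch generates a container-level SAS token with *read*, *write*, and *list* permissions by using the Azure CLI. The account name, container name, and expiry are placeholders:
+
+```azurecli
+# Generate a SAS token scoped to the container that holds the project's files
+az storage container generate-sas \
+  --account-name mystorageaccount \
+  --name my-project-container \
+  --permissions rwl \
+  --expiry 2024-01-01T00:00:00Z \
+  --https-only \
+  --output tsv
+```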
+
+## Share a custom extraction model with Form Recognizer studio
+
+Follow these steps to share your project using Form Recognizer studio:
+
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
+
+ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot showing how to select a custom extraction model in the Studio.":::
+
+1. On the custom extraction models page, select the desired model to share and then select the **Share** button.
+
+ :::image type="content" source="../media/how-to/studio-project-share.png" alt-text="Screenshot showing how to select the desired model and select the share option.":::
+
+1. On the share project dialog, copy the project token for the selected project.
++
+## Import custom extraction model with Form Recognizer studio
+
+Follow these steps to import a project using Form Recognizer studio.
+
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
+
+ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot: Select custom extraction model in the Studio.":::
+
+1. On the custom extraction models page, select the **Import** button.
+
+ :::image type="content" source="../media/how-to/studio-project-import.png" alt-text="Screenshot: Select import within custom extraction model page.":::
+
+1. On the import project dialog, paste the project token shared with you and select import.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Back up and recover models](../disaster-recovery.md)
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
-+ Last updated 03/03/2023
applied-ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md
-+ Last updated 10/07/2022 monikerRange: 'form-recog-3.0.0'
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
-+ Last updated 03/15/2023 monikerRange: '>=form-recog-2.1.0'
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Replaces repetitive, day-to-day operational tasks with an exception-only managem
### Azure Policy based Guest Configuration
-Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/machine-configuration-policy-effects.md).
+Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/remediation-options.md).
You can check on what is installed in:
Azure Policy based Guest configuration is the next iteration of Azure Automation
| **Scenarios** | **Users** | | - | - |
- | Obtain compliance data that may include: The configuration of the operating system ΓÇô files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/machine-configuration-policy-effects.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. |
+ | Obtain compliance data that may include: The configuration of the operating system - files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/remediation-options.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. |
### Azure Automation - Process Automation
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md
Before installing the Azure AD modules on your computer:
3. Run Windows PowerShell as an administrator to create an elevated Windows PowerShell command prompt.
-4. Deploy Azure Active Directory from [MSOnline 1.0](http://www.powershellgallery.com/packages/MSOnline/1.0).
+4. Deploy Azure Active Directory from [MSOnline 1.0](https://www.powershellgallery.com/packages/MSOnline/1.0).
5. If you're prompted to install the NuGet provider, type Y and press ENTER.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues" Previously updated : 03/28/2023 Last updated : 04/18/2023 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps."
az k8s-extension create --resource-group <resource-group> --cluster-name <cluste
### Flux v2 - `microsoft.flux` extension installation CPU and memory limits
-The controllers installed in your Kubernetes cluster with the Microsoft Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes.
+The controllers installed in your Kubernetes cluster with the Microsoft Flux extension require CPU and memory resources to properly schedule on Kubernetes cluster nodes. This table shows the minimum memory and CPU resources that may be requested, along with the maximum limits for potential CPU and memory resource requirements.
-| Container Name | CPU limit | Memory limit |
+| Container Name | Minimum CPU | Minimum memory | Maximum CPU | Maximum memory |
| -- | -- | -- | -- | -- |
-| fluxconfig-agent | 50 m | 150 Mi |
-| fluxconfig-controller | 100 m | 150 Mi |
-| fluent-bit | 20 m | 150 Mi |
-| helm-controller | 1000 m | 1 Gi |
-| source-controller | 1000 m | 1 Gi |
-| kustomize-controller | 1000 m | 1 i |
-| notification-controller | 1000 m | 1 Gi |
-| image-automation-controller | 1000 m | 1 Gi |
-| image-reflector-controller | 1000 m | 1 Gi |
+| fluxconfig-agent | 5 m | 30 Mi | 50 m | 150 Mi |
+| fluxconfig-controller | 5 m | 30 Mi | 100 m | 150 Mi |
+| fluent-bit | 5 m | 30 Mi | 20 m | 150 Mi |
+| helm-controller | 100 m | 64 Mi | 1000 m | 1 Gi |
+| source-controller | 50 m | 64 Mi | 1000 m | 1 Gi |
+| kustomize-controller | 100 m | 64 Mi | 1000 m | 1 Gi |
+| notification-controller | 100 m | 64 Mi | 1000 m | 1 Gi |
+| image-automation-controller | 100 m | 64 Mi | 1000 m | 1 Gi |
+| image-reflector-controller | 100 m | 64 Mi | 1000 m | 1 Gi |
If you've enabled a custom or built-in Azure Gatekeeper Policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, ensure that either the resource limits on the policy are greater than the limits shown above or that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment.
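
For example, a policy assignment along the following lines excludes `flux-system` while keeping limits that accommodate the table above. This is a sketch only: the assignment scope and policy definition ID are placeholders, and the parameter names assume the built-in definition's current schema.

```azurecli
# Illustrative only: assign the built-in container limits policy with flux-system excluded
az policy assignment create \
  --name "k8s-container-resource-limits" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --policy "<built-in policy definition ID>" \
  --params '{
    "cpuLimit": { "value": "1000m" },
    "memoryLimit": { "value": "1Gi" },
    "excludedNamespaces": { "value": ["kube-system", "gatekeeper-system", "flux-system"] }
  }'
```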
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Download for [Windows](https://download.microsoft.com/download/1/c/4/1c4a0bde-0b
### Fixed -- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/machine-configuration-policy-effects.md).
+- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/remediation-options.md).
- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours. ## Version 1.9 - July 2021
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 04/07/2023 Last updated : 04/19/2023
The proxy bypass feature does not require you to enter specific URLs to bypass.
| | | | `AAD` | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` | | `ARM` | `management.azure.com` |
-| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
+| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com` |
To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command:
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
In this release, we support the following VM extensions on Windows and Linux mac
To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md). > [!NOTE]
-> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/machine-configuration-azure-automation-migration.md) or using the Custom Script Extension to manage the post-deployment configuration of your server.
+> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/migrate-from-azure-automation.md) or using the Custom Script Extension to manage the post-deployment configuration of your server.
Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). This support is enabled starting with the Connected Machine agent version **1.8.21197.005**. For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
azure-functions Durable Functions Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-troubleshooting-guide.md
+
+ Title: Durable Functions Troubleshooting Guide - Azure Functions
+description: Guide to troubleshoot common issues with durable functions.
++ Last updated : 03/10/2023+++
+# Durable Functions Troubleshooting Guide
+
+Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you build serverless orchestrations using ordinary code. For more information on Durable Functions, see the [Durable Functions overview](./durable-functions-overview.md).
+
+This article provides a guide for troubleshooting common scenarios in Durable Functions apps.
+
+> [!NOTE]
+> Microsoft support engineers are available to assist in diagnosing issues with your application. If you're not able to diagnose your problem using this guide, you can file a support ticket by accessing the **New Support request** blade in the **Support + troubleshooting** section of your function app page in the Azure portal.
+
+![Screenshot of support request page in Azure Portal.](./media/durable-functions-troubleshooting-guide/durable-function-support-request.png)
+
+> [!TIP]
+> When debugging and diagnosing issues, it's recommended that you start by ensuring your app is using the latest Durable Functions extension version. Most of the time, using the latest version mitigates known issues already reported by other users. Please read the [Upgrade Durable Functions extension version](./durable-functions-extension-upgrade.md) article for instructions on how to upgrade your extension version.
+
+The **Diagnose and solve problems** tab in the Azure portal is a useful resource to monitor and diagnose possible issues related to your application. It also supplies potential solutions to your problems based on the diagnosis. See [Azure Function app diagnostics](./function-app-diagnostics.md) for more details.
+
+If the resources above didn't solve your problem, the following sections provide advice for specific application symptoms:
+
+## Orchestration is stuck in the `Pending` state
+
+When you start an orchestration, a "start" message gets written to an internal queue managed by the Durable extension, and the status of the orchestration gets set to "Pending". After the orchestration message gets picked up and successfully processed by an available app instance, the status will transition to "Running" (or to some other non-"Pending" state).
+
+Use the following steps to troubleshoot orchestration instances that remain stuck indefinitely in the "Pending" state.
+
+* Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the [Trace Errors/Warnings section](#trace-errorswarnings).
+
+* Check the Azure Storage control queues assigned to the stuck orchestrator to see if its "start message" is still there. For more information on control queues, see the [Azure Storage provider control queue documentation](durable-functions-azure-storage-provider.md#control-queues). A CLI sketch for peeking at control queue messages follows this list.
+
+* Change your app's [platform configuration](../../app-service/configure-common.md#configure-general-settings) version to "64 Bit".
+  Sometimes orchestrations don't start because the app is running out of memory. Switching to a 64-bit process allows the app to allocate more total memory. This only applies to App Service Basic, Standard, Premium, and Elastic Premium plans. Free or Consumption plans **do not** support 64-bit processes.
+
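+A minimal sketch for peeking at control queue messages with the Azure CLI follows. The storage account name and task hub name are placeholders; with the default of four partitions, the Azure Storage provider names the control queues `<taskhub>-control-00` through `<taskhub>-control-03`.
+
+```azurecli
+# Peek messages without dequeuing them; look for the stuck instance's start message
+az storage message peek \
+  --account-name mystorageaccount \
+  --queue-name mytaskhub-control-00 \
+  --num-messages 10 \
+  --auth-mode login
+```
+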
+## Orchestration starts after a long delay
+
+Normally, orchestrations start within a few seconds after they're scheduled. However, there are certain cases where orchestrations may take longer to start. Use the following steps to troubleshoot when orchestrations take more than a few seconds to start executing.
+
+* Refer to the [documentation on delayed orchestrations in Azure Storage](./durable-functions-azure-storage-provider.md#orchestration-start-delays) to check whether the delay may be caused by known limitations.
+
+* Check the Durable Task Framework traces for warnings or errors with the impacted orchestration instance ID. A sample query can be found in [Trace Errors/Warnings section](#trace-errorswarnings).
+
+## Orchestration doesn't complete / is stuck in the `Running` state
+
+If an orchestration remains in the "Running" state for a long period of time, it usually means that it's waiting for a long-running task that is scheduled to complete. For example, it could be waiting for a durable timer task, an activity task, or an external event task to be completed. However, if you observe that scheduled tasks have completed successfully but the orchestration still isn't making progress, then there might be a problem preventing the orchestration from proceeding to its next task. We often refer to orchestrations in this state as "stuck orchestrations".
+
+Use the following steps to troubleshoot stuck orchestrations:
+
+* Try restarting the function app. This step can help if the orchestration gets stuck due to a transient bug or deadlock in either the app or the extension code.
+
+* Check the Azure Storage account control queues to see if any queues are growing continuously. [This Azure Storage messaging KQL query](./durable-functions-troubleshooting-guide.md#azure-storage-messaging) can help identify problems with dequeuing orchestration messages. If the problem impacts only a single control queue, it might indicate a problem that exists only on a specific app instance, in which case scaling up or down to move off the unhealthy VM instance could help.
+
+* Use the Application Insights query in the [Azure Storage Messaging section](./durable-functions-troubleshooting-guide.md#azure-storage-messaging) to filter on that queue name as the Partition ID and look for any problems related to that control queue partition.
+
+* Check the guidance in [Durable Functions Best Practice and Diagnostic Tools](./durable-functions-best-practice-reference.md). Some problems may be caused by known Durable Functions anti-patterns.
+
+* Check the [Durable Functions Versioning documentation](durable-functions-versioning.md). Some problems may be caused by breaking changes to in-flight orchestration instances.
+
+## Orchestration runs slowly
+
+Heavy data processing, internal errors, and insufficient compute resources can cause orchestrations to execute slower than normal. Use the following steps to troubleshoot orchestrations that are taking longer than expected to execute:
+
+* Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the [Trace Errors/Warnings section](#trace-errorswarnings).
+
+* If your app utilizes the .NET in-process model, consider enabling [extended sessions](./durable-functions-azure-storage-provider.md#extended-sessions).
+ Extended sessions can minimize history loads, which can slow down processing.
+
+* Check for performance and scalability bottlenecks.
+ Application performance depends on many factors. For example, high CPU usage, or large memory consumption can result in delays. Read [Performance and scale in Durable Functions](./durable-functions-perf-and-scale.md) for detailed guidance.
+
+## Sample Queries
+
+This section shows how to troubleshoot issues by writing custom [KQL queries](/azure/data-explorer/kusto/query/) in the Azure Application Insights instance configured for your Azure Functions app.
+
+### Azure Storage Messaging
+
+When using the default Azure Storage provider, all Durable Functions behavior is driven by Azure Storage queue messages and all state related to an orchestration is stored in table storage and blob storage. When Durable Task Framework tracing is enabled, all Azure Storage interactions are logged to Application Insights, and this data is critically important for debugging execution and performance problems.
+
+Starting in v2.3.0 of the Durable Functions extension, you can have these Durable Task Framework logs published to your Application Insights instance by updating your logging configuration in the host.json file. See the [Durable Task Framework logging article](./durable-functions-diagnostics.md) for information and instructions on how to do this.
+
+The following query is for inspecting end-to-end Azure Storage interactions for a specific orchestration instance. Edit `start` and `orchestrationInstanceID` to filter by time range and instance ID.
+
+```kusto
+let start = datetime(XXXX-XX-XXTXX:XX:XX); // edit this
+let orchestrationInstanceID = "XXXXXXX"; //edit this
+traces
+| where timestamp > start and timestamp < start + 1h
+| where customDimensions.Category == "DurableTask.AzureStorage"
+| extend taskName = customDimensions["EventName"]
+| extend eventType = customDimensions["prop__EventType"]
+| extend extendedSession = customDimensions["prop__IsExtendedSession"]
+| extend account = customDimensions["prop__Account"]
+| extend details = customDimensions["prop__Details"]
+| extend instanceId = customDimensions["prop__InstanceId"]
+| extend messageId = customDimensions["prop__MessageId"]
+| extend executionId = customDimensions["prop__ExecutionId"]
+| extend age = customDimensions["prop__Age"]
+| extend latencyMs = customDimensions["prop__LatencyMs"]
+| extend dequeueCount = customDimensions["prop__DequeueCount"]
+| extend partitionId = customDimensions["prop__PartitionId"]
+| extend eventCount = customDimensions["prop__TotalEventCount"]
+| extend taskHub = customDimensions["prop__TaskHub"]
+| extend pid = customDimensions["ProcessId"]
+| extend appName = cloud_RoleName
+| extend newEvents = customDimensions["prop__NewEvents"]
+| where instanceId == orchestrationInstanceID
+| sort by timestamp asc
+| project timestamp, appName, severityLevel, pid, taskName, eventType, message, details, messageId, partitionId, instanceId, executionId, age, latencyMs, dequeueCount, eventCount, newEvents, taskHub, account, extendedSession, sdkVersion
+```
+
+### Trace Errors/Warnings
+
+The following query searches for errors and warnings for a given orchestration instance. You'll need to provide a value for `orchestrationInstanceID`.
+
+```kusto
+let orchestrationInstanceID = "XXXXXX"; // edit this
+let start = datetime(XXXX-XX-XXTXX:XX:XX);
+traces
+| where timestamp > start and timestamp < start + 1h
+| extend instanceId = iif(isnull(customDimensions["prop__InstanceId"] ) , customDimensions["prop__instanceId"], customDimensions["prop__InstanceId"] )
+| extend logLevel = customDimensions["LogLevel"]
+| extend functionName = customDimensions["prop__functionName"]
+| extend status = customDimensions["prop__status"]
+| extend details = customDimensions["prop__Details"]
+| extend reason = customDimensions["prop__reason"]
+| where severityLevel > 1 // to see all logs of severity level "Information" or greater.
+| where instanceId == orchestrationInstanceID
+| sort by timestamp asc
+```
+
+### Control queue / Partition ID logs
+
+The following query searches for all activity associated with an instanceId's control queue. You'll need to provide the value for the instanceID in `orchestrationInstanceID` and the query's start time in `start`.
+
+```kusto
+let orchestrationInstanceID = "XXXXXX"; // edit this
+let start = datetime(XXXX-XX-XXTXX:XX:XX); // edit this
+traces // determine control queue for this orchestrator
+| where timestamp > start and timestamp < start + 1h
+| extend instanceId = customDimensions["prop__TargetInstanceId"]
+| extend partitionId = tostring(customDimensions["prop__PartitionId"])
+| where partitionId contains "control"
+| where instanceId == orchestrationInstanceID
+| join kind = rightsemi(
+traces
+| where timestamp > start and timestamp < start + 1h
+| where customDimensions.Category == "DurableTask.AzureStorage"
+| extend taskName = customDimensions["EventName"]
+| extend eventType = customDimensions["prop__EventType"]
+| extend extendedSession = customDimensions["prop__IsExtendedSession"]
+| extend account = customDimensions["prop__Account"]
+| extend details = customDimensions["prop__Details"]
+| extend instanceId = customDimensions["prop__InstanceId"]
+| extend messageId = customDimensions["prop__MessageId"]
+| extend executionId = customDimensions["prop__ExecutionId"]
+| extend age = customDimensions["prop__Age"]
+| extend latencyMs = customDimensions["prop__LatencyMs"]
+| extend dequeueCount = customDimensions["prop__DequeueCount"]
+| extend partitionId = tostring(customDimensions["prop__PartitionId"])
+| extend eventCount = customDimensions["prop__TotalEventCount"]
+| extend taskHub = customDimensions["prop__TaskHub"]
+| extend pid = customDimensions["ProcessId"]
+| extend appName = cloud_RoleName
+| extend newEvents = customDimensions["prop__NewEvents"]
+) on partitionId
+| sort by timestamp asc
+| project timestamp, appName, severityLevel, pid, taskName, eventType, message, details, messageId, partitionId, instanceId, executionId, age, latencyMs, dequeueCount, eventCount, newEvents, taskHub, account, extendedSession, sdkVersion
+```
+
+### Application Insights column reference
+
+Below is a list of the columns projected by the queries above and their respective descriptions.
+
+|Column |Description |
+|---|---|
+|pid|Process ID of the function app instance. This is useful for determining if the process was recycled while an orchestration was executing.|
+|taskName|The name of the event being logged.|
+|eventType|The type of message, which usually represents work done by an orchestrator. A full list of possible values and their descriptions is available [here](https://github.com/Azure/durabletask/blob/main/src/DurableTask.Core/History/EventType.cs).|
+|extendedSession|Boolean value indicating whether [extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) are enabled.|
+|account|The storage account used by the app.|
+|details|Additional information about a particular event, if available.|
+|instanceId|The ID for a given orchestration or entity instance.|
+|messageId|The unique Azure Storage ID for a given queue message. This value most commonly appears in ReceivedMessage, ProcessingMessage, and DeletingMessage trace events. Note that it's NOT present in SendingMessage events because the message ID is generated by Azure Storage _after_ the message is sent.|
+|executionId|The ID of the orchestrator execution, which changes whenever `continue-as-new` is invoked.|
+|age|The number of milliseconds since a message was enqueued. Large numbers often indicate performance problems. An exception is the TimerFired message type, which may have a large Age value depending on the timer's duration.|
+|latencyMs|The number of milliseconds taken by some storage operation.|
+|dequeueCount|The number of times a message has been dequeued. Under normal circumstances, this value is always 1. If it's more than 1, there might be a problem.|
+|partitionId|The name of the queue associated with this log.|
+|eventCount|The number of history events involved in the current action.|
+|taskHub|The name of your [task hub](./durable-functions-task-hubs.md).|
+|newEvents|A comma-separated list of history events that are being written to the History table in storage.|
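
As a quick illustration of how these columns can be used, the following sketch (built on the same `customDimensions` properties as the queries above) surfaces messages that were dequeued more than once, which the `dequeueCount` description flags as a potential problem:

```kusto
traces
| where customDimensions.Category == "DurableTask.AzureStorage"
| extend dequeueCount = toint(customDimensions["prop__DequeueCount"])
| extend instanceId = customDimensions["prop__InstanceId"]
| where dequeueCount > 1
| project timestamp, message, instanceId, dequeueCount
| sort by timestamp asc
```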
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
The following are examples of custom data:
This article uses the [Postman] application, but you may use a different API development environment.
-We'll use the Azure Maps [Data service] to store and render overlays.
+Use the Azure Maps [Data service] to store and render overlays.
## Render pushpins with labels and a custom image
To get a static image with custom pins and labels:
> [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier.
-In this section, we'll upload path and pin data to Azure Map data storage.
+In this section, you upload path and pin data to Azure Maps data storage.
To upload pins and path data:
To render a polygon with color and opacity:
> [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier.
-You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 will make the pins larger, and values smaller than 1 will make them smaller. For more information about style modifiers, see [static image service path parameters].
+You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 make the pins larger, and values smaller than 1 make them smaller. For more information about style modifiers, see [static image service path parameters].
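
For example, a `pins` parameter that scales the default pushpins to 1.5 times the standard size might look like the following fragment. This is only an illustration: the `sc1.5` value is assumed, while the label and color modifiers match the ones used later in this section.

```HTTP
pins=default|sc1.5|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462
```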
To render a circle and pushpins with custom labels:
To render a circle and pushpins with custom labels:
:::image type="content" source="./media/how-to-render-custom-data/circle-custom-pins.png" alt-text="Render a circle with custom pushpins.":::
-8. Now we'll change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), you'll see that the current color is `#002D62`. To change the color to `#41d42a`, we'll replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. The request looks like the following URL:
+8. Next, change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), notice that the current color is `#002D62`. To change the color to `#41d42a`, replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. The request looks like the following URL:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co41D42A||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
Similarly, you can change, add, and remove other style modifiers.
## Next steps -- Explore the [Azure Maps Get Map Image API] documentation.-- To learn more about Azure Maps Data service, see the [service documentation].
+> [!div class="nextstepaction"]
+> [Render - Get Map Image]
+
+> [!div class="nextstepaction"]
+> [Data service]
+ [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Postman]: https://www.postman.com/
+[Render - Get Map Image]: /rest/api/maps/render/getmapimage
[Data service]: /rest/api/maps/data
-[static image service]: /rest/api/maps/render/getmapimage
[Data Upload]: /rest/api/maps/data-v2/upload
-[Render service]: /rest/api/maps/render/get-map-image
[path parameter]: /rest/api/maps/render/getmapimage#uri-parameters
-[Azure Maps Get Map Image API]: /rest/api/maps/render/getmapimage
-[service documentation]: /rest/api/maps/data
+[Postman]: https://www.postman.com/
+[Render service]: /rest/api/maps/render/get-map-image
[static image service path parameters]: /rest/api/maps/render/getmapimage#uri-parameters
+[static image service]: /rest/api/maps/render/getmapimage
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Secure Webapp Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md
Title: How to secure a web application with interactive single-sign-in
+ Title: How to secure a web application with interactive single sign-in
-description: How to configure a web application which supports Azure AD single-sign-on with Azure Maps Web SDK using OpenID Connect protocol.
+description: How to configure a web application that supports Azure AD single sign-in with Azure Maps Web SDK using OpenID Connect protocol.
Last updated 06/12/2020
# Secure a web application with user sign-in
-The following guide pertains to an application which is hosted on web servers, maintains multiple business scenarios, and deploys to web servers. The application has the requirement to provide protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
+The following guide pertains to an application that is hosted on web servers, supports multiple business scenarios, and deploys to web servers. The application must provide protected resources that are secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)] ## Create an application registration in Azure AD
-You must create the web application in Azure AD for users to sign in. This web application will then delegate user access to Azure Maps REST APIs.
+You must create the web application in Azure AD for users to sign in. This web application then delegates user access to Azure Maps REST APIs.
1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing App registration." lightbox="./media/how-to-manage-authentication/app-registration.png":::
-2. Enter a **Name**, choose a **Support account type**, provide a redirect URI which will represent the url which Azure AD will issue the token and is the url where the map control is hosted. For more details please see Azure AD [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Azure AD scenario.
+2. Enter a **Name**, choose a **Support account type**, and provide a redirect URI that represents the URL to which Azure AD issues the token; this is also the URL where the map control is hosted. For more information, see Azure AD [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Azure AD scenario.
-3. Once the application registration is complete, Confirm that application sign-in works for users. Once sign-in works, then the application can be granted delegated access to Azure Maps REST APIs.
-
-4. To assign delegated API permissions to Azure Maps, go to the application. Then select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
+3. Once the application registration is complete, confirm that application sign-in works for users. Once sign-in works, the application can be granted delegated access to Azure Maps REST APIs.
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+4. To assign delegated API permissions to Azure Maps, go to the application and select **API permissions** > **Add a permission**. Select **Azure Maps** in the **APIs my organization uses** list.
+
+ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="A screenshot showing add app API permissions." lightbox="./media/how-to-manage-authentication/app-permissions.png":::
5. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="A screenshot showing select app API permissions." lightbox="./media/how-to-manage-authentication/select-app-permissions.png":::
+
+6. Enable the web application to call Azure Maps REST APIs by configuring the app registration with an application secret. For detailed steps, see [A web app that calls web APIs: App registration](../active-directory/develop/scenario-web-app-call-api-app-registration.md). A secret is required to authenticate to Azure AD on behalf of the user. The app registration certificate or secret should be stored in a secure store from which the web application can retrieve it when authenticating to Azure AD.
+
+ * This step may be skipped if the application already has an Azure AD app registration and secret configured.
+
+ > [!TIP]
+ > If the application is hosted in an Azure environment, we recommend using [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and an Azure Key Vault instance to access secrets by [acquiring an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for accessing Azure Key Vault secrets or certificates. To connect to Azure Key Vault to retrieve secrets, see [tutorial to connect through managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md).
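
A minimal Node.js sketch of that approach follows. It assumes the `@azure/identity` and `@azure/keyvault-secrets` packages; the vault URL and secret name are placeholders only, not values from this article.

```javascript
// Minimal sketch: read the app registration secret from Azure Key Vault using
// a managed identity. The vault URL and secret name are hypothetical.
const { DefaultAzureCredential } = require("@azure/identity");
const { SecretClient } = require("@azure/keyvault-secrets");

const credential = new DefaultAzureCredential(); // uses the managed identity when hosted in Azure
const secretClient = new SecretClient("https://<your-key-vault-name>.vault.azure.net", credential);

async function getAppRegistrationSecret() {
  const secret = await secretClient.getSecret("AzureMapsAppSecret"); // hypothetical secret name
  return secret.value;
}
```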
-6. Enable the web application to call Azure Maps REST APIs by configuring the app registration with an application secret, For detailed steps, see [A web app that calls web APIs: App registration](../active-directory/develop/scenario-web-app-call-api-app-registration.md). A secret is required to authenticate to Azure AD on-behalf of the user. The app registration certificate or secret should be stored in a secure store for the web application to retrieve to authenticate to Azure AD.
-
- * If the application already has configured an Azure AD app registration and a secret this step may be skipped.
+7. Implement a secure token endpoint for the Azure Maps Web SDK to access a token.
-> [!Tip]
-> If the application is hosted in an Azure environment, we recommend using [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and an Azure Key Vault instance to access secrets by [acquiring an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for accessing Azure Key Vault secrets or certificates. To connect to Azure Key Vault to retrieve secrets, see [tutorial to connect through managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md).
-
-7. Implement a secure token endpoint for the Azure Maps Web SDK to access a token.
-
- * For a sample token controller, see [Azure Maps Azure AD Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/blob/master/src/OpenIdConnect/AzureMapsOpenIdConnectv1/AzureMapsOpenIdConnect/Controllers/TokenController.cs).
+ * For a sample token controller, see [Azure Maps Azure AD Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/blob/master/src/OpenIdConnect/AzureMapsOpenIdConnectv1/AzureMapsOpenIdConnect/Controllers/TokenController.cs).
* For a non-AspNetCore implementation or other, see [Acquire token for the app](../active-directory/develop/scenario-web-app-call-api-acquire-token.md) from Azure AD documentation. * The secured token endpoint is responsible to return an access token for the authenticated and authorized user to call Azure Maps REST APIs.
-8. Configure Azure role-based access control (Azure RBAC) for users or groups. See [grant role-based access for users](#grant-role-based-access-for-users-to-azure-maps).
+8. To configure Azure role-based access control (Azure RBAC) for users or groups, see [grant role-based access for users](#grant-role-based-access-for-users-to-azure-maps).
-9. Configure the web application page with the Azure Maps Web SDK to access the secure token endpoint.
+9. Configure the web application page with the Azure Maps Web SDK to access the secure token endpoint.
```javascript var map = new atlas.Map("map", {
Find the API usage metrics for your Azure Maps account:
Explore samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"]
-> [Azure Maps Azure AD Web App Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/OpenIdConnect)
+> [Azure Maps Azure AD Web App Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/OpenIdConnect)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Using Azure Monitor agent, you get immediate benefits as shown below:
- **Security and Performance** - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients). - Higher event throughput that is 25% better than the legacy Log Analytics (MMA/OMS) agents.-- **A single agent** that serves all data collection needs across [supported](https://learn.microsoft.com/azure/azure-monitor/agents/agents-overview#supported-operating-systems) servers and client devices. A single agent is the goal, although Azure Monitor Agent is currently converging with the Log Analytics agents.
+- **A single agent** that serves all data collection needs across [supported](#supported-operating-systems) servers and client devices. A single agent is the goal, although Azure Monitor Agent is currently converging with the Log Analytics agents.
## Consolidating legacy agents
In addition to the generally available data collection listed above, Azure Monit
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) | | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - | | [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
-| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
-| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) (available without Azure Monitor Agent) | Migrate to Azure Automation Hybrid Worker Extension - Generally available | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | | Azure Virtual Desktop (AVD) Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
If the custom properties are not set in the Alert rule, this field will be null.
"metricValue": 7.727 } ]
- }
+ },
"customProperties":{ "Key1": "Value1", "Key2": "Value2"
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
Click the alert to configure it.
## Delete alerts
-You can disable or delete a Failure Anomalies alert rule, but once deleted you can't create another one for the same Application Insights resource.
+You can disable or delete a Failure Anomalies alert rule.
-Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. You can do so manually on the Alert rules page or with the following Azure CLI command:
+You can disable or delete the rule manually on the Alert rules page, or delete it with the following Azure CLI command:
```azurecli az resource delete --ids <Resource ID of Failure Anomalies alert rule> ```
+Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically.
## Example of Failure Anomalies alert webhook payload
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Application Insights now supports [Azure Active Directory (Azure AD) authenticat
Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure AD](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts)and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-azure)) and business decisions.
+> [!NOTE]
+> This article covers data ingestion into Application Insights using Azure AD authentication. If you're looking for information on querying data within Application Insights, see **[Query Application Insights using Azure AD Authentication](/azure/azure-monitor/logs/api/app-insights-azure-ad-api)**.
+ ## Prerequisites
The following prerequisites enable Azure AD authenticated ingestion. You need to:
tracer = Tracer(
) ... ```-
+-
## Disable local authentication
This error usually occurs when the provided credentials don't grant access to in
## Next steps+ * [Monitor your telemetry in the portal](overview-dashboard.md) * [Diagnose with Live Metrics Stream](live-stream.md)
+* [Query Application Insights using Azure AD Authentication](/azure/azure-monitor/logs/api/app-insights-azure-ad-api)
++
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
To reenable discovery of the environmental variables, apply the same process you
- name: AZMON_COLLECT_ENV value: "True" ```
+## Semantic version update of container insights agent version
+
+Container Insights has shifted the image version and naming convention to the [SemVer format](https://semver.org/). SemVer helps developers keep track of every change made to software during its development phase and ensures that software versioning is consistent and meaningful. The old versions were in the format `ciprod<timestamp>-<commitId>` and `win-ciprod<timestamp>-<commitId>`; the first image versions that use the SemVer format are 3.1.4 for Linux and win-3.1.4 for Windows.
+
+SemVer is a universal software versioning schema that's defined in the format MAJOR.MINOR.PATCH and follows these constraints:
+
+1. Increment the MAJOR version when you make incompatible API changes.
+2. Increment the MINOR version when you add functionality in a backwards compatible manner.
+3. Increment the PATCH version when you make backwards compatible bug fixes.
+
+With the rise of Kubernetes and the OSS ecosystem, Container Insights has migrated to SemVer image versions, following the Kubernetes recommended standard, in which all breaking changes introduced with a minor version must be publicly documented with each new Kubernetes release.
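
Under this scheme, for example, an upgrade from 3.1.4 to 3.2.0 would indicate new backwards-compatible functionality in the agent image, while 3.1.5 would indicate only bug fixes.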
## Next steps
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The count of monitored servers is calculated on an hourly granularity. The daily
Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019, and is still active, will continue to have access to use the following legacy pricing tiers: - Standalone (Per GB)-- Per Node (Operations Management Suite [OMS])
+- Per Node (Operations Management Suite [OMS])
-Access to the legacy Free Trial pricing tier was limited on July 1, 2022.
+Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing).
### Free Trial pricing tier
Usage on the Standalone pricing tier is billed by the ingested data volume. It's
### Per Node pricing tier
-The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage.
+The Per Node pricing tier charges per monitored VM (node) at an hourly granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier should only be used by customers with active Operations Management Suite (OMS) licenses.
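
For example, if 10 nodes are monitored for a full day, the workspace accrues a daily allocation of 5 GB (10 x 500 MB); ingesting 7 GB of data that day results in 2 GB billed as overage.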
On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters:
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing fully managed backup solution for long-term recovery, archive, and compliance. Backups created by the service are stored in Azure storage, independent of volume snapshots that are available for near-term recovery or cloning. Backups taken by the service can be restored to new Azure NetApp Files volumes within the region. Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. For additional information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md). > [!IMPORTANT]
-> The Azure NetApp Files backup feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files Backup Public Preview](https://aka.ms/anfbackuppreviewsignup)** page. Wait for an official confirmation email from the Azure NetApp Files team before using the Azure NetApp Files backup feature.
+> The Azure NetApp Files backup feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files Backup Public Preview](https://aka.ms/anfbackuppreviewsignup)** page. The Azure NetApp Files backup feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
+>
+> ```azurepowershell-interactive
+> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupPreview
+>
+> FeatureName ProviderName RegistrationState
+> -- --
+> ANFBackupPreview Microsoft.NetApp Registered
+> ```
## Supported regions
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
## Considerations > [!IMPORTANT]
-> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week from submitting waitlist request.
+> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
+>
+> ```azurepowershell-interactive
+> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
+>
+> FeatureName ProviderName RegistrationState
+> -- --
+> ANFAzureKeyVaultEncryption Microsoft.NetApp Registered
+> ```
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page.
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a new SMB volume](azure-netapp-files-create-volumes-smb.md#continuous-availability). You can also enable SMB CA on an existing SMB volume; this article shows you how to do so. > [!IMPORTANT]
-> The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
+> The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. The SMB Continuous Availability feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
+>
+> ```azurepowershell-interactive
+> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare
>
-> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations.
+> FeatureName ProviderName RegistrationState
+> -- --
+> ANFSMBCAShare Microsoft.NetApp Registered
+> ```
>[!IMPORTANT] > Custom applications are not supported with SMB Continuous Availability.
+>
+> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations.
## Steps
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 01/10/2023 Last updated : 04/18/2023 # Bicep CLI commands
The `publish` command adds a module to a registry. The Azure container registry
After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry).
-To use the publish command, you must have Bicep CLI version **0.4.1008 or later**.
+To use the publish command, you must have Bicep CLI version **0.4.1008 or later**. To use the `--documentationUri`/`-d` parameter, you must have Bicep CLI version **0.14.46 or later**.
To publish a module to a registry, use: ```azurecli
-az bicep publish --file <bicep-file> --target br:<registry-name>.azurecr.io/<module-path>:<tag>
+az bicep publish --file <bicep-file> --target br:<registry-name>.azurecr.io/<module-path>:<tag> --documentationUri <documentation-uri>
``` For example: ```azurecli
-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html
``` The `publish` command doesn't recognize aliases that you've defined in a [bicepconfig.json](bicep-config-modules.md) file. Provide the full module path.
The local cache is found in:
/home/<username>/.bicep ```
+- On Mac
+
+ ```path
+ ~/.bicep
+ ```
+ The `restore` command doesn't refresh the cache if a module is already cached. To fresh the cache, you can either delete the module path from the cache or use the `--force` switch with the `restore` command. ## upgrade
azure-resource-manager Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md
Title: Bicep extensibility Kubernetes provider
description: Learn how to Bicep Kubernetes provider to deploy .NET applications to Azure Kubernetes Service clusters. Previously updated : 02/21/2023 Last updated : 04/18/2023 # Bicep extensibility Kubernetes provider (Preview)
param kubeConfig string
import 'kubernetes@1.0.0' with { namespace: 'default' kubeConfig: kubeConfig
-}
+} as k8s
``` - **namespace**: Specify the namespace of the provider.
azure-resource-manager Installation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/installation-troubleshoot.md
Title: Troubleshoot problems with Bicep installation
description: How to resolve errors and problems with your Bicep installation. Previously updated : 12/15/2021 Last updated : 04/18/2023 # Troubleshoot Bicep installation
Failed to install .NET runtime v5.0
Failed to download .NET 5.0.x ....... Error! ```
+> [!WARNING]
+> This is a last resort solution that may cause problems when updating versions.
+ To solve the problem, you can manually install .NET from the [.NET website](https://aka.ms/dotnet-core-download), and then configure Visual Studio Code to reuse an existing installation of .NET with the following settings: **Windows**
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
description: Use MSBuild to convert a Bicep file to Azure Resource Manager templ
Last updated 09/26/2022 --+ # Customer intent: As a developer I want to convert Bicep files to Azure Resource Manager template (ARM template) JSON in an MSBuild pipeline.
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module
description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 01/10/2023 Last updated : 04/18/2023 # Create private registry for Bicep modules
After setting up the container registry, you can publish files to it. Use the [p
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 -DocumentationUri https://www.contoso.com/exampleregistry.html
``` # [Azure CLI](#tab/azure-cli)
Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azure
To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI. ```azurecli
-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html
```
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Title: Publish modules to private module registry description: Publish Bicep modules to private module registry and use the modules. Previously updated : 04/01/2022 Last updated : 04/18/2023 #Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry.
Use the following syntax to publish a Bicep file as a module to a private module
# [Azure CLI](#tab/azure-cli) ```azurecli
-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html
``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
-Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
+Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 -DocumentationUri https://www.contoso.com/exampleregistry.html
```
azure-resource-manager Template Functions Numeric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-numeric.md
Title: Template functions - numeric
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with numbers. Previously updated : 03/10/2022 Last updated : 04/18/2023 # Numeric functions for ARM templates
The output from the preceding example with the default values is:
| Name | Type | Value | | - | - | -- |
-| mulResult | Int | 15 |
+| mulResult | Int | 45 |
## sub
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
The following schemas are in use by Azure Video Indexer
"Filename": "1 Second Video 1.mp4", "AnimationModelId": null, "BrandsCategories": null,
- "CustomLanguages": null,
- "ExcludedAIs": "Face",
+ "CustomLanguages": "en-US,ar-BH,hi-IN,es-MX",
+ "ExcludedAIs": "Faces",
"LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95" } }
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
-* [Important notice](#important-notice) about planned changes
* The latest releases * Known issues * Bug fixes * Deprecated functionality
-## Important notice
+## April 2023
+### The animation character recognition model has been retired
-## April 2023
+The **animation character recognition** model was retired on March 1, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
### Excluding sensitive AI models
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 04/06/2023 Last updated : 04/15/2023
When you move recovery points to archive, they're subjected to an early deletion
Stop protection and delete data deletes all recovery points. For recovery points in archive that haven't stayed for a duration of 180 days in archive tier, deletion of recovery points leads to early deletion cost.
+## Stop protection and retain data
+
+Azure Backup now supports tiering to archive when you choose to *Stop protection and retain data*. If the backup item is associated with a long term retention policy and is moved to *Stop protection and retain data* state, you can choose to move recommended recovery points to vault-archive tier.
+
+>[!Note]
+>For Azure VM backups, moving recommended recovery points to vault-archive saves costs. For other supported workloads, you can choose to move all eligible recovery points to archive to save costs. If the backup item is associated with a short-term retention policy and is moved to the *Stop protection & retain data* state, you can't tier the recovery points to archive.
+ ## Archive tier pricing
-You can view the Archive tier pricing from our [pricing page](azure-backup-pricing.md).
+You can view the Archive tier pricing from our [pricing page](https://azure.microsoft.com/pricing/details/backup/).
## Frequently asked questions
batch Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/error-handling.md
Title: Error handling and detection in Azure Batch description: Learn about error handling in Batch service workflows from a development standpoint. Previously updated : 12/20/2021 Last updated : 04/13/2023 # Error handling and detection in Azure Batch
At times, you might need to handle task and application failures in your Azure B
## Error codes
-Some general types of errors you might see in Batch are:
+Some general types of errors that you might see in Batch are:
-- Networking failures for requests that never reached Batch. Or, networking failures when the Batch response didn't reach the client in time.
+- Networking failures for requests that never reached Batch, or networking failures when the Batch response didn't reach the client in time.
- Internal server errors. These errors have a standard `5xx` status code HTTP response. - Throttling-related errors. These errors include `429` or `503` status code HTTP responses with the `Retry-after` header. - `4xx` errors such as `AlreadyExists` and `InvalidOperation`. These errors indicate that the resource isn't in the correct state for the state transition.
-For detailed information about specific error codes, see [Batch Status and Error Codes](/rest/api/batchservice/batch-status-and-error-codes). This reference includes error codes for REST API, Batch service, and job tasks and scheduling.
+For detailed information about specific error codes, see [Batch status and error codes](/rest/api/batchservice/batch-status-and-error-codes). This reference includes error codes for REST API, Batch service, and for job tasks and scheduling.
## Application failures
-During execution, an application might produce diagnostic output. You can use this output to troubleshoot issues. The Batch service writes standard output and standard error output to the `stdout.txt` and `stderr.txt` files in the task directory on the compute node. For more information, see [Files and directories in Batch](files-and-directories.md).
+During execution, an application might produce diagnostic output. You can use this output to troubleshoot issues. The Batch service writes standard output and standard error output to the *stdout.txt* and *stderr.txt* files in the task directory on the compute node. For more information, see [Files and directories in Batch](files-and-directories.md).
To download these output files, use the Azure portal or one of the Batch SDKs. For example, to retrieve files for troubleshooting purposes, use [ComputeNode.GetNodeFile](/dotnet/api/microsoft.azure.batch.computenode) and [CloudTask.GetNodeFile](/dotnet/api/microsoft.azure.batch.cloudtask) in the Batch .NET library.
If files that you specified for a task fail to upload for any reason, a file upl
- The shared access signature (SAS) token supplied for accessing Azure Storage is invalid. - The SAS token doesn't provide write permissions.-- The storage account is no longer available
+- The storage account is no longer available.
- Another issue happened that prevented the successful copying of files from the node. ### Application errors
-The process that the task's command line specifies can also fail. For more information, see [Task exit codes](#task-exit-codes).
+The process specified by the task's command line can also fail. For more information, see [Task exit codes](#task-exit-codes).
For application errors, configure Batch to automatically retry the task up to a specified number of times. ### Constraint errors
-To specify the maximum execution duration for a job or task, set the **maxWallClockTime** constraint. Use this setting to terminate tasks that fail to progress.
+To specify the maximum execution duration for a job or task, set the `maxWallClockTime` constraint. Use this setting to terminate tasks that fail to progress.
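
As an illustration, here's a minimal sketch of setting this constraint with the Batch Python SDK; the task ID, command line, and retry count are placeholders, not values from this article.

```python
from datetime import timedelta
import azure.batch.models as batchmodels

# Terminate the task if it runs longer than 1 hour, and retry application
# failures up to 3 times (both values are illustrative).
constraints = batchmodels.TaskConstraints(
    max_wall_clock_time=timedelta(hours=1),
    max_task_retry_count=3)

task = batchmodels.TaskAddParameter(
    id='sample-task',
    command_line='/bin/bash -c "python process.py"',
    constraints=constraints)
```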
When the task exceeds the maximum time: -- The task is marked as **completed**.-- The exit code is set to `0xC000013A`
+- The task is marked as *completed*.
+- The exit code is set to `0xC000013A`.
- The **schedulingError** field is marked as `{ category:"ServerError", code="TaskEnded"}`. ## Task exit codes When a task executes a process, Batch populates the task's exit code property with the return code of the process. If the process returns a nonzero exit code, the Batch service marks the task as failed.
-The Batch service doesn't determine a task's exit code. The process itself, or the operating system on which the process executed, determines the exit code.
+The Batch service doesn't determine a task's exit code. The process itself, or the operating system on which the process executes, determines the exit code.
## Task failures or interruptions
It's also possible for an intermittent issue to cause a task to stop responding
## Connect to compute nodes
-You can perform additional debugging and troubleshooting by signing in to a compute node remotely. Use the Azure portal to download a Remote Desktop Protocol (RDP) file for Windows nodes, and obtain Secure Shell (SSH) connection information for Linux nodes. You can also download this information using the [Batch .NET](/dotnet/api/microsoft.azure.batch.computenode) or [Batch Python](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh) APIs.
+You can perform debugging and troubleshooting by signing in to a compute node remotely. Use the Azure portal to download a Remote Desktop Protocol (RDP) file for Windows nodes, and obtain Secure Shell (SSH) connection information for Linux nodes. You can also download this information using the [Batch .NET](/dotnet/api/microsoft.azure.batch.computenode) or [Batch Python](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh) APIs.
To connect to a node via RDP or SSH, first create a user on the node. Use one of the following methods: -- The Azure portal
+- The [Azure portal](https://portal.azure.com)
- Batch REST API: [adduser](/rest/api/batchservice/computenode/adduser) - Batch .NET API: [ComputeNode.CreateComputeNodeUser](/dotnet/api/microsoft.azure.batch.computenode) - Batch Python module: [add_user](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh)
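
For example, a minimal sketch with the Batch Python SDK; the pool ID, node ID, user name, and password are placeholders only.

```python
import datetime
import azure.batch.models as batchmodels

# batch_client is assumed to be an authenticated azure.batch.BatchServiceClient.
user = batchmodels.ComputeNodeUser(
    name='rdp-or-ssh-user',
    is_admin=True,
    password='<a-strong-password>',  # use ssh_public_key instead for Linux nodes
    expiry_time=datetime.datetime.utcnow() + datetime.timedelta(days=1))

batch_client.compute_node.add_user('<pool-id>', '<node-id>', user)
```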
-If necessary, [restrict or disable RDP or SSH access to compute nodes](pool-endpoint-configuration.md).
+If necessary, [configure or disable access to compute nodes](pool-endpoint-configuration.md).
+ ## Troubleshoot problem nodes Your Batch client application or service can examine the metadata of failed tasks to identify a problem node. Each node in a pool has a unique ID. Task metadata includes the node where a task runs. After you find the problem node, try the following methods to resolve the failure.
Reimaging a node reinstalls the operating system. Start tasks and job preparatio
Removing the node from the pool is sometimes necessary. - Batch REST API: [removenodes](/rest/api/batchservice/pool/remove-nodes)-- Batch .NET API: [pooloperations](/dotnet/api/microsoft.azure.batch.pooloperations)
+- Batch .NET API: [PoolOperations](/dotnet/api/microsoft.azure.batch.pooloperations)
### Disable task scheduling on node
-Disabling task scheduling on a node effectively takes the node offline. Batch assigns no further tasks to the node. However, the node continues running in the pool. You can then further investigate the failures without losing the failed tasks's data. The node also won't cause additional task failures.
+Disabling task scheduling on a node effectively takes the node offline. Batch assigns no further tasks to the node. However, the node continues running in the pool. You can then further investigate the failures without losing the failed task's data. The node also won't cause more task failures.
For example, disable task scheduling on the node. Then, sign in to the node remotely. Examine the event logs, and do other troubleshooting. After you solve the problems, enable task scheduling again to bring the node back online.
For example, disable task scheduling on the node. Then, sign in to the node remo
You can use these actions to specify Batch handles tasks currently running on the node. For example, when you disable task scheduling with the Batch .NET API, you can specify an enum value for [DisableComputeNodeSchedulingOption](/dotnet/api/microsoft.azure.batch.common.disablecomputenodeschedulingoption). You can choose to: -- Terminate running tasks (`Terminate`).-- Requeue tasks for scheduling on other nodes (`Requeue`).-- Allow running tasks to complete before performing the action (`TaskCompletion`).
+- Terminate running tasks: `Terminate`
+- Requeue tasks for scheduling on other nodes: `Requeue`
+- Allow running tasks to complete before performing the action: `TaskCompletion`
## Retry after errors
After a failure, wait several seconds before retrying. If you retry too frequent
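
As a sketch of that guidance (not an official Batch sample), an exponential backoff helper in Python might look like the following; the operation, delays, and retry limit are illustrative.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay_seconds=2):
    """Retry a callable with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Wait 2, 4, 8, ... seconds (plus jitter) before the next attempt.
            time.sleep(base_delay_seconds * (2 ** (attempt - 1)) + random.uniform(0, 1))
```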
## Next steps -- [Check for Batch pool and node errors](batch-pool-node-error-checking.md).-- [Check for Batch job and task errors](batch-job-task-error-checking.md).
+- [Check for Batch pool and node errors](batch-pool-node-error-checking.md)
+- [Check for Batch job and task errors](batch-job-task-error-checking.md)
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 03/29/2023 Last updated : 04/14/2023
An Azure Batch pool contains one or more compute nodes that execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service.
-Batch supports two types of node communication modes:
-- Classic: where the Batch service initiates communication to the compute nodes-- Simplified: where the compute nodes initiate communication to the Batch service
+Batch supports two types of communication modes:
+- **Classic**: the Batch service initiates communication with the compute nodes.
+- **Simplified**: the compute nodes initiate communication with the Batch service.
-This document describes the simplified compute node communication mode and the associated network configuration requirements.
+This article describes the *simplified* communication mode and the associated network configuration requirements.
> [!TIP]
-> Information in this document pertaining to networking resources and rules such as NSGs does not apply to
-> Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) using the
-> node management private endpoint without Internet outbound access.
+> Information in this document pertaining to networking resources and rules such as NSGs doesn't apply to Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) that use the node management private endpoint without internet outbound access.
> [!WARNING]
-> The classic compute node communication model will be retired on **31 March 2026** and will be replaced with
-> the simplified compute node communication model as described in this document. For more information, see the
-> classic compute node communication mode
-> [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
+> The *classic* compute node communication mode will be retired on **31 March 2026** and replaced with the *simplified* communication mode described in this document. For more information, see the communication mode [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
## Supported regions Simplified compute node communication in Azure Batch is currently available for the following regions: -- Public: all public regions where Batch is present except for West India and France South.
+- **Public**: all public regions where Batch is present except for West India and France South.
+- **Government**: USGov Arizona, USGov Virginia, USGov Texas.
+- **China**: all China regions where Batch is present except for China North 1 and China East 1.
-- Government: USGov Arizona, USGov Virginia, USGov Texas.
+## Differences between classic and simplified modes
-- China: all China regions where Batch is present except for China North 1 and China East 1.
+The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. This communication mode reduces the complexity and scope of inbound and outbound networking connections required in baseline operations.
-## Compute node communication differences between classic and simplified modes
-
-The simplified compute node communication mode streamlines the way Batch pool infrastructure is
-managed on behalf of users. This communication mode reduces the complexity and scope of inbound
-and outbound networking connections required in baseline operations.
-
-Batch pools with the `classic` communication mode require the following networking rules in network
-security groups (NSGs), user-defined routes (UDRs), and firewalls when
-[creating a pool in a virtual network](batch-virtual-network.md):
+Batch pools with the *classic* communication mode require the following networking rules in network security groups (NSGs), user-defined routes (UDRs), and firewalls when [creating a pool in a virtual network](batch-virtual-network.md):
- Inbound:
- - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region*
+ - Destination ports `29876`, `29877` over TCP from `BatchNodeManagement.<region>`
- Outbound:
- - Destination port 443 over TCP to Storage.*region*
- - Destination port 443 over TCP to BatchNodeManagement.*region* for certain workloads that require communication back to the Batch Service, such as Job Manager tasks
+ - Destination port `443` over TCP to `Storage.<region>`
+ - Destination port `443` over TCP to `BatchNodeManagement.<region>` for certain workloads that require communication back to the Batch Service, such as Job Manager tasks
-Batch pools with the `simplified` communication mode require the following networking rules in
-NSGs, UDRs, and firewalls:
+Batch pools with the *simplified* communication mode require the following networking rules in NSGs, UDRs, and firewalls:
- Inbound:
  - None
- Outbound:
- - Destination port 443 over ANY to BatchNodeManagement.*region*
+ - Destination port `443` over ANY to `BatchNodeManagement.<region>`
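
For illustration, here's a sketch of how that single outbound rule might look as an NSG rule created with the Azure CLI. The resource group, NSG name, rule name, priority, and region (`eastus`) are placeholder assumptions; substitute values from your own deployment.

```bash
# Hypothetical sketch: allow outbound traffic from pool nodes to the regional
# BatchNodeManagement service tag on destination port 443 over any protocol.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myBatchPoolNsg \
  --name AllowBatchNodeManagementOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes BatchNodeManagement.eastus \
  --destination-port-ranges 443
```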
-Outbound requirements for a Batch account can be discovered using the
-[List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints)
-This API reports the base set of dependencies, depending upon the Batch account pool communication mode.
-User-specific workloads may need extra rules such as opening traffic to other Azure resources (such as Azure
-Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package
-repository for virtual file system mounting functionality.
+Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). This API reports the base set of dependencies, which depend on the Batch account pool communication mode. User-specific workloads might need extra rules, such as allowing traffic to other Azure resources (for example, Azure Storage for Application Packages or Azure Container Registry) or to endpoints like the Microsoft package repository for virtual file system mounting functionality.
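
As a sketch, you can call this API from the Azure CLI with `az rest`. The subscription, resource group, account name, and `api-version` value shown are placeholder assumptions; use values that are valid for your environment.

```bash
# Hypothetical sketch: list the outbound network dependencies for a Batch account.
az rest --method get \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Batch/batchAccounts/{accountName}/outboundNetworkDependenciesEndpoints?api-version=2022-10-01"
```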
-## Benefits of the simplified communication mode
+## Benefits of simplified mode
-Azure Batch users utilizing the simplified mode benefit from simplification of networking connections and
-rules. Simplified compute node communication helps reduce security risks by removing the requirement to open
-ports for inbound communication from the internet. Only a single outbound rule to a well-known Service Tag is
-required for baseline operation.
+Azure Batch users utilizing the simplified mode benefit from simplification of networking connections and rules. Simplified compute node communication helps reduce security risks by removing the requirement to open ports for inbound communication from the internet. Only a single outbound rule to a well-known Service Tag is required for baseline operation.
-The `simplified` mode also provides more fine-grained data exfiltration control over the `classic`
-communication mode since outbound communication to Storage.*region* is no longer required. You can
-explicitly lock down outbound communication to Azure Storage if necessary for your workflow. For
-example, you can scope your outbound communication rules to Azure Storage to enable your AppPackage
-storage accounts or other storage accounts for resource files or output files.
+The *simplified* mode also provides more fine-grained data exfiltration control over the *classic* communication mode since outbound communication to `Storage.<region>` is no longer required. You can explicitly lock down outbound communication to Azure Storage if necessary for your workflow. For example, you can scope your outbound communication rules to Azure Storage to enable your AppPackage storage accounts or other storage accounts for resource files or output files.
-Even if your workloads aren't currently impacted by the changes (as described in the next section), it's
-recommended to move to the `simplified` mode. Future improvements in the Batch service may only be functional
-with simplified compute node communication.
+Even if your workloads aren't currently impacted by the changes (as described in the following section), it's recommended to move to the simplified mode. Future improvements in the Batch service might only be functional with simplified compute node communication.
## Potential impact between classic and simplified communication modes
-In many cases, the `simplified` communication mode doesn't directly affect your Batch workloads. However,
-simplified compute node communication has an impact for the following cases:
+In many cases, the simplified communication mode doesn't directly affect your Batch workloads. However, simplified compute node communication has an impact for the following cases:
-- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following actions:
+- Users who specify a virtual network as part of creating a Batch pool and do one or both of the following actions:
  - Explicitly disable outbound network traffic rules that are incompatible with simplified compute node communication.
  - Use UDRs and firewall rules that are incompatible with simplified compute node communication.
- Users who enable software firewalls on compute nodes and explicitly disable outbound traffic in software firewall rules that are incompatible with simplified compute node communication.
-If either of these cases applies to you, then follow the steps outlined in the next section to ensure that
-your Batch workloads can still function under the `simplified` mode. We strongly recommend that you test and
-verify all of your changes in a dev and test environment first before pushing your changes into production.
+If either of these cases applies to you, then follow the steps outlined in the next section to ensure that your Batch workloads can still function in simplified mode. It's strongly recommended that you test and verify all of your changes in a dev and test environment first before pushing your changes into production.
-### Required network configuration changes for simplified communication mode
+### Required network configuration changes for simplified mode
-The following set of steps is required to migrate to the new communication mode:
+The following steps are required to migrate to the new communication mode:
-1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the modes (that is, the combined network rules of both `classic` and `simplified` modes). At a minimum, these rules would be:
+1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the modes, that is, the combined network rules of both classic and simplified modes. At a minimum, these rules would be:
- Inbound:
- - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region*
+ - Destination ports `29876`, `29877` over TCP from `BatchNodeManagement.<region>`
- Outbound:
- - Destination port 443 over TCP to Storage.*region*
- - Destination port 443 over ANY to BatchNodeManagement.*region*
+ - Destination port `443` over TCP to `Storage.<region>`
+ - Destination port `443` over ANY to `BatchNodeManagement.<region>`
1. If you have any other inbound or outbound scenarios required by your workflow, ensure that your rules reflect these requirements.
1. Use one of the following options to update your workloads to use the new communication mode.
- - Create new pools with the `targetNodeCommunicationMode` set to `simplified` and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools.
- - Update existing pools `targetNodeCommunicationMode` property to `simplified` and then resize all existing pools to zero nodes and scale back out.
-1. Use the [Get Pool](/rest/api/batchservice/pool/get), [List Pool](/rest/api/batchservice/pool/list) API or Portal to confirm the `currentNodeCommunicationMode` is set to the desired communication mode of `simplified`.
-1. Modify all applicable networking configuration to the Simplified Compute Node Communication rules, at the minimum (note any extra rules needed as discussed above):
+ - Create new pools with the `targetNodeCommunicationMode` set to *simplified* and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools.
+ - Update the `targetNodeCommunicationMode` property of existing pools to *simplified*, and then resize all existing pools to zero nodes and scale back out (see the resize example after these procedures).
+1. Use the [Get Pool](/rest/api/batchservice/pool/get) API, [List Pool](/rest/api/batchservice/pool/list) API, or the Azure portal to confirm that `currentNodeCommunicationMode` is set to *simplified* (see the example after this procedure).
+1. Modify all applicable networking configuration to the simplified communication rules, at the minimum (note any extra rules needed as discussed above):
- Inbound:
  - None
- Outbound:
- - Destination port 443 over ANY to BatchNodeManagement.*region*
+ - Destination port `443` over ANY to `BatchNodeManagement.<region>`
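
As a sketch, the confirmation in step 4 of the preceding procedure can also be done from the command line, assuming a version of the Azure CLI whose `az batch` commands return the node communication mode properties. The resource group, account, and pool names are placeholders.

```bash
# Hypothetical sketch: sign in to the Batch account, then inspect the pool's
# current node communication mode (requires a CLI/API version that returns it).
az batch account login --resource-group myResourceGroup --name mybatchaccount
az batch pool show --pool-id mypool --query "currentNodeCommunicationMode"
```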
-If you follow these steps, but later want to switch back to `classic` compute node communication, you need to take the following actions:
+If you follow these steps, but later want to switch back to *classic* compute node communication, you need to take the following actions:
-1. Revert any networking configuration operating exclusively in `simplified` compute node communication mode.
-1. Create new pools or update existing pools `targetNodeCommunicationMode` property set to `classic`.
+1. Revert any networking configuration operating exclusively in *simplified* compute node communication mode.
+1. Create new pools with the `targetNodeCommunicationMode` property set to *classic*, or update the property on existing pools.
1. Migrate your workload to these pools, or resize existing pools and scale back out (see step 3 above and the resize example after this list).
-1. See step 4 above to confirm that your pools are operating in `classic` communication mode.
+1. See step 4 above to confirm that your pools are operating in *classic* communication mode.
1. Optionally restore your networking configuration.
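
Whether you're moving to simplified mode or back to classic, the resize step referenced above can be done with the Azure CLI, as in the following sketch. The account and pool names and node counts are placeholder assumptions, and the pool's target communication mode is assumed to have already been updated (for example, in the Azure portal or through the REST API, as described in the next section).

```bash
# Hypothetical sketch: resize the pool to zero nodes so the updated communication
# mode takes effect, then scale back out to the desired size.
az batch account login --resource-group myResourceGroup --name mybatchaccount
az batch pool resize --pool-id mypool --target-dedicated-nodes 0
# ...wait for the resize operation to complete, then scale back out...
az batch pool resize --pool-id mypool --target-dedicated-nodes 10
```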
-## Specifying the node communication mode on a Batch pool
+## Specify the communication mode on a Batch pool
-The [`targetNodeCommunicationMode`](/rest/api/batchservice/pool/add) property on Batch pools allows you to indicate a preference
-to the Batch service for which communication mode to utilize between the Batch service and compute nodes. The following are
-the allowable options on this property:
+The [targetNodeCommunicationMode](/rest/api/batchservice/pool/add) property on Batch pools allows you to indicate a preference to the Batch service for which communication mode to utilize between the Batch service and compute nodes. The following are the allowable options on this property:
-- `classic`: create the pool using classic compute node communication.
-- `simplified`: create the pool using simplified compute node communication.
-- `default`: allow the Batch service to select the appropriate compute node communication mode. For pools without a virtual
-network, the pool may be created in either `classic` or `simplified` mode. For pools with a virtual network, the pool will always
-default to `classic` until **30 September 2024**. For more information, see the classic compute node communication mode
-[migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
+- **Classic**: creates the pool using classic compute node communication.
+- **Simplified**: creates the pool using simplified compute node communication.
+- **Default**: allows the Batch service to select the appropriate compute node communication mode. For pools without a virtual network, the pool may be created in either classic or simplified mode. For pools with a virtual network, the pool always defaults to classic until **30 September 2024**. For more information, see the classic compute node communication mode [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
> [!TIP]
-> Specifying the target node communication mode is a preference indication for the Batch service and not a guarantee that it
-> will be honored. Certain configurations on the pool may prevent the Batch service from honoring the specified target node
-> communication mode, such as interaction with No public IP address, virtual networks, and the pool configuration type.
+> Specifying the target node communication mode indicates a preference for the Batch service, but doesn't guarantee that it will be honored. Certain configurations on the pool might prevent the Batch service from honoring the specified target node communication mode, such as interaction with no public IP address, virtual networks, and the pool configuration type.
-The following are examples of how to create a Batch pool with `simplified` compute node communication.
+The following are examples of how to create a Batch pool with simplified compute node communication.
### Azure portal
-Navigate to the Pools blade of your Batch account and click the Add button. Under `OPTIONAL SETTINGS`, you can
-select `Simplified` as an option from the pull-down of `Node communication mode` as shown below.
+First, sign in to the [Azure portal](https://portal.azure.com). Then, navigate to the **Pools** blade of your Batch account and select the **Add** button. Under **OPTIONAL SETTINGS**, you can select **Simplified** as an option from the pull-down of **Node communication mode** as shown:
:::image type="content" source="media/simplified-compute-node-communication/add-pool-simplified-mode.png" alt-text="Screenshot that shows creating a pool with simplified mode.":::
-To update an existing pool to simplified communication mode, navigate to the Pools blade of your Batch account and
-click on the pool to update. On the left-side navigation, select `Node communication mode`. There you're able
-to select a new target node communication mode as shown below. After selecting the appropriate communication mode,
-click the `Save` button to update. You need to scale the pool down to zero nodes first, and then back out
-for the change to take effect, if conditions allow.
+To update an existing pool to simplified communication mode, navigate to the **Pools** blade of your Batch account and select the pool to update. On the left-side navigation, select **Node communication mode**. There you can select a new target node communication mode as shown below. After selecting the appropriate communication mode, select the **Save** button to update. You need to scale the pool down to zero nodes first, and then back out for the change to take effect, if conditions allow.
:::image type="content" source="media/simplified-compute-node-communication/update-pool-simplified-mode.png" alt-text="Screenshot that shows updating a pool to simplified mode.":::
-To display the current node communication mode for a pool, navigate to the Pools blade of your Batch account, and
-click on the pool to view. Select `Properties` on the left-side navigation and the pool node communication mode
-will be shown under the General section.
+To display the current node communication mode for a pool, navigate to the **Pools** blade of your Batch account, and select the pool to view. Select **Properties** on the left-side navigation and the pool node communication mode appears under the **General** section.
:::image type="content" source="media/simplified-compute-node-communication/get-pool-simplified-mode.png" alt-text="Screenshot that shows properties with a pool with simplified mode.":::

### REST API
-This example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool with
-`simplified` compute node communication.
+This example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool with simplified compute node communication.
```http
POST {batchURL}/pools?api-version=2022-10-01.16.0
client-request-id: 00000000-0000-0000-0000-000000000000
## Limitations
-The following are known limitations of the `simplified` communication mode:
--- Limited migration support for previously created pools without public IP addresses
-([V1 preview](batch-pool-no-public-ip-address.md)). These pools can only be migrated if created in a
-[virtual network](batch-virtual-network.md), otherwise they won't use simplified compute node communication, even
-if specified on the pool. For more information, see the
-[migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
-- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are
-[deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
-Specifying a communication mode for these types of pools aren't honored and always results in `classic`
-communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see
-[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+The following are known limitations of the simplified communication mode:
+- Limited migration support for previously created pools [without public IP addresses](batch-pool-no-public-ip-address.md). These pools can only be migrated if created in a [virtual network](batch-virtual-network.md), otherwise they won't use simplified compute node communication, even if specified on the pool. For more information, see the [migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
+- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). A communication mode specified for these types of pools isn't honored and always results in *classic* communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
## Next steps
cognitive-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
In this section, you'll create a new Web app in Visual Studio and add code to im
} ```
- The language used here is [Razor](http://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail.
+ The language used here is [Razor](https://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail.
1. Download and unzip the _photos.zip_ file from the [GitHub sample data repository](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial). This is an assortment of different photos you can use to test the app.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 02/27/2023 Last updated : 04/19/2023 zone_pivot_groups: programming-languages-speech-services-nomore-variant
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
::: zone-end
-### Using Speech-to-text custom models
+### Speech-to-text custom models
> [!NOTE]
> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models.
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr
::: zone-end
-### Using Speech-to-text batch transcription
-
-To identify languages in [Batch transcription](batch-transcription.md), you need to use `languageIdentification` property in the body of your [transcription REST request](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create). The example in this section shows the usage of `languageIdentification` property with four candidate languages.
-
-> [!WARNING]
-> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
->
-> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#using-speech-to-text-custom-models) instead of batch transcription.
-
-```json
-{
- <...>
-
- "properties": {
- <...>
-
- "languageIdentification": {
- "candidateLocales": [
- "en-US",
- "ja-JP",
- "zh-CN",
- "hi-IN"
- ]
- },
- <...>
- }
-}
-```
-
## Speech translation

You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
recognizer.stop_continuous_recognition()
::: zone-end
+## Run and use a container
+
+Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
+
+When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
+
+For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide.
++
+## Speech-to-text batch transcription
+
+To identify languages with the [Batch transcription REST API](batch-transcription.md), use the `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.
+
+> [!WARNING]
+> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
+>
+> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription.
+
+The following example shows the usage of the `languageIdentification` property with four candidate languages. For more information about request properties, see [Create a batch transcription](batch-transcription-create.md#request-configuration-options).
+
+```json
+{
+ <...>
+
+ "properties": {
+ <...>
+
+ "languageIdentification": {
+ "candidateLocales": [
+ "en-US",
+ "ja-JP",
+ "zh-CN",
+ "hi-IN"
+ ]
+ },
+ <...>
+ }
+}
+```
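
As a sketch, a request body like the one above could be submitted with `curl`. The region, key, display name, and audio URL are placeholder assumptions, and the top-level `displayName`, `locale`, and `contentUrls` fields are shown with example values; see [Create a batch transcription](batch-transcription-create.md) for the authoritative list of request properties.

```bash
# Hypothetical sketch: create a batch transcription with language identification
# across four candidate locales. Replace the placeholders with your own values.
curl -X POST "https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: {your-speech-resource-key}" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "Transcription with language identification",
        "locale": "en-US",
        "contentUrls": [ "https://example.com/audio.wav" ],
        "properties": {
          "languageIdentification": {
            "candidateLocales": [ "en-US", "ja-JP", "zh-CN", "hi-IN" ]
          }
        }
      }'
```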
+
## Next steps

* [Try the speech to text quickstart](get-started-speech-to-text.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
You can also get a list of locales and voices supported for each specific region
Language support varies by Speech service functionality.

> [!NOTE]
-> See [Speech Containers](speech-container-howto.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages.
+> See [Speech Containers](speech-container-overview.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages.
**Choose a Speech feature**
Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
Please note that the following neural voices are retired.

-- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."

### Custom Neural Voice
cognitive-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/openai-speech.md
Previously updated : 03/07/2023 Last updated : 04/15/2023
+zone_pivot_groups: programming-languages-csharp-python
keywords: speech to text, openai

# Azure OpenAI speech to speech chat

+ [!INCLUDE [Python include](./includes/quickstarts/openai-speech/python.md)]

## Next steps
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
Use the batch processing kit to complement and scale out workloads on Speech con
:::image type="content" source="media/containers/general-diagram.png" alt-text="A diagram showing an example batch-kit container workflow.":::
-The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-howto.md#billing) for the Speech containers you use.
+The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-overview.md#billing) for the Speech containers you use.
| Feature | Description |
|--|--|
Use the Docker `run` command to start the container. This will start an interact
-```Docker
+```bash
docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs --entrypoint /bin/bash docker.io/batchkit/speech-batch-kit:latest
```
To run the batch client:
-```Docker
+```bash
run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization None -language en-US -strict_config
```
To run the batch client and container in a single command:
-```Docker
+```bash
docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs
```
cognitive-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-configuration.md
Previously updated : 07/22/2021 Last updated : 04/18/2023 # Configure Speech service containers
-Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. The supported speech containers are **speech-to-text**, **Custom speech-to-text**, **speech language identification** and **Neural text-to-speech**.
+Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality.
-The **Speech** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
+The Speech container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. The container-specific settings are the billing settings.
## Configuration settings

[!INCLUDE [Container shared configuration settings table](../../../includes/cognitive-services-containers-configuration-shared-settings-table.md)]

> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-howto.md#billing).
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-overview.md#billing).
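
For illustration, a minimal `docker run` sketch that supplies the three required settings for the speech-to-text container might look like the following. The `{ENDPOINT_URI}` and `{API_KEY}` placeholders are values from your own Speech resource; see the individual container articles for complete, up-to-date examples.

```bash
# Minimal sketch: the Eula, Billing, and ApiKey settings are all required.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```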
## ApiKey configuration setting
The `ApiKey` setting specifies the Azure resource key used to track billing info
This setting can be found in the following place:

-- Azure portal: **Speech's** Resource Management, under **Keys**
+- Azure portal: **Speech** Resource Management, under **Keys**
## ApplicationInsights setting
The `Billing` setting specifies the endpoint URI of the _Speech_ resource on Azu
This setting can be found in the following place:

-- Azure portal: **Speech's** Overview, labeled `Endpoint`
+- Azure portal: **Speech** Overview, labeled `Endpoint`
| Required | Name | Data type | Description |
| -- | -- | -- | -- |
-| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](speech-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
+| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [billing](speech-container-overview.md#billing). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
## Eula setting
The exact syntax of the host mount location varies depending on the host operati
The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+> [!NOTE]
+> The volume mount settings are only applicable for [Custom Speech-to-text](speech-container-cstt.md) containers.
+ Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount isn't provided, custom models can't be persisted.

The volume mount setting consists of three colon (`:`) separated fields:
2. The second field is the directory in the container, for example _/usr/local/models_.
3. The third field (optional) is a comma-separated list of options. For more information, see [use volumes](https://docs.docker.com/storage/volumes/).
-### Volume mount example
+Here's a volume mount example that mounts the host machine _C:\input_ directory to the containers _/usr/local/models_ directory.
```bash
-v C:\input:/usr/local/models
```
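
For context, here's a sketch of that mount used in a full `docker run` command for a Custom speech-to-text container. The `{MODEL_ID}`, `{ENDPOINT_URI}`, and `{API_KEY}` placeholders are assumptions you replace with your own values.

```bash
# Sketch: persist custom models by mounting the host directory into the container.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-v C:\input:/usr/local/models \
mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
ModelId={MODEL_ID} \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```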
-This command mounts the host machine _C:\input_ directory to the containers _/usr/local/models_ directory.
-
-> [!IMPORTANT]
-> The volume mount settings are only applicable to **Custom Speech-to-text** containers. The **Speech-to-text**, **Neural Text-to-speech** and **Speech language identification** containers do not use volume mounts.
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](speech-container-howto.md#stop-the-container) it.
-- **Line-continuation character**: The Docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers.
-
-Replace {_argument_name_} with your own values:
-
-| Placeholder | Value | Format or example |
-| -- | -- | -- |
-| **{API_KEY}** | The endpoint key of the `Speech` resource on the Azure `Speech` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Speech` Overview page. | See [gather required parameters](speech-container-howto.md#gather-required-parameters) for explicit examples. |
--
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing-configuration-setting).
-> The ApiKey value is the **Key** from the Azure Speech Resource keys page.
-
-## Speech container Docker examples
-
-The following Docker examples are for the Speech container.
-
-## [Speech-to-text](#tab/stt)
-
-### Basic example for Speech-to-text
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Speech-to-text
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-## [Custom Speech-to-text](#tab/cstt)
-
-### Basic example for Custom Speech-to-text
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
--v {VOLUME_MOUNT}:/usr/local/models \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-ModelId={MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Custom Speech-to-text
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
--v {VOLUME_MOUNT}:/usr/local/models \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-ModelId={MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-## [Neural Text-to-speech](#tab/ntts)
-
-### Basic example for Neural Text-to-speech
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Neural Text-to-speech
-```Docker
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-## [Speech Language Identification](#tab/lid)
-
-### Basic example for Speech language identification
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 1g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-### Logging example for Speech language identification
--
-```Docker
-docker run --rm -it -p 5000:5000 --memory 1g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
--
## Next steps

- Review [How to install and run containers](speech-container-howto.md)
cognitive-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-cstt.md
+
+ Title: Custom speech-to-text containers - Speech service
+
+description: Install and run custom speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Custom speech-to-text containers with Docker
+
+The Custom speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech-to-text container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The Custom speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--|--|
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
+| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:3.12.0-amd64` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<prerelease>
+```
+
+> [!NOTE]
+> The `locale` and `voice` for custom speech-to-text containers is determined by the custom model ingested by the container.
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/custom-speech-to-text",
+ "tags": [
+ "2.10.0-amd64",
+ "2.11.0-amd64",
+ "2.12.0-amd64",
+ "2.12.1-amd64",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+### Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites), including the required hardware. Also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest
+```
+
+> [!NOTE]
+> The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container.
+
+## Get the model ID
+
+Before you can [run](#run-the-container-with-docker-run) the container, you need to know the model ID of your custom model or a base model ID. When you run the container you specify one of the model IDs to download and use.
+
+# [Custom model ID](#tab/custom-model)
+
+The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
+
+Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
+
+![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
++
+# [Base model ID](#tab/base-model)
+
+You can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account.
+
+To get base model IDs, you use the `docker run` command. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+BaseModelLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command checks the container image and returns the available base models of the target locale.
+
+> [!NOTE]
+> Although you use the `docker run` command, the container isn't started for service.
+
+The output gives you a list of base models with the information locale, model ID, and creation date time. For example:
+
+```
+Checking available base model for en-us
+2020/10/30 21:54:20 [Info] Searching available base models for en-us
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05
+2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us
+```
+++
+## Display model download
+
+Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech-to-text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later.
+
+> [!NOTE]
+> Although you use the `docker run` command, the container isn't started for service.
+
+You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models.
+
+Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
+BaseModelLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
+DisplayLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+RescoreId={RESCORE_MODEL_ID} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+> [!NOTE]
+> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models).
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container for service.
+
+# [Custom speech to text](#tab/container)
++
+# [Disconnected custom speech to text](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+To prepare and configure a disconnected custom speech-to-text container, you need two separate Speech resources:
+
+- A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container.
+- An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.
+
+Follow these steps to download and run the container in disconnected environments.
+1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
+1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+### Download a model for the disconnected container
+
+For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
++
+### Download the disconnected container license
+
+Next, download your disconnected license file. The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date, after which it's no longer valid to run the container.
+
+You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+### Run the disconnected container
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
+| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
+
+For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+-v {MODEL_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+The Custom Speech-to-Text container provides a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the ownership (`user:group`) of the local machine directory is set to `nonroot:nonroot` before you run the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
++++
+## Use the container
++
+[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
++
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
Title: Install and run Docker containers for the Speech service APIs
+ Title: Install and run Speech containers with Docker - Speech service
-description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises.
+description: Use the Speech containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
Previously updated : 03/02/2023 Last updated : 04/18/2023 - keywords: on-premises, Docker, container
-# Install and run Docker containers for the Speech service APIs
+# Install and run Speech containers with Docker
-By using containers, you can run _some_ of the Azure Cognitive Services Speech service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run a Speech container.
+By using containers, you can use a subset of the Speech service features in your own environment. In this article, you'll learn how to download, install, and run a Speech container.
-With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Several containers are available, which use the same [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) as the cloud-based Azure Speech services.
-
-## Available Speech containers
-
-> [!IMPORTANT]
-> We retired the standard speech synthesis voices and text-to-speech container on August 31, 2021. Consider migrating your applications to use the neural text-to-speech container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md).
-
-| Container | Features | Supported versions and locales |
-|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
-| Speech language identification | Detects the language spoken in audio files. | Latest: 1.11.0<sup>1</sup><br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
-
-<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
+> [!NOTE]
+> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Prerequisites
-> [!IMPORTANT]
-> To use the Speech containers, you must submit an online request and have it approved. For more information, see the "Request approval to run the container" section.
- You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:
+* You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure. * On Windows, Docker must also be configured to support Linux containers. * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/). * A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+### Billing arguments
-## Host computer requirements and recommendations
+Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times.
+
+Three primary parameters are required for all Cognitive Services containers: the Microsoft Software License Terms must be accepted with a value of **accept**, and an endpoint URI and API key are also needed.
+
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey` parameter.
+
+The <a href="https://docs.docker.com/engine/reference/commandline/run/" target="_blank">`docker run` <span class="docon docon-navigate-external x-hidden-focus"></span></a> command will start the container when all three of the following options are provided with valid values:
+
+| Option | Description |
+|--|-|
+| `ApiKey` | The API key of the Speech resource that's used to track billing information.<br/>The `ApiKey` value is used to start the container and is available on the Azure portal's **Keys** page of the corresponding Speech resource. Go to the **Keys** page, and select the **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon.|
+| `Billing` | The endpoint of the Speech resource that's used to track billing information.<br/>The endpoint is available on the Azure portal **Overview** page of the corresponding Speech resource. Go to the **Overview** page, hover over the endpoint, and a **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon appears. Copy and use the endpoint where needed.|
+| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
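
For example, these options are typically appended to the end of the `docker run` command, as in this illustrative sketch for the speech-to-text image; substitute your own resource's endpoint and key for the placeholders.

```bash
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```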
+
+> [!IMPORTANT]
+> These subscription keys are used to access your Cognitive Services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service.
+
+The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. For an example of the information sent to Microsoft for billing, see the [Cognitive Services container FAQ](../containers/container-faq.yml#how-does-billing-work) in the Azure Cognitive Services documentation.
+For more information about these options, see [Configure containers](speech-container-configuration.md).
### Container requirements and recommendations
Core and memory correspond to the `--cpus` and `--memory` settings, which are us
> [!NOTE] > The minimum and recommended allocations are based on Docker limits, *not* the host machine resources.
-> For example, speech-to-text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech modesl (see above table).
+> For example, speech-to-text containers memory map portions of a large language model. We recommend that the entire file fit in memory. You need an additional 4 to 8 GB of memory to load the speech models (see the table above).
> Also, the first run of either container might take longer because models are being paged into memory.
-### Advanced Vector Extension support
-
-The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
-
-```console
-grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
-```
-> [!WARNING]
-> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
-
-## Request approval to run the container
-
-Fill out and submit the [request form](https://aka.ms/csgate) to request access to the container.
--
-## Speech container images
-
-# [Speech-to-text](#tab/stt)
-
-The Speech-to-text container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags).
-
-| Container | Repository |
-|--||
-| Speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
-
-# [Custom speech-to-text](#tab/cstt)
-
-The Custom Speech-to-text container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags).
-
-| Container | Repository |
-|--||
-| Custom speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
-
-# [Neural text-to-speech](#tab/ntts)
-
-The Neural Text-to-speech container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/about).
-
-| Container | Repository |
-|--||
-| Neural text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
-
-# [Speech language identification](#tab/lid)
-
-The Speech language detection container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags).
-
-> [!TIP]
-> To get the most useful results, use the Speech language identification container with the speech-to-text or custom speech-to-text containers.
-
-| Container | Repository |
-|--||
-| Speech language identification | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
-
-***
-
-### Get the container image with docker pull
-
-# [Speech-to-text](#tab/stt)
-
-#### Docker pull for the speech-to-text container
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
-```
-
-> [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale. For additional locales, see [Speech-to-text locales](#speech-to-text-locales).
-
-#### Speech-to-text locales
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<locale>-<prerelease>
-```
-
-The following tag is an example of the format:
-
-```
-2.6.0-amd64-en-us
-```
-
-For all the supported locales of the speech-to-text container, see [Speech-to-text image tags](../containers/container-image-tags.md#speech-to-text).
-
-# [Custom speech-to-text](#tab/cstt)
-
-#### Docker pull for the custom speech-to-text container
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest
-```
-
-> [!NOTE]
-> The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container.
-
-# [Neural text-to-speech](#tab/ntts)
-
-#### Docker pull for the neural text-to-speech container
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest
-```
-
-> [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale and `arianeural` voice. For more locales, see [Neural text-to-speech locales](#neural-text-to-speech-locales).
-
-#### Neural text-to-speech locales
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<locale>-<voice>
-```
-
-The following tag is an example of the format:
-
-```
-1.3.0-amd64-en-us-arianeural
-```
-
-For all the supported locales and corresponding voices of the neural text-to-speech container, see [Neural text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech).
-
-> [!IMPORTANT]
-> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container [locale and voice](language-support.md?tabs=tts). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
-
-# [Speech language identification](#tab/lid)
-
-#### Docker pull for the Speech language identification container
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest
-```
-
-***
-## Use the container
-
-After the container is on the [host computer](#host-computer-requirements-and-recommendations), use the following process to work with the container.
-
-1. [Run the container](#run-the-container-with-docker-run) with the required billing settings. More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. For more information on how to get the `{Endpoint_URI}` and `{API_Key}` values, see [Gather required parameters](#gather-required-parameters). More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are also available.
-
-> [!NOTE]
-> For general container requirements, see [Container requirements and recommendations](#container-requirements-and-recommendations).
+## Host computer requirements and recommendations
-# [Speech-to-text](#tab/stt)
+The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
-### Run the container connected to the internet
+* [Azure Kubernetes Service](~/articles/aks/index.yml).
+* [Azure Container Instances](~/articles/container-instances/index.yml).
+* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
-To run the standard speech-to-text container, execute the following `docker run` command:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a *speech-to-text* container from the container image.
-* Allocates 4 CPU cores and 8 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
> [!NOTE] > Containers support compressed audio input to the Speech SDK by using GStreamer.
-> To install GStreamer in a container,
-> follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
-
-### Run the container disconnected from the internet
+> To install GStreamer in a container, follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
-
-The speech-to-text container provide a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
-
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
+### Advanced Vector Extension support
-Below is a sample command to set file/directory ownership.
+The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```console
+grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
```
+> [!WARNING]
+> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
-### Diarization on the speech-to-text output
-
-Diarization is enabled by default. To get diarization in your response, use `diarize_speech_config.set_service_property`.
-
-1. Set the phrase output format to `Detailed`.
-2. Set the mode of diarization. The supported modes are `Identity` and `Anonymous`.
-
- ```python
- diarize_speech_config.set_service_property(
- name='speechcontext-PhraseOutput.Format',
- value='Detailed',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
- )
-
- diarize_speech_config.set_service_property(
- name='speechcontext-phraseDetection.speakerDiarization.mode',
- value='Identity',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
- )
- ```
-
- > [!NOTE]
- > "Identity" mode returns `"SpeakerId": "Customer"` or `"SpeakerId": "Agent"`.
- > "Anonymous" mode returns `"SpeakerId": "Speaker 1"` or `"SpeakerId": "Speaker 2"`.
-
-### Analyze sentiment on the speech-to-text output
-
-Starting in v2.6.0 of the speech-to-text container, you should use Language service 3.0 API endpoint instead of the preview one. For example:
-
-* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment`
-* `https://localhost:5000/text/analytics/v3.0/sentiment`
-
-> [!NOTE]
-> The Language service `v3.0` API isn't backward compatible with `v3.0-preview.1`. To get the latest sentiment feature support, use `v2.6.0` of the speech-to-text container image and Language service `v3.0`.
+## Run the container
-Starting in v2.2.0 of the speech-to-text container, you can call the [sentiment analysis v3 API](../text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md) on the output. To call sentiment analysis, you'll need a Language service API resource endpoint. For example:
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Once running, the container continues to run until you [stop the container](#stop-the-container).
-* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0-preview.1/sentiment`
-* `https://localhost:5000/text/analytics/v3.0-preview.1/sentiment`
+Note the following best practices for the `docker run` command:
-If you're accessing a Language service endpoint in the cloud, you'll need a key. If you're running Language service features locally, you might not need to provide this.
+- **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements (see the sketch after this list).
+- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers.
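
For example, the same command can be written for different shells. This sketch is illustrative only; the endpoint and key values are placeholders.

```bash
# Bash (Linux, macOS, WSL): a backslash continues the command on the next line.
docker run --rm -it -p 5000:5000 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}

# In Windows PowerShell the continuation character is a backtick (`), and in
# cmd.exe it's a caret (^). Alternatively, put the entire command on one line.
```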
-The key and endpoint are passed to the Speech container as arguments, as in the following example:
+You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. The following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
```bash
-docker run -it --rm -p 5000:5000 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-CloudAI:SentimentAnalysisSettings:TextAnalyticsHost={TEXT_ANALYTICS_HOST} \
-CloudAI:SentimentAnalysisSettings:SentimentAnalysisApiKey={SENTIMENT_APIKEY}
-```
-
-This command:
-
-* Performs the same steps as the preceding command.
-* Stores a Language service API endpoint and key, for sending sentiment analysis requests.
-
-### Phraselist v2 on the speech-to-text output
-
-Starting in v2.6.0 of the speech-to-text container, you can get the output with your own phrases, either the whole sentence or phrases in the middle. For example, *the tall man* in the following sentence:
-
-* "This is a sentence **the tall man** this is another sentence."
-
-To configure a phrase list, you need to add your own phrases when you make the call. For example:
-
-```python
- phrase="the tall man"
- recognizer = speechsdk.SpeechRecognizer(
- speech_config=dict_speech_config,
- audio_config=audio_config)
- phrase_list_grammer = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
- phrase_list_grammer.addPhrase(phrase)
-
- dict_speech_config.set_service_property(
- name='setflight',
- value='xonlineinterp',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
- )
+docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
```
-If you have multiple phrases to add, call `.addPhrase()` for each phrase to add it to the phrase list.
-
-# [Custom speech-to-text](#tab/cstt)
-
-The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-
-The custom speech **Model ID** is required to run the container. For more information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
-
-Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
-
-![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
-
-The following table represents the various `docker run` parameters and their corresponding descriptions:
-
-| Parameter | Description |
-|||
-| `{VOLUME_MOUNT}` | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which Docker uses to persist the custom model. An example is *C:\CustomSpeech* where the C drive is located on the host machine. |
-| `{MODEL_ID}` | The custom speech model ID. For more information, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). |
-| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [Gather required parameters](#gather-required-parameters). |
-| `{API_KEY}` | The API key is required. For more information, see [Gather required parameters](#gather-required-parameters). |
-
-To run the custom speech-to-text container, execute the following `docker run` command:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
--v {VOLUME_MOUNT}:/usr/local/models \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-ModelId={MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
+Here's an example result:
```-
-This command:
-
-* Runs a custom speech-to-text container from the container image.
-* Allocates 4 CPU cores and 8 GB of memory.
-* Loads the custom speech-to-text model from the volume input mount, for example, *C:\CustomSpeech*.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Downloads the model given the `ModelId` (if not found on the volume mount).
-* If the custom model was previously downloaded, the `ModelId` is ignored.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-#### Base model download on the custom speech-to-text container
-
-Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account. For example:
-
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-BaseModelLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
+IMAGE ID REPOSITORY TAG
+<image-id> <repository-path/name> <tag-name>
```
-This command:
+## Validate that a container is running
-* Runs a custom speech-to-text container from the container image.
-* Checks and returns the available base models of the target locale.
+There are several ways to validate that the container is running. Locate the *External IP* address and exposed port of the container in question, and open your favorite web browser. Use the various request URLs that follow to validate the container is running.
-The output gives you a list of base models with the information locale, model ID, and creation date time. You can use the model ID to download and use the specific base model you prefer. For example:
-```
-Checking available base model for en-us
-2020/10/30 21:54:20 [Info] Searching available base models for en-us
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05
-2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us
-```
+The example request URLs listed here are `http://localhost:5000`, but your specific container might vary. Make sure to rely on your container's *External IP* address and exposed port.
-#### Display model download on the custom speech-to-text container
-Starting in v3.1.0 of the custom-speech-to-text container, you can get the available display models information and choose to download those models into your speech-to-text container to get highly improved final display output.
+| Request URL | Purpose |
+|--|--|
+| `http://localhost:5000/` | The container provides a home page. |
+| `http://localhost:5000/ready` | Requested with GET, this URL provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/status` | Also requested with GET, this URL verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
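
For example, you can check these endpoints from the command line. This sketch assumes the default `localhost:5000` mapping shown above.

```bash
# Home page: confirms that the container is listening
curl -i http://localhost:5000/

# Readiness probe: returns success when the container is ready to accept queries
curl -i http://localhost:5000/ready

# Status: verifies the api-key used to start the container without running a query
curl -i http://localhost:5000/status
```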
-You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models.
-
-Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example:
+## Stop the container
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
-BaseModelLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
+To shut down the container, in the command-line environment where the container is running, select <kbd>Ctrl+C</kbd>.
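
If you started the container detached (with `-d`) rather than interactively, you can stop it with the Docker CLI instead. The container name here is a placeholder.

```bash
# List running containers to find the ID or name
docker ps

# Stop the container gracefully
docker stop <container-id-or-name>
```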
-Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example:
+## Run multiple containers on the same host
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
-DisplayLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
+If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter:
+You can run this container and a different Cognitive Services container on the host together. You can also run multiple instances of the same Cognitive Services container.
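
As an illustrative sketch, the following commands run two instances detached, mapped to different host ports. The image and billing values are placeholders; substitute your own.

```bash
# First instance on host port 5000
docker run --rm -d -p 5000:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}

# Second instance on host port 5001 (the container still listens on 5000 internally)
docker run --rm -d -p 5001:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
```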
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-RescoreId={RESCORE_MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
+## Host URLs
> [!NOTE]
-> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models).
-
-#### Custom pronunciation on the custom speech-to-text container
-
-Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container.
--
-### Run the container disconnected from the internet
-
-To use this container disconnected from the internet, you must first request access by filling out an application, and purchasing a commitment plan. See [Use Docker containers in disconnected environments](../containers/disconnected-containers.md) for more information.
-
-In order to prepare and configure the Custom Speech-to-Text container you will need two separate speech resources:
-
-1. A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This will be used to train, download, and configure your custom speech models for use in your container.
-1. An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.
-
-Download the docker container and run it to get the required speech model as [described above](#get-the-container-image-with-docker-pull) using the regular Azure Speech resource. Next, you will need to download your disconnected license file.
-
-The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container.
-
-| Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
-
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+> Use a unique port number if you're running multiple containers.
-Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
- `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` |
-| `{OUTPUT_PATH}` | The output path for logging [usage records](../containers/disconnected-containers.md#usage-records). | `/host/output:/path/to/output/directory` |
-| `{MODEL_PATH}` | The path where the model is located. | `/path/to/model/` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
+| Protocol | Host URL | Containers |
+|--|--|--|
+| WS | `ws://localhost:5000` | [Speech-to-text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech-to-text](speech-container-cstt.md#use-the-container) |
+| HTTP | `http://localhost:5000` | [Neural text-to-speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) |
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \ --v {OUTPUT_PATH} \--v {MODEL_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
+For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security) in the Azure Cognitive Services documentation.
-The [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
+## Troubleshooting
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
+When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues.
-Below is a sample command to set file/directory ownership.
+> [!TIP]
+> For more troubleshooting information and guidance, see [Cognitive Services containers frequently asked questions (FAQ)](../containers/container-faq.yml) in the Azure Cognitive Services documentation.
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
-# [Neural text-to-speech](#tab/ntts)
+### Logging settings
-To run the neural text-to-speech container, execute the following `docker run` command:
+Speech containers come with ASP.NET Core logging support. Here's an example of the `neural-text-to-speech` container started with default logging to the console:
```bash
docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
Eula=accept \
Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
+ApiKey={API_KEY} \
+Logging:Console:LogLevel:Default=Information
```
-This command:
-
-* Runs a neural text-to-speech container from the container image.
-* Allocates 6 CPU cores and 12 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
--
-### Run the container disconnected from the internet
-
+For more information about logging, see [Configure Speech containers](speech-container-configuration.md#logging-settings) and [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation.
+## Microsoft diagnostics container
-The neural text-to-speech container provide a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
+If you're having trouble running a Cognitive Services container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Cognitive Services containers from functioning as expected.
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
-
-Below is a sample command to set file/directory ownership.
+To get the container, use the following `docker pull` command:
```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+docker pull mcr.microsoft.com/azure-cognitive-services/diagnostic
``` -
-# [Speech language identification](#tab/lid)
-
-To run the Speech language identification container, execute the following `docker run` command:
+Then run the container. Replace `{ENDPOINT_URI}` with your endpoint, and replace `{API_KEY}` with the key for your resource:
```bash
-docker run --rm -it -p 5003:5003 --memory 1g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
-Eula=accept \
+docker run --rm mcr.microsoft.com/azure-cognitive-services/diagnostic \
+eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```
-This command:
-
-* Runs a Speech language-detection container from the container image. Currently, you won't be charged for running this image.
-* Allocates 1 CPU core and 1 GB of memory.
-* Exposes TCP port 5003 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-If you want to run this container with the speech-to-text container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`:
-
-```Docker
-docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000
-```
-
-Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls.
-
-***
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container. Otherwise, the container won't start. For more information, see [Billing](#billing).
-
-## Query the container's prediction endpoint
-
-> [!NOTE]
-> Use a unique port number if you're running multiple containers.
-
-| Containers | SDK Host URL | Protocol |
-|--|--|--|
-| Standard speech-to-text and custom speech-to-text | `ws://localhost:5000` | WS |
-| Neural Text-to-speech, Speech language identification | `http://localhost:5000` | HTTP |
-
-For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security).
-
-### Speech-to-text (standard and custom)
--
-#### Analyze sentiment
-
-If you provided your Language service API credentials [to the container](#analyze-sentiment-on-the-speech-to-text-output), you can use the Speech SDK to send speech recognition requests with sentiment analysis. You can configure the API responses to use either a *simple* or *detailed* format.
-
-> [!NOTE]
-> v1.13 of the Speech Service Python SDK has an identified issue with sentiment analysis. Use v1.12.x or earlier if you're using sentiment analysis in the Speech Service Python SDK.
-
-# [Simple format](#tab/simple-format)
-
-To configure the Speech client to use a simple format, add `"Sentiment"` as a value for `Simple.Extensions`. If you want to choose a specific Language service model version, replace `'latest'` in the `speechcontext-phraseDetection.sentimentAnalysis.modelversion` property configuration.
-
-```python
-speech_config.set_service_property(
- name='speechcontext-PhraseOutput.Simple.Extensions',
- value='["Sentiment"]',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-speech_config.set_service_property(
- name='speechcontext-phraseDetection.sentimentAnalysis.modelversion',
- value='latest',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-```
-
-`Simple.Extensions` returns the sentiment result in the root layer of the response.
-
-```json
-{
- "DisplayText":"What's the weather like?",
- "Duration":13000000,
- "Id":"6098574b79434bd4849fee7e0a50f22e",
- "Offset":4700000,
- "RecognitionStatus":"Success",
- "Sentiment":{
- "Negative":0.03,
- "Neutral":0.79,
- "Positive":0.18
- }
-}
-```
-
-# [Detailed format](#tab/detailed-format)
-
-To configure the Speech client to use a detailed format, add `"Sentiment"` as a value for `Detailed.Extensions`, `Detailed.Options`, or both. If you want to choose a specific sentiment analysis model version, replace `'latest'` in the `speechcontext-phraseDetection.sentimentAnalysis.modelversion` property configuration.
-
-```python
-speech_config.set_service_property(
- name='speechcontext-PhraseOutput.Detailed.Options',
- value='["Sentiment"]',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-speech_config.set_service_property(
- name='speechcontext-PhraseOutput.Detailed.Extensions',
- value='["Sentiment"]',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-speech_config.set_service_property(
- name='speechcontext-phraseDetection.sentimentAnalysis.modelversion',
- value='latest',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-```
-
-`Detailed.Extensions` provides the sentiment result in the root layer of the response. `Detailed.Options` provides the result in the `NBest` layer of the response. They can be used separately or together.
-
-```json
-{
- "DisplayText":"What's the weather like?",
- "Duration":13000000,
- "Id":"6a2aac009b9743d8a47794f3e81f7963",
- "NBest":[
- {
- "Confidence":0.973695,
- "Display":"What's the weather like?",
- "ITN":"what's the weather like",
- "Lexical":"what's the weather like",
- "MaskedITN":"What's the weather like",
- "Sentiment":{
- "Negative":0.03,
- "Neutral":0.79,
- "Positive":0.18
- }
- },
- {
- "Confidence":0.9164971,
- "Display":"What is the weather like?",
- "ITN":"what is the weather like",
- "Lexical":"what is the weather like",
- "MaskedITN":"What is the weather like",
- "Sentiment":{
- "Negative":0.02,
- "Neutral":0.88,
- "Positive":0.1
- }
- }
- ],
- "Offset":4700000,
- "RecognitionStatus":"Success",
- "Sentiment":{
- "Negative":0.03,
- "Neutral":0.79,
- "Positive":0.18
- }
-}
-```
--
-If you want to completely disable sentiment analysis, add a `false` value to `sentimentanalysis.enabled`.
-
-```python
-speech_config.set_service_property(
- name='speechcontext-phraseDetection.sentimentanalysis.enabled',
- value='false',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-```
-
-### Neural Text-to-Speech
--
-### Run multiple containers on the same host
-
-If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-
-You can have this container and a different Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running.
--
-## Stop the container
--
-## Troubleshooting
-
-When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues.
-
+The container will test for network connectivity to the billing endpoint.
+## Run disconnected containers
-## Billing
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying for and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
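
Once approved, the general pattern described in the disconnected containers guidance is to first run the container online with `DownloadLicense=True` to fetch a license file, and then run it offline with the license and output directories mounted. The following is a sketch with placeholder values only; see the linked article for the authoritative steps.

```bash
# Download the license file while the host is still connected to the internet
docker run --rm -it -p 5000:5000 \
-v {LICENSE_MOUNT} \
{IMAGE} \
eula=accept \
billing={ENDPOINT_URI} \
apikey={API_KEY} \
DownloadLicense=True \
Mounts:License={CONTAINER_LICENSE_DIRECTORY}
```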
-The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
--
-For more information about these options, see [Configure containers](speech-container-configuration.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for how to download, install, and run Speech containers. In summary:
-
-* Speech provides four Linux containers for Docker that have various capabilities:
- * Speech-to-text
- * Custom speech-to-text
- * Neural text-to-speech
- * Speech language identification
-* Container images are downloaded from the container registry in Azure.
-* Container images run in Docker.
-* Whether you use the REST API (text-to-speech only) or the SDK (speech-to-text or text-to-speech), you specify the host URI of the container.
-* You're required to provide billing information when you instantiate a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers aren't licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers don't send customer data (for example, the image or text that's being analyzed) to Microsoft.
## Next steps

* Review [configure containers](speech-container-configuration.md) for configuration settings.
* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md).
-* Use more [Cognitive Services containers](../cognitive-services-container-support.md).
+* Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md)
+* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md).
cognitive-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-lid.md
+
+ Title: Language identification containers - Speech service
+
+description: Install and run language identification containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Language identification containers with Docker
+
+The Speech language identification container detects the language spoken in audio files. It can process continuous real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a language identification container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+>
+> The Speech language identification container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+> [!TIP]
+> To get the most useful results, use the Speech language identification container with the [speech-to-text](speech-container-stt.md) or [custom speech-to-text](speech-container-cstt.md) containers.
+
+## Container images
+
+The Speech language identification container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--||
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
+| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<prerelease>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/language-detection",
+ "tags": [
+ "1.1.0-amd64-preview",
+ "1.11.0-amd64-preview",
+ "1.3.0-amd64-preview",
+ "1.5.0-amd64-preview",
+ <--redacted for brevity-->
+ "1.8.0-amd64-preview",
+ "latest"
+ ]
+}
+```
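
For example, you can retrieve this list from the command line; the response is the JSON body shown above.

```bash
# List all available tags for the language-detection image
curl -s https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list
```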
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites), including the required hardware. Also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest
+```
++
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|||
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the Speech language identification container, configure the port, memory, and CPU according to the language identification container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5003 --memory 1g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a Speech language identification container from the container image.
+* Allocates 1 CPU core and 1 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
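+
+After the container starts, you can verify that it's ready to accept requests by querying the standard Cognitive Services container diagnostics endpoints on the mapped host port. This sketch assumes the `-p 5000:5003` mapping from the example above; see the install guide linked earlier for details on validating that a container is running:
+
+```bash
+# Returns HTTP 200 when the container is ready to accept queries.
+curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:5000/ready"
+```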
+
+## Run with the speech-to-text container
+
+If you want to run the language identification container with the [speech-to-text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`:
+
+```bash
+docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000
+```
+
+Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls.
+
+## Use the container
++
+[Try language identification](language-identification.md) using host authentication instead of key and region. When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Container Ntts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-ntts.md
+
+ Title: Neural text-to-speech containers - Speech service
+
+description: Install and run neural text-to-speech containers with Docker to perform speech synthesis and more on-premises.
++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Text-to-speech containers with Docker
+
+The neural text-to-speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. In this article, you'll learn how to download, install, and run a neural text-to-speech container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The neural text-to-speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`. Append a specific version tag, or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--||
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest`<br/><br/>The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. |
+| 2.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<voice>-<preview>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/neural-text-to-speech",
+ "tags": [
+ "1.10.0-amd64-cs-cz-antoninneural",
+ "1.10.0-amd64-cs-cz-vlastaneural",
+ "1.10.0-amd64-de-de-conradneural",
+ "1.10.0-amd64-de-de-katjaneural",
+ "1.10.0-amd64-en-au-natashaneural",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+> [!IMPORTANT]
+> We retired the standard speech synthesis voices and standard [text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md).
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites), including the required hardware. Also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest
+```
+
+> [!IMPORTANT]
+> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text-to-speech container images](#container-images).
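+
+For example, to pull the specific version and locale listed in the table earlier in this article instead of `latest`, include the full tag:
+
+```bash
+# Pulls the specific neural text-to-speech image version shown in the container images table.
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in
+```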
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+# [Neural text to speech](#tab/container)
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|||
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the text-to-speech container, configure the port, memory, and CPU according to the text-to-speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a neural text-to-speech container from the container image.
+* Allocates 6 CPU cores and 12 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+# [Disconnected neural text to speech](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date; after that date, the file can't be used to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before you run the container.
+
+Here's a sample command to set file and directory ownership:
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+++
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
+
+## Use the container
++
+[Try the text-to-speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region.
+
+### SSML voice element
+
+When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model.
+
+For example, a model that was downloaded via the `latest` tag (defaults to "en-US") would have a voice name of `en-US-AriaNeural`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-AriaNeural">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
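+
+As a rough sketch of querying the container directly over HTTP, the following `curl` call posts the SSML above to the mapped host port and saves the audio response. The `/cognitiveservices/v1` route and the output format header are assumptions that mirror the public text-to-speech REST API; confirm the exact route on your container's `/swagger` page:
+
+```bash
+# Assumes the container was started with -p 5000:5000.
+# The route and output format mirror the public text-to-speech REST API (assumption).
+curl -s -X POST "http://localhost:5000/cognitiveservices/v1" \
+  -H "Content-Type: application/ssml+xml" \
+  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
+  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"><voice name="en-US-AriaNeural">This is the text that is spoken.</voice></speak>' \
+  -o output.wav
+```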
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md
+
+ Title: Speech containers overview - Speech service
+
+description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+keywords: on-premises, Docker, container
++
+# Speech containers overview
+
+By using containers, you can use a subset of the Speech service features in your own environment. With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Containers are great for specific security and data governance requirements.
+
+> [!NOTE]
+> You must [request and get approval](#request-approval-to-run-the-container) to use a Speech container.
+
+## Available Speech containers
+
+The following table lists the Speech containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
+
+| Container | Features | Supported versions and locales |
+|--|--|--|
+| [Speech-to-text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
+| [Custom speech-to-text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list). |
+| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
+| [Neural text-to-speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
+
+<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
+<sup>2</sup> Not available as a disconnected container.
+
+## Request approval to run the container
+
+To use the Speech containers, you must submit one of the following request forms and wait for approval:
+- [Connected containers request form](https://aka.ms/csgate) if you want to run containers in environments that are connected to the internet.
+- [Disconnected Container request form](https://aka.ms/csdisconnectedcontainers) if you want to run containers in environments that can be disconnected from the internet. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
+
+The form requests information about you, your company, and the user scenario for which you'll use the container.
+
+* On the form, you must use an email address associated with an Azure subscription ID.
+* The Azure resource you use to run the container must have been created with the approved Azure subscription ID.
+* Check your email for updates on the status of your application from Microsoft.
+
+After you submit the form, the Azure Cognitive Services team reviews it and emails you with a decision within 10 business days.
+
+> [!IMPORTANT]
+> To use the Speech containers, your request must be approved.
+
+While you're waiting for approval, you can [set up the prerequisites](speech-container-howto.md#prerequisites) on your host computer. You can also download the container from the Microsoft Container Registry (MCR). You can run the container after your request is approved.
+
+## Billing
+
+The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
+
+> [!NOTE]
+> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. For more information, see [billing arguments](speech-container-howto.md#billing-arguments).
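+
+For example, the billing endpoint and key are typically supplied as `docker run` arguments. The following is an abbreviated sketch using the speech-to-text image and placeholder values; see each container's article for a complete command:
+
+```bash
+# Placeholder endpoint and key values; replace them with your Speech resource's values.
+docker run --rm -it -p 5000:5000 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept \
+Billing=https://<your-speech-resource>.cognitiveservices.azure.com \
+ApiKey=<your-speech-resource-key>
+```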
+
+## Container recipes and other container services
+
+You can use container recipes to create containers that can be reused. Containers can be built with some or all configuration settings so that they aren't needed when the container is started. For container recipes, see the following Azure Cognitive Services articles:
+- [Create containers for reuse](../containers/container-reuse-recipe.md)
+- [Deploy and run container on Azure Container Instance](../containers/azure-container-instance-recipe.md)
+- [Deploy a language detection container to Azure Kubernetes Service](../containers/azure-kubernetes-recipe.md)
+- [Use Docker Compose to deploy multiple containers](../containers/docker-compose-recipe.md)
+
+For information about other container services, see the following Azure Cognitive Services articles:
+- [Tutorial: Create a container image for deployment to Azure Container Instances](../../container-instances/container-instances-tutorial-prepare-app.md)
+- [Quickstart: Create a private container registry using the Azure CLI](../../container-registry/container-registry-get-started-azure-cli.md)
+- [Tutorial: Prepare an application for Azure Kubernetes Service (AKS)](../../aks/tutorial-kubernetes-prepare-app.md)
+
+## Next steps
+
+* [Install and run Speech containers](speech-container-howto.md)
++
cognitive-services Speech Container Stt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-stt.md
+
+ Title: Speech-to-text containers - Speech service
+
+description: Install and run speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Speech-to-text containers with Docker
+
+The Speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech-to-text container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The Speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. Append a specific version tag, or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--||
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. |
+| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<locale>-<prerelease>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/speech-to-text",
+ "tags": [
+ "2.10.0-amd64-ar-ae",
+ "2.10.0-amd64-ar-bh",
+ "2.10.0-amd64-ar-eg",
+ "2.10.0-amd64-ar-iq",
+ "2.10.0-amd64-ar-jo",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites), including the required hardware. Also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
+```
+
+> [!IMPORTANT]
+> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech-to-text container images](#container-images).
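+
+For example, to pull the specific version and locale listed in the table earlier in this article instead of `latest`, include the full tag:
+
+```bash
+# Pulls the specific speech-to-text image version shown in the container images table.
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in
+```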
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+# [Speech to text](#tab/container)
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|||
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the speech-to-text container, configure the port, memory, and CPU according to the speech-to-text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+* Runs a `speech-to-text` container from the container image.
+* Allocates 4 CPU cores and 8 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+# [Disconnected speech to text](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date; after that date, the file can't be used to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before you run the container.
+
+Here's a sample command to set file and directory ownership:
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+++
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
++
+## Use the container
++
+[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
+
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
This section describes text-to-speech quotas and limits per Speech resource. Unl
| Quota | Free (F0)| Standard (S0) | |--|--|--|
-| File size | 3,000 characters per file | 20,000 characters per file |
+| File size (plain text in SSML)<sup>1</sup> | 3,000 characters per file | 20,000 characters per file |
+| File size (lexicon file)<sup>2</sup> | 3,000 characters per file | 20,000 characters per file |
+| Billable characters in SSML| 15,000 characters per file | 100,000 characters per file |
| Export to audio library | 1 concurrent task | N/A |
+<sup>1</sup> The limit only applies to plain text in SSML and doesn't include tags.
+
+<sup>2</sup> The limit includes all text, including tags. The characters of the lexicon file aren't charged; only the lexicon elements in SSML are counted as billable characters. For more information, see [billable characters](text-to-speech.md#billable-characters).
+ ### Speaker recognition quotas and limits per resource Speaker recognition is limited to 20 transactions per second (TPS).
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
Last updated 08/17/2020 -+
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/release-notes.md
Last updated 11/04/2022 -+ # Custom Translator release notes
cognitive-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-rest-api-programmatically.md
Previously updated : 03/22/2023 Last updated : 04/17/2023 recommendations: false ms.devlang: csharp, golang, java, javascript, python
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
+For detailed information regarding Azure Translator Service request limits, _see_ [**Document Translation request limits**](../../request-limits.md#document-translation).
+ ### HTTP headers The following headers are included with each Document Translation API request:
func main() {
-## Content limits
-
-This table lists the limits for data that you send to Document Translation:
-
-|Attribute | Limit|
-|||
-|Document size| Γëñ 40 MB |
-|Total number of files.|Γëñ 1000 |
-|Total content size in a batch | Γëñ 250 MB|
-|Number of target languages in a batch| Γëñ 10 |
-|Size of Translation memory file| Γëñ 10 MB|
-
-Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
-
-## Troubleshooting
- ### Common HTTP status codes | HTTP status code | Description | Possible reason |
Document Translation can't be used to translate secured documents such as those
> [!div class="nextstepaction"] > [Create a customized language system using Custom Translator](../../custom-translator/overview.md)
->
->
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
Previously updated : 03/24/2023 Last updated : 04/17/2023 recommendations: false
Document Translation is a cloud-based feature of the [Azure Translator](../trans
> [!NOTE] > When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
-> For more information on input requirements, *see* [content limits](get-started-with-document-translation.md#content-limits)
+> For more information on input requirements, *see* [Document Translation request limits](../request-limits.md#document-translation)
## Document Translation development options
Document Translation supports the following document file types:
|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.| |Text|`txt`| An unformatted text document.|
+## Request limits
+
+For detailed information regarding Azure Translator Service request limits, *see* [**Document Translation request limits**](../request-limits.md#document-translation).
+ ### Legacy file types Source file types are preserved during the document translation with the following **exceptions**:
cognitive-services Get Started With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md
For this project, you need a **source document** uploaded to your **source conta
A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
+For detailed information regarding Azure Translator Service request limits, *see* [**Document Translation request limits**](../../request-limits.md#document-translation).
+ ### Headers The following headers are included with each Document Translation API request:
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
Header|Value| Condition |
The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
+For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
+ ### [C#: Visual Studio](#tab/csharp) ### Set up your Visual Studio project
cognitive-services Request Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/request-limits.md
Title: Request limits - Translator
+ Title: Request limits - Translator Service
-description: This article lists request limits for the Translator. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
+description: This article lists request limits for the Translator text and document translation. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
Previously updated : 08/17/2022 Last updated : 04/17/2023
-# Request limits for Translator
+# Request limits for Azure Translator Service
-This article provides throttling limits for the Translator translation, transliteration, sentence length detection, language detection, and alternate translations.
+This article provides both a quick reference and detailed description of Azure Translator Service character and array limits for text and document translation.
-## Character and array limits per request
+## Text translation
-Each translate request is limited to 50,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfy the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests.
+Charges are incurred based on character count, not request frequency. Character limits are subscription-based.
-The following table lists array element and character limits for each operation of the Translator.
+### Character and array limits per request
+
+Each translate request is limited to 50,000 characters across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 &times; 3 = 9,000 characters, which meets the request limit. You're charged per character, not by the number of requests; therefore, it's recommended that you send shorter requests.
+
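+For example, the following request (a sketch with placeholder key and region values) sends one text to three target languages, so the text's characters count three times toward the per-request limit:
+
+```bash
+# Placeholder key and region values; replace them with your Translator resource's values.
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&to=fr&to=it" \
+  -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
+  -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
+  -H "Content-Type: application/json" \
+  -d '[{"Text": "This text is counted once per target language."}]'
+```
+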
+The following table lists array element and character limits for each text translation operation.
| Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) | |:-|:-|:-|:-|
-| Translate | 50,000| 1,000| 50,000 |
-| Transliterate | 5,000| 10| 5,000 |
-| Detect | 50,000 |100 |50,000 |
-| BreakSentence | 50,000| 100 |50,000 |
-| Dictionary Lookup| 100 |10| 1,000 |
-| Dictionary Examples | 100 for text and 100 for translation (200 total)| 10|2,000 |
+| **Translate** | 50,000| 1,000| 50,000 |
+| **Transliterate** | 5,000| 10| 5,000 |
+| **Detect** | 50,000 |100 |50,000 |
+| **BreakSentence** | 50,000| 100 |50,000 |
+| **Dictionary Lookup** | 100 |10| 1,000 |
+| **Dictionary Examples** | 100 for text and 100 for translation (200 total)| 10|2,000 |
-## Character limits per hour
+### Character limits per hour
Your character limit per hour is based on your Translator subscription tier.
Limits for [multi-service subscriptions](./reference/v3-0-reference.md#authentic
These limits are restricted to Microsoft's standard translation models. Custom translation models that use Custom Translator are limited to 3,600 characters per second, per model.
-## Latency
+### Latency
+
+The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
+
+## Document Translation
+
+This table lists the content limits for data sent using Document Translation:
+
+|Attribute | Limit|
+|--|--|
+|Document size| ≤ 40 MB |
+|Total number of files| ≤ 1,000 |
+|Total content size in a batch | ≤ 250 MB|
+|Number of target languages in a batch| ≤ 10 |
+|Size of Translation memory file| ≤ 10 MB|
-The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times will vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that timeframe, check your code, your network connection, and retry.
+> [!NOTE]
+> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
## Next steps
cognitive-services Translator Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-faq.md
Translator counts the following input:
* An individual letter. * Punctuation. * A space, tab, markup, or any white-space character.
-* A repeated translation, even if you've previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
+* A repeated translation, even if you have previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
For scripts based on graphic symbols, such as written Chinese and Japanese Kanji, the Translator service counts the number of Unicode code points. One character per symbol. Exception: Unicode surrogate pairs count as two characters. Calls to the **Detect** and **BreakSentence** methods aren't counted in the character consumption. However, we do expect calls to the Detect and BreakSentence methods to be reasonably proportionate to the use of other counted functions. If the number of Detect or BreakSentence calls exceeds the number of other counted methods by 100 times, Microsoft reserves the right to restrict your use of the Detect and BreakSentence methods.
+For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
+ ## Where can I see my monthly usage? The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can be used to estimate your costs. You can also monitor, view, and add Azure alerts for your Azure services in your user account in the Azure portal:
The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
:::image type="content" source="media/azure-portal-overview.png" alt-text="Screenshot of the subscription link on overview page in the Azure portal.":::
-2. In the left rail, make your selection under **Cost Management**:
+1. In the left rail, make your selection under **Cost Management**:
:::image type="content" source="media/azure-portal-cost-management.png" alt-text="Screenshot of the cost management resources links in the Azure portal."::: ## Is attribution required when using Translator?
-Attribution isn't required when using Translator for text and speech translation. It is recommended that you inform users that the content they're viewing is machine translated.
+Attribution isn't required when using Translator for text and speech translation. It's recommended that you inform users that the content they're viewing is machine translated.
If attribution is present, it must conform to the [Translator attribution guidelines](https://www.microsoft.com/translator/business/attribution/).
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
# What is Azure Cognitive Services Translator?
-Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
Translator documentation contains the following article types:
Translator documentation contains the following article types:
## Translator features and development options
-The following features are supported by the Translator service. Use the links in this table to learn more about each feature and browse the API references.
+Translator service supports the following features. Use the links in this table to learn more about each feature and browse the API references.
| Feature | Description | Development options | |-|-|--|
The following features are supported by the Translator service. Use the links in
| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/how-to-guides/use-client-sdks.md)</li></ul> | | [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> |
+For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
+ ## Try the Translator service for free
-First, you'll need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
+First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
-Next, you'll need to have an Azure accountΓÇönavigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
+Next, you need to have an Azure accountΓÇönavigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
Now, you're ready to get started! [**Create a Translator service**](how-to-create-translator-resource.md "Go to the Azure portal."), [**get your access keys and API endpoint**](how-to-create-translator-resource.md#authentication-keys-and-endpoint-url "An endpoint URL and read-only key are required for authentication."), and try our [**quickstart**](quickstart-translator.md "Learn to use Translator via REST.").
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| Service | Container | Description | Availability | |--|--|--|--|
-| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview |
Install and explore the functionality provided by containers in Azure Cognitive
[lu-containers]: luis/luis-container-howto.md [sp-containers]: speech-service/speech-container-howto.md [spa-containers]: ./computer-vision/spatial-analysis-container.md
-[sp-containers-lid]: speech-service/speech-container-howto.md?tabs=lid
-[sp-containers-stt]: speech-service/speech-container-howto.md?tabs=stt
-[sp-containers-cstt]: speech-service/speech-container-howto.md?tabs=cstt
-[sp-containers-tts]: speech-service/speech-container-howto.md?tabs=tts
-[sp-containers-ctts]: speech-service/speech-container-howto.md?tabs=ctts
-[sp-containers-ntts]: speech-service/speech-container-howto.md?tabs=ntts
+[sp-containers-lid]: speech-service/speech-container-lid.md
+[sp-containers-stt]: speech-service/speech-container-stt.md
+[sp-containers-cstt]: speech-service/speech-container-cstt.md
+[sp-containers-ntts]: speech-service/speech-container-ntts.md
[ta-containers]: language-service/overview.md#deploy-on-premises-using-docker-containers [ta-containers-keyphrase]: language-service/key-phrase-extraction/how-to/use-containers.md [ta-containers-language]: language-service/language-detection/how-to/use-containers.md
cognitive-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-reuse-recipe.md
Last updated 10/28/2021 #Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.-
-# SME: Siddhartha Prasad <siprasa@microsoft.com>
# Create containers for reuse
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Access is limited to customers that meet the following requirements:
**Speech service**
- * [Speech-to-Text](../speech-service/speech-container-howto.md?tabs=stt#run-the-container-disconnected-from-the-internet)
- * [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt#run-the-container-disconnected-from-the-internet-1)
- * [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts#run-the-container-disconnected-from-the-internet-2)
+ * [Speech-to-Text](../speech-service/speech-container-stt.md?tabs=disconnected#run-the-container-with-docker-run)
+ * [Custom Speech-to-Text](../speech-service/speech-container-cstt.md?tabs=disconnected#run-the-container-with-docker-run)
+ * [Neural Text-to-Speech](../speech-service/speech-container-ntts.md?tabs=disconnected#run-the-container-with-docker-run)
**Language service**
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
To delete an entity:
In CLU, use Azure OpenAI to suggest utterances to add to your project using GPT models. You first need to get access and create a resource in Azure OpenAI. You'll then need to create a deployment for the GPT models. Follow the pre-requisite steps [here](../../../openai/how-to/create-resource.md).
+Before you get started, note that the suggest utterances feature is only available if your Language resource is in one of the following regions:
+* East US
+* South Central US
+* West Europe
+ In the Data Labeling page: 1. Click on the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 03/31/2023 Last updated : 04/19/2023
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - |
-| ada | N/A | East US <sup>2</sup> | 2,049 | Oct 2019|
+| ada | N/A | South Central US, West Europe <sup>2</sup> | 2,049 | Oct 2019|
| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
-| babbage | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 |
+| babbage | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
-| curie | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 |
+| curie | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 |
| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
| davinci<sup>1</sup> | N/A | Currently unavailable | 2,049 | Oct 2019|
| text-davinci-001 | South Central US, West Europe | N/A | | |
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 |

<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
-<br><sup>2</sup> South Central US and West Europe were previously available, but due to high demand they are currently unavailable for new customers to use for fine-tuning. Please use the East US region for fine-tuning.
+<br><sup>2</sup> East US was previously available, but due to high demand this region is currently unavailable for new customers to use for fine-tuning. Please use the South Central US and West Europe regions for fine-tuning.
<br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of a newer version of the gpt-35 model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.

### GPT-4 Models
cognitive-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md
description: Learn about the options for how to use the ChatGPT and GPT-4 models
-+ Last updated 03/21/2023 keywords: ChatGPT
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 12 <br> All other models: 300 |
+| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 |
| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 |
| Max fine-tuned model deployments* | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
The following sections provide you with a quick guide to the quotas and limits t
*The limits are subject to change. We anticipate that you will need higher limits as you move toward production and your solution scales. When you know your solution requirements, please reach out to us by applying for a quota increase here: <https://aka.ms/oai/quotaincrease> + For information on max tokens for different models, consult the [models article](./concepts/models.md#model-summary-table-and-region-availability) ### General best practices to mitigate throttling during autoscaling
The next sections describe specific cases of adjusting quotas.
If you need to increase the limit, you can apply for a quota increase here: <https://aka.ms/oai/quotaincrease>
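While you wait for a quota or limit increase, a common mitigation for the requests-per-minute limits above is to retry throttled calls with exponential backoff. The following is a minimal sketch, not part of the article; `send_request` is a hypothetical callable you supply that performs one Azure OpenAI REST call and returns a `requests`-style response object.

```python
# Minimal retry-with-backoff sketch for handling HTTP 429 (throttled) responses.
# `send_request` is a hypothetical callable that performs one API call and
# returns a requests.Response-like object.
import random
import time

def call_with_backoff(send_request, max_retries=5):
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Honor the Retry-After header when present; otherwise back off exponentially with jitter.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    return send_request()  # final attempt; the caller handles a persistent 429
```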
+### How to request an increase to the number of resources per region
+
+If you need to increase the number of resources, you can apply for a resource increase here: <https://aka.ms/oai/resourceincrease>
+
+> [!NOTE]
+> Ensure that you thoroughly assess your current resource utilization and confirm that it's approaching full capacity. Be aware that we won't grant additional resources if we don't observe efficient usage of your existing resources.
+ ## Next steps Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
-+ Last updated 03/21/2023 recommendations: false keywords:
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |
+| | Receive your screen sharing stream | ❌ |
| | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ |
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |
+| | Receive your screen sharing stream | ❌ |
| | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ |
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |
+| | Receive your screen sharing stream | ❌ |
| | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ |
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
In this quickstart, you'll learn how to send email using our Email SDKs.
[!INCLUDE [Send Email with Python SDK](./includes/send-email-python.md)] ::: zone-end [!INCLUDE [Azure Logic Apps](./includes/send-email-logic-app.md)] ::: zone-end
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
- devx-track-js - mode-other - kr2b-contr-experiment
-zone_pivot_groups: acs-azcli-js-csharp-java-python-power-platform
+zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps
# Quickstart: Send an SMS message
communication-services Click To Call Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md
Enable your customers to talk with your support agent on Teams through a call interface directly embedded into your web application.
-## Architecture overview
## Prerequisites - An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Follow instructions from our [trusted user access service tutorial](../trusted-s
1. Create an HTML file named `https://docsupdatetracker.net/index.html` and add the following code to it:
-``` html
-
- <!DOCTYPE html>
- <html>
- <head>
- <meta charset="utf-8">
- <title>Call Widget App - Vanilla</title>
- <link rel="stylesheet" href="style.css">
- </head>
- <body>
- <div id="call-widget">
- <div id="call-widget-header">
- <div id="call-widget-header-title">Call Widget App</div>
- <button class='widget'> ? </button >
- <div class='callWidget'></div>
+ ``` html
+
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <meta charset="utf-8">
+ <title>Call Widget App - Vanilla</title>
+ <link rel="stylesheet" href="style.css">
+ </head>
+ <body>
+ <div id="call-widget">
+ <div id="call-widget-header">
+ <div id="call-widget-header-title">Call Widget App</div>
+ <button class='widget'> ? </button >
+ <div class='callWidget'></div>
+ </div>
</div>
- </div>
- </body>
- </html>
+ </body>
+ </html>
-```
+ ```
2. Create a CSS file named `style.css` and add the following code to it:
-``` css
-
- .widget {
- height: 75px;
- width: 75px;
- position: absolute;
- right: 0;
- bottom: 0;
- background-color: blue;
- margin-bottom: 35px;
- margin-right: 35px;
- border-radius: 50%;
- text-align: center;
- vertical-align: middle;
- line-height: 75px;
- color: white;
- font-size: 30px;
- }
-
- .callWidget {
- height: 400px;
- width: 600px;
- background-color: blue;
- position: absolute;
- right: 35px;
- bottom: 120px;
- z-index: 10;
- display: none;
- border-radius: 5px;
- border-style: solid;
- border-width: 5px;
- }
-
-```
-
-1. Configure the call window to be hidden by default. We show it when the user clicks the button.
-
-``` html
+ ``` css
+
+ .widget {
+ height: 75px;
+ width: 75px;
+ position: absolute;
+ right: 0;
+ bottom: 0;
+ background-color: blue;
+ margin-bottom: 35px;
+ margin-right: 35px;
+ border-radius: 50%;
+ text-align: center;
+ vertical-align: middle;
+ line-height: 75px;
+ color: white;
+ font-size: 30px;
+ }
+
+ .callWidget {
+ height: 400px;
+ width: 600px;
+ background-color: blue;
+ position: absolute;
+ right: 35px;
+ bottom: 120px;
+ z-index: 10;
+ display: none;
+ border-radius: 5px;
+ border-style: solid;
+ border-width: 5px;
+ }
+
+ ```
+
+3. Configure the call window to be hidden by default. We show it when the user clicks the button.
+
+ ``` html
+
+ <script>
+ var open = false;
+ const button = document.querySelector('.widget');
+ const content = document.querySelector('.callWidget');
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+ //Add code to initialize call widget here
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
- <script>
- var open = false;
- const button = document.querySelector('.widget');
- const content = document.querySelector('.callWidget');
- button.addEventListener('click', async function() {
- if(!open){
- open = !open;
- content.style.display = 'block';
- button.innerHTML = 'X';
- //Add code to initialize call widget here
- } else if (open) {
- open = !open;
- content.style.display = 'none';
- button.innerHTML = '?';
+ async function getAccessToken(){
+ //Add code to get access token here
}
- });
-
- async function getAccessToken(){
- //Add code to get access token here
- }
- </script>
+ </script>
-```
+ ```
At this point, we have set up a static HTML page with a button that opens a call widget when clicked. Next, we add the widget script code. It makes a call to our Azure Function to get the access token and then uses it to initialize our call client for Azure Communication Services and start the call to the Teams user we define.
Add the following code to the `getAccessToken()` function:
} ```
+
You need to add the URL of your Azure Function. You can find these values in the Azure portal under your Azure Function resource.
You need to add the URL of your Azure Function. You can find these values in the
1. Add a script tag to load the call widget script:
-``` html
+ ``` html
- <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script>
+ <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script>
-```
+ ```
We provide a test script hosted on GitHub for you to use for testing. For production scenarios, we recommend hosting the script on your own CDN. For more information on how to build your own bundle, see [this article](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page#build-your-own-composite-js-bundle-files).
-1. Add the following code under the button event listener:
+2. Add the following code under the button event listener:
-``` javascript
+ ``` javascript
- button.addEventListener('click', async function() {
- if(!open){
- open = !open;
- content.style.display = 'block';
- button.innerHTML = 'X';
- let response = await getChatContext();
- console.log(response);
- const callAdapter = await callComposite.loadCallComposite(
- {
- displayName: "Test User",
- locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']},
- userId: response.user,
- token: response.userToken
- },
- content,
- {
- formFactor: 'mobile',
- key: new Date()
- }
- );
- } else if (open) {
- open = !open;
- content.style.display = 'none';
- button.innerHTML = '?';
- }
- });
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+ let response = await getChatContext();
+ console.log(response);
+ const callAdapter = await callComposite.loadCallComposite(
+ {
+ displayName: "Test User",
+ locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']},
+ userId: response.user,
+ token: response.userToken
+ },
+ content,
+ {
+ formFactor: 'mobile',
+ key: new Date()
+ }
+ );
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
-```
+ ```
Add a Microsoft Graph [User](https://learn.microsoft.com/graph/api/resources/user?view=graph-rest-1.0) ID to the `participantIds` array. You can find this value through [Microsoft Graph](https://learn.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http) or through [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) for testing purposes. There you can grab the `id` value from the response.
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
You must have signed an Operator Connect agreement with Microsoft. For more info
You need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Basic Integration Included Benefit](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
-You must ensure you've got two or more numbers that you own which are globally routable. Your onboarding team needs these numbers to configure test lines.
+You must own globally routable numbers that you can use for testing, as follows.
+
+|Type of testing|Numbers required |
+|||
+|Automated validation testing by Microsoft Teams test suites|Minimum: 3. Recommended: 6 (to run tests simultaneously).|
+|Manual test calls made by you and/or Microsoft staff during integration testing |Minimum: 1|
We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
Collect all of the values in the following table for both service regions in whi
## 6. Collect Test Lines configuration values
-Collect all of the values in the following table for all test lines you want to configure for Azure Communications Gateway. You must configure at least one test line.
+Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway.
|**Value**|**Field name(s) in Azure portal**|
|||
|The name of the test line. |**Name**|
- |The phone number of the test line. |**Phone Number**|
- |Whether the test line is manual or automated: **Manual** test lines will be used by you and Microsoft staff to make test calls during integration testing. **Automated** test lines will be assigned to Microsoft Teams test suites for validation testing. |**Testing purpose**|
+ |The phone number of the test line, in E.164 format and including the country code. |**Phone Number**|
+ |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**|
+
+> [!IMPORTANT]
+> You must configure at least three automated test lines. We recommend six automated test lines (to allow simultaneous tests).
## 7. Decide if you want tags
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Now you can create the message queue.
```azurecli az storage queue create \
- --name 'myqueue" \
+ --name "myqueue" \
--account-name $STORAGE_ACCOUNT_NAME \ --connection-string $QUEUE_CONNECTION_STRING ```
Create a file named *queue.json* and paste the following configuration code into
"type": "String" }, "environment_name": {
- "defaultValue": "",
"type": "String" }, "queueconnection": {
- "defaultValue": "",
- "type": "String"
+ "type": "secureString"
} }, "variables": {},
container-registry Container Registry Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md
param location string = resourceGroup().location
@description('Provide a tier of your Azure Container Registry.') param acrSku string = 'Basic'
-resource acrResource 'Microsoft.ContainerRegistry/registries@2021-06-01-preview' = {
+resource acrResource 'Microsoft.ContainerRegistry/registries@2023-01-01-preview' = {
name: acrName location: location sku: {
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
Last updated 04/03/2023
[!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)]
-Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. The change data capture feature of the analytical store is seamlessly integrated with Azure Synapse and Azure Data Factory, providing you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO.
-
-> [!IMPORTANT]
-> This feature is currently in preview.
+Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. Seamlessly integrated with Azure Synapse and Azure Data Factory, it provides you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO.
The change data capture feature in Azure Cosmos DB analytical store can write to various sinks using an Azure Synapse or Azure Data Factory data flow.
For more information on supported sink types in a mapping data flow, see [data f
In addition to providing incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities:
-- Supports applying filters, projections and transformations on the Change feed via source query
- Supports capturing deletes and intermediate updates
- Ability to filter the change feed for a specific type of operation (**Insert** | **Update** | **Delete** | **TTL**)
-- Each change in Container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you
-- Changes can be synchronized from "the Beginning" or "from a given timestamp" or "from now"
-- There's no limitation around the fixed data retention period for which changes are available
+- Supports applying filters, projections and transformations on the Change feed via source query
- Multiple change feeds on the same container can be consumed simultaneously
+- Each change in container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you
+- Changes can be synchronized "from the Beginning" or "from a given timestamp" or "from now"
+- There's no limitation around the fixed data retention period for which changes are available
+
+> [!IMPORTANT]
+> Please note that "from the beginning" means that all data and all transactions since the container creation are available for CDC, including deletes and updates. To ingest and process deletes and updates, you have to use specific settings in your CDC processes in Azure Synapse or Azure Data Factory. These settings are turned off by default. For more information, see [Get started with change data capture](get-started-change-data-capture.md).
## Features
WHERE Category = 'Urban'
> [!NOTE] > If you would like to enable source-query based change data capture on Azure Data Factory data flows during preview, please email [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com) and share your **subscription Id** and **region**. This is not necessary to enable source-query based change data capture on an Azure Synapse data flow.
+### Multiple CDC processes
+
+You can create multiple processes to consume CDC in analytical store. This approach brings flexibility to support different scenarios and requirements. While one process may have no data transformations and multiple sinks, another one can have data flattening and one sink, and they can run in parallel.
+
+
### Throughput isolation, lower latency and lower TCO
Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. Change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing, which also reduces the overall TCO and enables you to drive cost efficiencies in these rapidly shifting economic conditions.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Title: What is Azure Cosmos DB analytical store?
-description: Learn about Azure Cosmos DB transactional (row-based) and analytical(column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store
+description: Learn about Azure Cosmos DB transactional (row-based) and analytical (column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store.
Previously updated : 03/24/2022 Last updated : 04/18/2023
Azure Cosmos DB transactional store is schema-agnostic, and it allows you to ite
The multi-model operational data in an Azure Cosmos DB container is internally stored in an indexed row-based "transactional store". Row store format is designed to allow fast transactional reads and writes in the order-of-milliseconds response times, and operational queries. If your dataset grows large, complex analytical queries can be expensive in terms of provisioned throughput on the data stored in this format. High consumption of provisioned throughput in turn, impacts the performance of transactional workloads that are used by your real-time applications and services.
-Traditionally, to analyze large amounts of data, operational data is extracted from Azure Cosmos DB's transactional store and stored in a separate data layer. For example, the data is stored in a data warehouse or data lake in a suitable format. This data is later used for large-scale analytics and analyzed using compute engine such as the Apache Spark clusters. This separation of analytical storage and compute layers from operational data results in additional latency, because the ETL(Extract, Transform, Load) pipelines are run less frequently to minimize the potential impact on your transactional workloads.
+Traditionally, to analyze large amounts of data, operational data is extracted from Azure Cosmos DB's transactional store and stored in a separate data layer. For example, the data is stored in a data warehouse or data lake in a suitable format. This data is later used for large-scale analytics and analyzed using compute engines such as Apache Spark clusters. The separation of analytical from operational data results in delays for analysts who want to use the most recent data.
The ETL pipelines also become complex when handling updates to the operational data when compared to handling only newly ingested operational data.
There's no impact on the performance of your transactional workloads due to anal
## Auto-Sync
-Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In cases of shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB Team](mailto:cosmosdbsynapselink@microsoft.com).
+Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, and deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In the case of a shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes.
At the end of each execution of the automatic sync process, your transactional data will be immediately available for Azure Synapse Analytics runtimes:
The following constraints are applicable on the operational data in Azure Cosmos
* Sample scenarios:
- * If your document's first level has 2000 properties, only the first 1000 will be represented.
- * If your documents have five levels with 200 properties in each one, all properties will be represented.
- * If your documents have 10 levels with 400 properties in each one, only the two first levels will be fully represented in analytical store. Half of the third level will also be represented.
+ * If your document's first level has 2000 properties, the sync process will represent the first 1000 of them.
+ * If your documents have five levels with 200 properties in each one, the sync process will represent all properties.
+ * If your documents have 10 levels with 400 properties in each one, the sync process will fully represent the first two levels and only half of the third level.
* The hypothetical document below contains four properties and three levels. * The levels are `root`, `myArray`, and the nested structure within the `myArray`.
df = spark.read\
* MinKey/MaxKey * When using DateTime strings that follow the ISO 8601 UTC standard, expect the following behavior:
- * Spark pools in Azure Synapse will represent these columns as `string`.
- * SQL serverless pools in Azure Synapse will represent these columns as `varchar(8000)`.
+ * Spark pools in Azure Synapse represent these columns as `string`.
+ * SQL serverless pools in Azure Synapse represent these columns as `varchar(8000)`.
* Properties with `UNIQUEIDENTIFIER (guid)` types are represented as `string` in analytical store and should be converted to `VARCHAR` in **SQL** or to `string` in **Spark** for correct visualization.
-* SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. Please consider this information when designing your data architecture and modeling your transactional data.
+* SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. It is a good practice to consider this information in your transactional data architecture and modeling.
* If you rename a property, in one or many documents, it will be considered a new column. If you execute the same rename in all documents in the collection, all data will be migrated to the new column and the old column will be represented with `NULL` values. ### Schema representation
-There are two types of schema representation in the analytical store. These types define the schema representation method for all containers in the database account and have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas.
+There are two methods of schema representation in the analytical store, valid for all containers in the database account. They have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas.
* Well-defined schema representation, default option for API for NoSQL and Gremlin accounts. * Full fidelity schema representation, default option for API for MongoDB accounts.
The well-defined schema representation creates a simple tabular representation o
* The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are: * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store.
- * From `float` to `integer`. All documents will be represented in analytical store.
- * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.
+ * From `float` to `integer`. All documents are represented in analytical store.
+ * From `integer` to `float`. All documents are represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where the initial value of **num** was an integer and the second one was a float.
```SQL SELECT CAST (num as float) as num
WITH (num varchar(100)) AS [IntToFloat]
> If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items won't be included in the analytical store. * Expect different behavior in regard to different types in well-defined schema:
- * Spark pools in Azure Synapse will represent these values as `undefined`.
- * SQL serverless pools in Azure Synapse will represent these values as `NULL`.
+ * Spark pools in Azure Synapse represent these values as `undefined`.
+ * SQL serverless pools in Azure Synapse represent these values as `NULL`.
* Expect different behavior in regard to explicit `NULL` values:
- * Spark pools in Azure Synapse will read these values as `0` (zero). And it will change to `undefined` as soon as the column has a non-null value.
- * SQL serverless pools in Azure Synapse will read these values as `NULL`.
+ * Spark pools in Azure Synapse read these values as `0` (zero), and as `undefined` as soon as the column has a non-null value.
+ * SQL serverless pools in Azure Synapse read these values as `NULL`.
* Expect different behavior in regard to missing columns:
- * Spark pools in Azure Synapse will represent these columns as `undefined`.
- * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
+ * Spark pools in Azure Synapse represent these columns as `undefined`.
+ * SQL serverless pools in Azure Synapse represent these columns as `NULL`.
##### Representation challenges workarounds

It is possible that an old document, with an incorrect schema, was used to create your container's analytical store base schema. Based on all the rules presented above, you may be receiving `NULL` for certain properties when querying your analytical store using Azure Synapse Link. Deleting or updating the problematic documents won't help because base schema reset isn't currently supported. The possible solutions are:

 * To migrate the data to a new container, making sure that all documents have the correct schema.
- * To abandon the property with the wrong schema and add a new one, with another name, that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it, instead of the original property.
+ * To abandon the property with the wrong schema and add a new one with another name that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it, instead of the original property.
#### Full fidelity schema representation
the MongoDB `_id` field is fundamental to every collection in MongoDB and origin
###### Working with the MongoDB `_id` field in Spark
-```Python
-import org.apache.spark.sql.types._
-val simpleSchema = StructType(Array(
-    StructField("_id", StructType(Array(StructField("objectId",BinaryType,true)) ),true),
-    StructField("id", StringType, true)
-  ))
-
-df = spark.read.format("cosmos.olap")\
- .option("spark.synapse.linkedService", "<enter linked service name>")\
- .option("spark.cosmos.container", "<enter container name>")\
- .schema(simpleSchema)
- .load()
+The example below works on Spark 2.x and 3.x versions:
-df.select("id", "_id.objectId").show()
-```
+```Scala
+import org.apache.spark.sql.functions.{col, udf}
+
+val df = spark.read.format("cosmos.olap").option("spark.synapse.linkedService", "xxxx").option("spark.cosmos.container", "xxxx").load()
-> [!NOTE]
-> This workaround was designed to work with Spark 2.4.
+val convertObjectId = udf((bytes: Array[Byte]) => {
+ val builder = new StringBuilder
+
+ for (b <- bytes) {
+ builder.append(String.format("%02x", Byte.box(b)))
+ }
+ builder.toString
+}
+ )
+
+val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("convertedObjectId", convertObjectId(col("_id.objectId"))).select("id", "objectId", "convertedObjectId")
+display(dfConverted)
+```
###### Working with the MongoDB `_id` field in SQL
It's possible to use full fidelity Schema for API for NoSQL accounts, instead of
* Currently, if you enable Synapse Link in your NoSQL API account using the Azure Portal, it will be enabled as well-defined schema. * Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level.
-* Currently Azure Cosmso DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
+* Currently, Azure Cosmos DB for MongoDB doesn't support changing the schema representation. All MongoDB accounts have the full fidelity schema representation type.
* It's not possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
-* Currently, containers schema in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account.
+* Currently, container schemas in analytical store are defined when the container is created, even if Synapse Link hasn't been enabled in the database account.
* Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema. * Containers or graphs created after Synapse Link was enabled with full fidelity schema at account level will have full fidelity schema.
After the analytical store is enabled, based on the data retention needs of the
Analytical store relies on Azure Storage and offers the following protection against physical failure:
- * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts.
- * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in ZRS.
+ * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
+ * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
+
+For more information about Azure Storage durability, see [Azure Storage redundancy](https://learn.microsoft.com/azure/storage/common/storage-redundancy).
## Backup
Synapse Link, and analytical store by consequence, has different compatibility l
* Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account. * Currently Continuous backup mode and Synapse Link aren't supported in the same database account. Customers have to choose one of these two features and this decision can't be changed.
-### Backup Polices
+### Backup policies
There are two possible backup policies. To understand how to use them, the following details about Azure Cosmos DB backups are very important:
If you want to delete the original container but don't want to lose its analytic
It's important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data, and export it to any Azure Data service, at no RUs costs, we can't guarantee the use of this snapshot to back feed the transactional store. This process isn't supported.
-## Global Distribution
+## Global distribution
If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions of that account. Any changes to operational data are globally replicated in all regions. You can run analytical queries effectively against the nearest regional copy of your data in Azure Cosmos DB.
In order to get a high-level cost estimate to enable analytical store on an Azur
Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator since they are a function of your analytical workload. But as a high-level estimate, scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, and results in a cost of $0.065. As an example, if you use Azure Synapse serverless SQL pools to perform this scan of 1 TB, it will cost $5.00 according to [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/). The final total cost for this 1 TB scan would be $5.065.
-While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
+While the above estimate is for scanning 1 TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
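As a quick sanity check of the arithmetic above, the following sketch reproduces the example estimate using only the figures quoted in this section.

```python
# Worked example using the figures quoted above for a full scan of 1 TB.
analytical_read_ops = 130_000        # approximate read operations to scan 1 TB of analytical store
analytical_read_cost = 0.065         # USD, as quoted above for those operations
synapse_serverless_scan_cost = 5.00  # USD per 1 TB scanned with Azure Synapse serverless SQL pools

total_cost = analytical_read_cost + synapse_serverless_scan_cost
print(f"Estimated cost for the 1 TB scan: ${total_cost:.3f}")  # $5.065
```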
## Next steps
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk
Currently the point in time restore functionality has the following limitations:
-* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. API for Cassandra isn't supported now.
+* Azure Cosmos DB APIs for SQL, MongoDB, Gremlin, and Table are supported for continuous backup. API for Cassandra isn't supported now.
* Multi-region write accounts aren't supported.
cosmos-db Get Started Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md
Previously updated : 03/23/2023 Last updated : 04/18/2023
-# Get started with change data capture in the analytical store for Azure Cosmos DB
+# Get started with change data capture in the analytical store for Azure Cosmos DB (Preview)
[!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)] Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to [Azure Data Factory](../data-factory/index.yml) or [Azure Synapse Analytics](../synapse-analytics/index.yml) to capture specific changes to your data. +
+> [!NOTE]
+> Please note that the linked service interface for Azure Cosmos DB for MongoDB API is not available on Dataflow yet. However, you can use your account's document endpoint with the "Azure Cosmos DB for NoSQL" linked service interface as a workaround until the Mongo linked service is supported. On a NoSQL linked service, choose "Enter Manually" to provide the Cosmos DB account info and use the account's document endpoint (for example, `https://[your-database-account-uri].documents.azure.com:443/`) instead of the MongoDB endpoint (for example, `mongodb://[your-database-account-uri].mongo.cosmos.azure.com:10255/`).
+ ## Prerequisites - An existing Azure Cosmos DB account.
Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to
First, enable Azure Synapse Link at the account level and then enable analytical store for the containers that's appropriate for your workload.
-1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link) |
+1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link)
-1. Enable analytical store for your container\[s\]:
+1. Enable analytical store for your containers:
| Option | Guide | | | |
Now create and configure a source to flow data from the Azure Cosmos DB account'
| Batchsize in bytes | Specify the size in bytes if you would like to batch the change data capture feeds | | Extra Configs | Extra Azure Cosmos DB analytical store configs and their values. (ex: `spark.cosmos.allowWhiteSpaceInFieldNames -> true`) |
+### Working with source options
+
+When you check any of the `Capture intermediate updates`, `Capture Deletes`, and `Capture Transactional store TTLs` options, your CDC process creates and populates the `__usr_opType` field in the sink with the following values:
+
+| Value | Description | Option |
+| --- | --- | --- |
+| 1 | UPDATE | Capture Intermediate updates |
+| 2 | INSERT | There isn't an option for inserts, it's on by default |
+| 3 | USER_DELETE | Capture Deletes |
+| 4 | TTL_DELETE | Capture Transactional store TTLs|
+
+If you have to differentiate the TTL deleted records from documents deleted by users or applications, you have to check both the `Capture intermediate updates` and `Capture Transactional store TTLs` options. Then you have to adapt your CDC processes, applications, or queries to use `__usr_opType` according to what's necessary for your business needs.
+
+
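If it helps to see the marker in action, here's a minimal PySpark sketch (not part of the article) that splits CDC output landed in the sink by `__usr_opType`. The storage path is a placeholder, and the sketch assumes the sink wrote Parquet files.

```python
# Minimal PySpark sketch: split CDC output in the sink by the __usr_opType marker.
# The path is a placeholder; it assumes the sink wrote Parquet files to Blob Storage/ADLS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
cdc_df = spark.read.parquet("abfss://<container>@<account>.dfs.core.windows.net/cdc-output/")

updates      = cdc_df.filter(F.col("__usr_opType") == 1)  # intermediate updates
inserts      = cdc_df.filter(F.col("__usr_opType") == 2)  # inserts (captured by default)
user_deletes = cdc_df.filter(F.col("__usr_opType") == 3)  # deletes by users or applications
ttl_deletes  = cdc_df.filter(F.col("__usr_opType") == 4)  # deletes caused by transactional store TTL

print(user_deletes.count(), ttl_deletes.count())
```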
## Create and configure sink settings for update and delete operations First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml) sink and then configure the sink to filter data to only specific operations.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service of
- **Role Based Access Control**: With Azure Cosmos DB for MongoDB, you can assign granular roles and permissions to users to control access to your data and audit user actions- all using native Azure tooling. -- **Flexible single-field indexes**: Unlike single field indexes in MongoDB Atlas, [single field indexes in Cosmos DB for MongoDB](indexing.md) cover multi-field filter queries. There is no need to create compound indexes for each multi-field filter query. This increases developer productivity.- - **In-depth monitoring capabilities**: Cosmos DB for MongoDB integrates natively with [Azure Monitor](../../azure-monitor/overview.md) to provide in-depth monitoring capabilities. ## How Cosmos DB for MongoDB works
Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementat
Cosmos DB for MongoDB is compatible with the following MongoDB server versions: -- [Version 5.0 (limited preview)](../access-previews.md)
+- [Version 5.0 (vCore preview)](./vcore/quickstart-portal.md)
- [Version 4.2](feature-support-42.md) - [Version 4.0](feature-support-40.md) - [Version 3.6](feature-support-36.md)
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
be scaled down (decreased).
### Storage size
-Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
-available storage options and IOPS calculation [above](resources-compute.md)
-for node and cluster sizes.
+Up to 16 TiB of storage is supported on coordinator and worker nodes in multi-node configuration. Up to 2 TiB of storage is supported for single node configurations. See [the available storage options and IOPS calculation](resources-compute.md)
+for various node and cluster sizes.
## Compute
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Before restoring the account, install the [latest version of Azure PowerShell](/
### <a id="trigger-restore-ps"></a>Trigger a restore operation for API for NoSQL account
-The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, and timestamp:
+The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, PublicNetworkAccess and timestamp:
```azurepowershell
Restore-AzCosmosDBAccount `
-SourceDatabaseAccountName "SourceDatabaseAccountName" ` -RestoreTimestampInUtc "UTCTime" ` -Location "AzureRegionName"
+ -PublicNetworkAccess Disabled
```
Restore-AzCosmosDBAccount `
-SourceDatabaseAccountName "source-sql" ` -RestoreTimestampInUtc "2021-01-05T22:06:00" ` -Location "West US"
+ -PublicNetworkAccess Disabled
```
+If `PublicNetworkAccess` isn't set, the restored account is accessible from the public network. To disable public network access for the restored account, make sure you pass `Disabled` to the `PublicNetworkAccess` option.
+> [!NOTE]
+> For restoring with public network access disabled, you need to install the preview version of the Az.CosmosDB PowerShell module by executing `Install-Module -Name Az.CosmosDB -AllowPrerelease`. You also need PowerShell version 5.1.
+>
**Example 2:** Restoring specific collections and databases. This example restores the collections *MyCol1*, *MyCol2* from *MyDB1* and the entire database *MyDB2*, which includes all the containers under it.

```azurepowershell
The simplest way to trigger a restore is by issuing the restore command with nam
--restore-timestamp 2020-07-13T16:03:41+0000 \ --resource-group MyResourceGroup \ --location "West US"
+ --public-network-access Disabled
```
+If `public-network-access` isn't set, the restored account is accessible from the public network. To disable public network access for the restored account, make sure you pass `Disabled` to the `public-network-access` option.
+
+> [!NOTE]
+> For restoring with public network access disabled, you need to install version 0.23.0 of the cosmosdb-preview CLI extension by executing `az extension update --name cosmosdb-preview`. You also need version 2.17.1 of the Azure CLI.
+ #### Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
Previously updated : 02/02/2023 Last updated : 04/18/2023 # Azure savings plan recommendations
Finally, we present a differentiated set of one-year and three-year recommendati
To account for scenarios where there were significant reductions in your usage, including recently decommissioned services, we run more simulations using only the last three days of usage. The lower of the three-day and 30-day recommendations is highlighted, even in situations where the 30-day recommendation may appear to provide greater savings. The lower recommendation is to ensure that we don't encourage overcommitment based on stale data.
-Recommendations are refreshed several times a day. However, it may take up to five days for the newly purchased savings plans and reservations to begin to be reflected in recommendations.
+Note the following points:
+
+- Recommendations are refreshed several times a day.
+- The recommended quantity for a scope is reduced on the same day that you purchase a savings plan for the scope. However, an update for the savings plan recommendation across scopes can take up to 25 days.
+ - For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to 25 days to adjust down.
## Recommendations in Azure Advisor
The minimum hourly commitment must be at least equal to the outstanding amount d
As part of the trade-in, the outstanding commitment is automatically included in your new savings plan. We do it by dividing the outstanding commitment by the number of hours in the term of the new savings plan (for example, 24 times the term length in days), and by making that value the minimum hourly commitment you can make as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan.
-If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan will be used to cover usage of eligible resources.
+If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan is used to cover usage of eligible resources.
The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment:
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
You will need an instance of Azure Data Factory to implement this walk through.
If you have not provisioned your Azure-SSIS IR already, provision it by following instructions in the [tutorial](./tutorial-deploy-ssis-packages-azure.md). ## Create and schedule ADF pipelines that start and or stop Azure-SSIS IR
+> [!NOTE]
+> This section is not supported for Azure-SSIS in **Azure Synapse** with [data exfiltration protection](/azure/synapse-analytics/security/workspace-data-exfiltration-protection) enabled.
+ This section shows you how to use Web activities in ADF pipelines to start/stop your Azure-SSIS IR on schedule or start & stop it on demand. We will guide you to create three pipelines: 1. The first pipeline contains a Web activity that starts your Azure-SSIS IR.
If you create a third trigger that is scheduled to run daily at midnight and ass
2. In the **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions: > [!NOTE]
- > For Azure-SSIS in Azure Synapse, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop).
+ > For Azure-SSIS in **Azure Synapse**, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop).
1. For **URL**, enter the following URL for REST API that starts Azure-SSIS IR, replacing `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/start?api-version=2018-06-01`. Alternatively, you can also copy & paste the resource ID of your IR from its monitoring page on ADF UI/app to replace the following part of the above URL: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}`
data-manager-for-agri Concepts Ingest Sensor Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md
Gateways collect all essential data from the nodes and push it securely to the c
In addition to the above approach, IoT devices (sensors/nodes/gateway) can directly push the data to the IoT Hub endpoint. In both cases, the data first reaches IoT Hub, after which the next set of processing happens.
->:::image type="content" source="./media/sensor-data-flow-new.png" alt-text="Screenshot showing sensor data flow.":::
## Sensor topology

The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each boundary under a party has a set of devices placed within it. A device can be either a node or a gateway, and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data.
->:::image type="content" source="./media/sensor-topology-new.png" alt-text="Screenshot showing sensor topology.":::
## Next steps
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
The agriculture industry is going through a significant technology transformatio
The solution framework is built on top of Data Manager for Agriculture, which provides extensibility capabilities. It enables our Independent Software Vendor (ISV) partners to apply their deep domain knowledge and develop specialized, domain-specific industry solutions on top of the core platform. The solution framework provides the following capabilities:
->:::image type="content" source="./media/solution-framework-isv-1.png" alt-text="Screenshot showing ISV solution framework.":::
* Enables ISV partners to easily build industry-specific solutions on top of Data Manager for Agriculture.
* Helps ISVs generate revenue by monetizing their solution and publishing it on the Azure Marketplace.
* Provides a simplified onboarding experience for ISV partners and customers.
data-manager-for-agri Concepts Understanding Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-understanding-throttling.md
+
+ Title: API throttling guidance for customers using Azure Data Manager for Agriculture.
+description: Provides information on API throttling limits to plan usage.
++++ Last updated : 04/18/2023+++
+# API throttling guidance for Azure Data Manager for Agriculture.
+
+API throttling in Azure Data Manager for Agriculture provides more consistent performance within a time span for customers calling our service APIs. Throttling limits the number of requests to our service in a time span to prevent overuse of resources. Azure Data Manager for Agriculture is designed to handle a high volume of requests; if a few customers generate an overwhelming number of requests, throttling helps maintain optimal performance and reliability for all customers.
+
+Throttling limits vary based on the product type and the capabilities being used. Currently we offer two versions: standard and basic (for your proof-of-concept needs).
+
+## DPS API limits
+
+Throttling category | Units available per Standard version| Units available per Basic version |
+|:|:|:|
+Per Minute | 25,000 | 25,000 |
+Per 5 Minutes| 100,000| 100,000 |
+Per Month| 25,000,000| 5,000,000|
+
+### Maximum requests allowed per type for standard version
+API Type| Per minute| Per 5 minutes| Per month|
+|:|:|:|:|
+PUT |5,000 |20,000 |5,000,000
+PATCH |5,000 |20,000 |5,000,000
+POST |5,000 |20,000 |5,000,000
+DELETE |5,000 |20,000 |5,000,000
+GET (single object) |25,000 |100,000 |25,000,000
+LIST with paginated response |25,000 results |100,000 results |25,000,000 results
+
+### Maximum requests allowed per type for basic version
+API Type| Per minute| Per 5 minutes| Per month|
+|:|:|:|:|
+PUT |5,000 |20,000 |1,000,000
+PATCH |5,000 |20,000 |1,000,000
+POST |5,000 |20,000 |1,000,000
+DELETE |5,000 |20,000 |1,000,000
+GET (single object) |25,000 |100,000 |5,000,000
+LIST with paginated response |25,000 results |100,000 results |5,000,000 results
+
+### Throttling cost by API type
+API Type| Cost per request|
+|:|::|
+PUT |5
+PATCH |5
+POST |5
+DELETE |5
+GET (single object) |1
+GET Sensor Events |1 + 0.01 per result
+LIST with paginated response |1 per request + 1 per result
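+
+As a hypothetical illustration of how these costs add up against the unit pools above, the short Python sketch below computes the units consumed by a made-up mix of calls (the call counts are assumptions, not service data).

```python
# Hypothetical cost calculation using the per-request costs from the table above.
COST = {"PUT": 5, "PATCH": 5, "POST": 5, "DELETE": 5, "GET": 1}

def list_cost(results: int) -> float:
    return 1 + results          # LIST: 1 per request + 1 per result

def sensor_events_cost(results: int) -> float:
    return 1 + 0.01 * results   # GET Sensor Events: 1 + 0.01 per result

# Example workload in one minute: 100 PUTs plus one LIST returning 500 results.
units = 100 * COST["PUT"] + list_cost(500)
print(units)                    # 1001 units, well within the 25,000 per-minute pool
```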
+
+## Job creation limits per instance of our service
+The maximum queue size for each job type is 10,000.
+
+### Total units available
+Throttling category| Units available per Standard version| Units available per Basic version|
+|:|:|:|
+Per 5 Minutes |1,000 |1,000
+Per Month |1,000,000 |200,000
++
+### Maximum create job requests allowed for standard version
+Job Type| Per 5 mins| Per month|
+|:|:|:|
+Cascade delete| 1,000| 500,000
+Satellite| 1,000| 500,000
+Model inference| 200| 100,000
+Farm Operation| 200| 100,000
+Rasterize| 500| 250,000
+Weather| 500| 250,000
++
+### Maximum create job requests allowed for basic version
+Job Type| Per 5 mins| Per month
+|:|:|:|
+Cascade delete| 1,000| 100,000
+Satellite| 1,000| 100,000
+Model inference| 200| 20,000
+Farm Operation| 200| 20,000
+Rasterize| 500| 50,000
+Weather| 500| 50,000
+
+### Sensor events limits
+Our sensor job supports ingestion of up to 100,000 events per hour.
+
+## Error code
+When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned.
+
+After waiting for the specified time, you can also close and reopen your connection to Azure Data Manager for Agriculture.
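+
+The following minimal Python sketch (an illustration, not official client guidance) shows one way to honor the **Retry-After** value; the endpoint URL and bearer token are placeholders you would supply.

```python
# Illustrative retry loop that honors the Retry-After header on HTTP 429.
import time
import requests

def call_with_retry(url: str, token: str, max_attempts: int = 5) -> requests.Response:
    headers = {"Authorization": f"Bearer {token}"}
    for _ in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Sleep for the number of seconds the service asks for before retrying.
        wait_seconds = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait_seconds)
    raise RuntimeError("Request was still throttled after the maximum number of attempts.")
```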
+
+## Next steps
+* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
+* Understand our APIs [here](/rest/api/data-manager-for-agri).
data-manager-for-agri How To Set Up Isv Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md
Once you've installed an ISV solution from Azure portal, use this document to un
A high-level view of how you can create a new request and get responses from the ISV partner's solution:
->:::image type="content" source="./media/3p-solutions-new.png" alt-text="Screenshot showing access flow for ISV API.":::
* Step 1: You make an API call for a PUT request with the required parameters (for example, job ID and farm details).
* The Data Manager API receives this request and authenticates it. If the request is invalid, you'll get an error code back.
data-manager-for-agri How To Set Up Sensors Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md
To start using the on-boarded sensor partners, you need to give consent to the s
5. Now, look for `Davis Instruments WeatherLink Data Manager for Agriculture Connector` under the All Applications tab on the `App Registrations` page (illustrated with a generic partner in the image).
- >:::image type="content" source="./media/sensor-partners.png" alt-text="Screenshot showing the partners message.":::
+ :::image type="content" source="./media/sensor-partners.png" alt-text="Screenshot showing the partners message.":::
6. Copy the Application (client) ID for the specific partner app that you want to provide access to.
Log in to <a href="https://portal.azure.com" target=" blank">Azure portal</a> an
You'll find the IAM (Identity and Access Management) menu option on the left-hand side of the options pane, as shown in the image:
->:::image type="content" source="./media/role-assignment-1.png" alt-text="Screenshot showing role assignment.":::
Click **Add > Add role assignment**. This action opens a pane on the right side of the portal. Choose the role from the dropdown:
To complete the role assignment, do the following steps:
4. Click **Save** to assign the role.
->:::image type="content" source="./media/sensor-partner-role.png" alt-text="Screenshot showing app selection for authorization.":::
This step ensures that the sensor partner app has been granted access (based on the role assigned) to your Azure Data Manager for Agriculture resource.
data-manager-for-agri How To Set Up Sensors Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md
The below section of this document talks about the onboarding steps needed to in
Onboarding covers the steps required by both customers and partners to integrate with Data Manager for Agriculture and start receiving and sending sensor telemetry, respectively.
->:::image type="content" source="./media/sensor-partners-flow.png" alt-text="Screenshot showing sensor partners flow.":::
In the above figure, the blocks highlighted in white are the steps taken by a partner, and the ones highlighted in black are done by customers.
data-manager-for-agri How To Use Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-events.md
+
+ Title: Azure Data Manager for Agriculture events with Azure Event Grid.
+description: Learn about properties that are provided for Azure Data Manager for Agriculture events with Azure Event Grid.
++++ Last updated : 04/18/2023+++
+# Azure Data Manager for Agriculture Preview as Event Grid source
+
+This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see the [Azure Event Grid event schema](https://learn.microsoft.com/azure/event-grid/event-schema).
+
+## Prerequisites
+
+Complete the following prerequisites before you begin deploying the Events feature in Azure Data Manager for Agriculture.
+
+* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc)
+* [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../event-hubs/event-hubs-create.md)
+
+## Reacting to Data Manager for Agriculture events
+
+Data Manager for Agriculture events allow applications to react to creation, deletion and updating of resources. Data Manager for Agriculture events are pushed using <a href="https://azure.microsoft.com/services/event-grid/" target="_blank"> Azure Event Grid</a>.
+
+Azure Functions, Azure Logic Apps, or even your own HTTP listener can subscribe to these events. Azure Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering.
+
+Here are example scenarios for consuming events in our service:
+1. When downloading satellite or weather data or executing jobs, you can use events to respond to changes in job status. You can minimize long polling and decrease the number of API calls to the service. You can also get prompt notification of job completion. All our asynchronous ingestion jobs are capable of supporting events.
+
+> [!NOTE]
+> Events related to ISV solutions flow are not currently supported.
+
+2. If there are modifications to data-plane resources such as parties, fields, farms, and other similar elements, you can react to the changes and trigger workflows.
+
+## Filtering events
+You can filter Data Manager for Agriculture <a href="https://docs.microsoft.com/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber.
+
+For instance, for the PartyChanged event, to receive notifications for changes to a particular party with ID Party1234, you may use the subject filter "EndsWith" as shown:
+
+EndsWith: /Party1234
+
+The subject for this event is of the format:
+```"/parties/Party1234"```
+
+Subjects in an event schema provide 'starts with' and 'exact match' filters as well.
+
+Similarly, to filter the same event for a group of party IDs, use the advanced filter on the partyId field in the event data object. In a single subscription, you may add up to five advanced filters with a limit of 25 values for each key filtered.
+
+To learn more about how to apply filters, see <a href = "https://docs.microsoft.com/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. </a>
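+
+To make the filter semantics concrete, here's an illustrative Python sketch (not a service API) of the matching that a subject "EndsWith" filter and a partyId advanced filter perform on an incoming event; the sample event is modeled on the FarmChangedV2 payload.

```python
# Illustrative matching logic for a subject "EndsWith" filter plus a partyId
# advanced filter; real filtering is configured on the Event Grid subscription.
from typing import Iterable

def matches(event: dict, subject_suffix: str, allowed_party_ids: Iterable[str]) -> bool:
    subject_ok = event.get("subject", "").endswith(subject_suffix)
    party_ok = event.get("data", {}).get("partyId") in set(allowed_party_ids)
    return subject_ok and party_ok

event = {
    "eventType": "Microsoft.AgFoodPlatform.FarmChangedV2",
    "subject": "/parties/Party1234/farms/Farm01",
    "data": {"partyId": "Party1234", "id": "Farm01"},
}
print(matches(event, "/Farm01", ["Party1234", "Party5678"]))  # True
```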
+
+## Subscribing to events
+You can subscribe to Data Manager for Agriculture events by using the Azure portal or the Azure Resource Manager client. Each of these provides the user with a set of functionalities. Refer to the following resources to learn more about each method.
+
+<a href = "https://docs.microsoft.com/azure/event-grid/subscribe-through-portal#:~:text=Create%20event%20subscriptions%201%20Select%20All%20services.%202,event%20types%20option%20checked.%20...%20More%20items..." target = "_blank"> Subscribe to events using portal </a>
+
+<a href = "https://docs.microsoft.com/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a>
+
+## Practices for consuming events
+
+Applications that handle Data Manager for Agriculture events should follow a few recommended practices (a minimal handler sketch follows this list):
+
+* Check that the eventType is one you're prepared to process, and don't assume that all events you receive are the types you expect.
+* As messages can arrive out of order, use the modifiedDateTime and eTag fields to understand the order of events for any particular object.
+* Data Manager for Agriculture events guarantee at-least-once delivery to subscribers, which ensures that all messages are delivered. However, due to retries or availability of subscriptions, duplicate messages may occasionally occur. To learn more about message delivery and retry, see <a href = "https://docs.microsoft.com/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry</a>.
+* Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future.
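+
+Below is a minimal consumer sketch of these practices, assuming the event has already been deserialized into a Python dictionary (illustrative only, not a prescribed implementation).

```python
# Checks the event type, drops duplicate deliveries, and applies updates in
# modifiedDateTime order per resource, per the practices listed above.
from datetime import datetime

HANDLED_TYPES = {"Microsoft.AgFoodPlatform.FarmChangedV2"}
seen_event_ids = set()   # event ids already processed (dedup for at-least-once delivery)
latest_change = {}       # resource id -> modifiedDateTime of the last applied update

def handle(event: dict) -> None:
    if event.get("eventType") not in HANDLED_TYPES:
        return  # don't assume every received event is a type you expect
    if event["id"] in seen_event_ids:
        return  # duplicate delivery; safely ignore
    seen_event_ids.add(event["id"])

    data = event["data"]
    modified = datetime.fromisoformat(data["modifiedDateTime"].replace("Z", "+00:00"))
    resource_id = data["id"]
    if resource_id in latest_change and modified <= latest_change[resource_id]:
        return  # out-of-order or stale update; keep the newer state already applied
    latest_change[resource_id] = modified
    # ...apply the change to your own store here; ignore fields you don't understand.
```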
++
+### Available event types
+
+|Event Name | Description|
+|:--|:-|
+|Microsoft.AgFoodPlatform.PartyChanged|Published when a party is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.FarmChangedV2| Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.FieldChangedV2|Published when a Field is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a Seasonal Field is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a boundary is created/updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.CropChanged|Published when a Crop is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.CropProductChanged|Published when a Crop Product is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.SeasonChanged|Published when a Season is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2| Published when a satellite data ingestion job's status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2|Published when a weather data ingestion job's status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2| Published when Weather Data Refresher job status is changed.
+|Microsoft.AgFoodPlatform.SensorMappingChangedV2|Published when Sensor Mapping is changed
+|Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2|Published when Sensor Partner Integration is changed
+|Microsoft.AgFoodPlatform.DeviceDataModelChanged|Published when Device Data Model is changed
+|Microsoft.AgFoodPlatform.DeviceChanged|Published when Device is changed
+|Microsoft.AgFoodPlatform.SensorDataModelChanged|Published when Sensor Data Model is changed
+|Microsoft.AgFoodPlatform.SensorChanged|Published when Sensor is changed
+|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2| Published when a farm operations data ingestion job's status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.ApplicationDataChangedV2|Published when Application Data is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.HarvestDataChangedV2|Published when Harvesting Data is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.TillageDataChangedV2|Published when Tillage Data is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.PlantingDataChangedV2|Published when Planting Data is created /updated/deleted in an Azure Data Manager for Agriculture resource
+|Microsoft.AgFoodPlatform.AttachmentChangedV2|Published when an attachment is created/updated/deleted.
+|Microsoft.AgFoodPlatform.ZoneChangedV2|Published when a zone is created/updated/deleted.
+|Microsoft.AgFoodPlatform.ManagementZoneChangedV2|Published when a management zone is created/updated/deleted.
+|Microsoft.AgFoodPlatform.PrescriptionChangedV2|Published when a prescription is created/updated/deleted.
+|Microsoft.AgFoodPlatform.PrescriptionMapChangedV2|Published when a prescription map is created/updated/deleted.
+|Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2|Published when plant tissue analysis data is created/updated/deleted.
+|Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2|Published when nutrient analysis data is created/updated/deleted.
+|Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2|Published when an image processing rasterize job status changes, for example, job is created, has progressed or completed.
+|Microsoft.AgFoodPlatform.InsightChangedV2| Published when Insight is created/updated/deleted.
+|Microsoft.AgFoodPlatform.InsightAttachmentChangedV2| Published when Insight Attachment is created/updated/deleted.
+|Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2|Published when Biomass Model job status is changed
+|Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2|Published when Soil Moisture Model job status is changed
+|Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2|Published when Sensor Placement Model Job status is changed
++
+### Event properties
+
+Each Azure Data Manager for Agriculture event has two parts: the first part is common across events, and the second part, the data object, contains properties specific to each event.
+
+The part common across events is elaborated in the **Event Grid event schema** and has the following top-level data:
+
+|Property | Type| Description|
+|:--| :-| :-|
+topic| string| Full resource path to the event source. This field isn't writeable. Event Grid provides this value.
+subject| string| Publisher-defined path to the event subject.
+eventType | string| One of the registered event types for this event source.
+eventTime| string| The time the event is generated based on the provider's UTC time.
+| ID | string| Unique identifier for the event.
+data| object| Data object with properties specific to each event type.
+dataVersion| string| The schema version of the data object. The publisher defines the schema version.
+metadataVersion| string| The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value.
+
+For party, season, crop, and crop product changed events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+For farm events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+partyId| string| ID of the party it belongs to.
+
+For device data model and sensor data model events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+sensorPartnerId| string| ID associated with the sensorPartner.
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change which triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+For device events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+deviceDataModelId| string| ID associated with the deviceDataModel.
+integrationId| string| ID associated with the integration.
+sensorPartnerId| string| ID associated with the sensorPartner.
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+For sensor events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+sensorDataModelId| string| ID associated with the sensorDataModel.
+integrationId| string| ID associated with the integration.
+deviceId| string| ID associated with the device.
+sensorPartnerId| string| ID associated with the sensorPartner.
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+For sensor mapping events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+sensorId| string| ID associated with the sensor.
+partyId| string| ID associated with the party.
+boundaryId| string| ID associated with the boundary.
+sensorPartnerId| string| ID associated with the sensorPartner.
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+For sensor partner integration events, the data object contains the following properties:
+
+|Property | Type| Description|
+|:--| :-| :-|
+integrationId| string| ID associated with the integration.
+partyId| string| ID associated with the party.
+sensorPartnerId| string| ID associated with the sensorPartner.
+| ID | string| Unique ID of resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted
+properties| Object| It contains user-defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+name| string| Name to identify resource.
+
+Boundary events have the following data object:
+
+|Property |Type |Description |
+|:|:|:|
+| ID | string | User defined ID of boundary |
+|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. |
+|modifiedDateTime | string | Indicates the time at which the event was last modified. |
+|createdDateTime | string | Indicates the time at which the resource was created. |
+|status | string | Contains the user defined status of the object. |
+|eTag | string | Implements optimistic concurrency. |
+|partyId | string | ID of the party it belongs to. |
+|parentId | string | ID of the parent the boundary belongs to. |
+|parentType | string | Type of the parent the boundary belongs to. Applicable values are Field, SeasonalField, Zone, Prescription, PlantTissueAnalysis, ApplicationData, PlantingData, TillageData, HarvestData etc. |
+|description | string | Textual description of the resource. |
+|properties | string | It contains user-defined key-value pairs. |
+
+Seasonal field events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+ID | string| User defined ID of the seasonal field
+farmId| string| User defined ID of the farm that the seasonal field is associated with.
+partyId| string| ID of the party it belongs to.
+seasonId| string| User defined ID of the season that the seasonal field is associated with.
+fieldId| string| User defined ID of the field that the seasonal field is associated with.
+name| string| User defined name of the seasonal field.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+properties| Object| It contains user defined key-value pairs.
+modifiedDateTime|string| Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+
+Insight events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+modelId| string| ID of the associated model.|
+resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.|
+resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField, Boundary etc.|
+partyId| string| ID of the party it belongs to.|
+modelVersion| string| Version of the associated model.|
+ID | string| User defined ID of the resource.|
+status| string| Contains the status of the job. |
+actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. |
+modifiedDateTime| date-time| Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.|
+createdDateTime| date-time| Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.|
+eTag| string| Implements optimistic concurrency|
+description| string| Textual description of the resource.|
+name| string| User-defined name of the resource.|
+properties| object| A list of key-value pairs that describe the resource. Only string and numerical values are supported.|
+
+InsightAttachment events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+modelId| string| ID of the associated model.
+resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.
+resourceType| string | Name of the resource type.
+partyId| string| ID of the party it belongs to.
+insightId| string| ID associated with the insight resource.
+ID | string| User defined ID of the resource.
+status| string| Contains the status of the job.
+actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string| Implements optimistic concurrency
+description|string| Textual description of the resource.
+name| string| User-defined name of the resource.
+properties| object| A list of key-value pairs that describe the resource. Only string and numerical values are supported.
+
+Field events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+| ID | string| User defined ID of the field.
+farmId| string| User defined ID of the farm that the field is associated with.
+partyId| string| ID of the party it belongs to.
+name| string| User defined name of the field.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+properties| Object| It contains user defined key-value pairs.
+modifiedDateTime|string|Indicates the time at which the event was last modified.
+createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of the resource.
+
+ImageProcessingRasterizeJobStatusChanged event has the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+shapefileAttachmentId | string|User-defined ID of the associated shape file.
+partyId|string| Party ID for which job was created.
+| ID |string| Unique ID of the job.
+name| string| User-defined name of the job.
+status|string|Various states a job can be in. Applicable values are Waiting, Running, Succeeded, Failed, Canceled etc.
+isCancellationRequested| boolean|Flag that gets set when job cancellation is requested.
+description|string| Textual description of the job.
+message|string| Status message to capture more details of the job.
+lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+properties| Object| It contains user defined key-value pairs.
+
+SatelliteDataIngestionJobChanged, WeatherDataIngestionJobChanged, WeatherDataRefresherJobChanged, BiomassModelJobStatusChanged, SoilMoistureModelJobStatusChanged, and FarmOperationDataIngestionJobChanged events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+| ID |string| Unique ID of the job.
+name| string| User-defined name of the job.
+status|string|Various states a job can be in.
+isCancellationRequested| boolean|Flag that gets set when job cancellation is requested.
+description|string| Textual description of the job.
+partyId|string| Party ID for which job was created.
+message|string| Status message to capture more details of the job.
+lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+properties| Object| It contains user defined key-value pairs.
+
+Farm operations data events such as application data, harvesting data, planting data, and tillage data have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+| ID | string| Unique ID of resource.
+status| string| Contains the user defined status of the resource.
+partyId| string| ID of the party it belongs to.
+source| string| Message from Azure Data Manager for Agriculture giving details about the job.
+modifiedDateTime| string| Indicates the time at which the event was last modified
+createdDateTime| string| Indicates the time at which the resource was created
+eTag| string| Implements optimistic concurrency
+name| string| Name to identify resource.
+description| string| Textual description of the resource
+actionType| string|Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+properties| Object| It contains user defined key-value pairs.
++
+AttachmentChanged event has the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.
+resourceType| string | Name of the resource type.
+partyId| string| ID of the party it belongs to.
+| ID | string| User defined ID of the resource.
+status| string| Contains the status of the job.
+actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string| Implements optimistic concurrency
+description|string| Textual description of the resource
+name| string| User-defined name of the resource.
++
+ZoneChanged event has the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+managementZoneId| string | Management Zone ID associated with the zone.
+partyId| string | ID of the party it belongs to.
+| ID | string| User-defined ID of the zone.
+status| string| Contains the user defined status of the resource.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string| Implements optimistic concurrency
+description|string| Textual description of the resource
+name| string| User-defined name of the resource.
+properties| object| A list of key-value pairs that describe the resource. Only string and numerical values are supported.
+
+PrescriptionChanged event has the following data object:
+
+|Property | Type| Description|
+|:--| :-| :-|
+prescriptionMapId|string| User-defined ID of the associated prescription map.
+partyId| string| ID of the party it belongs to.
+| ID | string| User-defined ID of the prescription.
+actionType| string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted
+status| string| Contains the user-defined status of the prescription.
+properties| object| It contains user-defined key-value pairs.
+modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string| Implements optimistic concurrency
+description| string| Textual description of the resource
+name| string| User-defined name of the prescription.
+
+PrescriptionMapChanged and ManagementZoneChanged events have the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+|seasonId |string | User-defined ID of the associated season.
+|cropId |string | User-defined ID of the associated crop.
+|fieldId |string | User-defined ID of the associated field.
+|partyId |string| ID of the party it belongs to.
+| ID | string| User-defined ID of the resource.
+|actionType | string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime | date-time| Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime | date-time| Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string | Implements optimistic concurrency
+description | string| Textual description of the resource
+name| string | User-defined name of the prescription map.
+properties |object| It contains user-defined key-value pairs
+status| string | Status of the resource.
+
+PlantTissueAnalysisChanged event has the following data object:
+
+Property| Type| Description
+|:--| :-| :-|
+|seasonId|string|User-defined ID of the associated season.
+|cropId| string | User-defined ID of the associated crop.
+|cropProductId | string| Crop Product ID associated with the plant tissue analysis.
+|fieldId| string | User-defined ID of the associated field.
+|partyId| string | ID of the party it belongs to.
+| ID| string | User-defined ID of the resource.
+|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime| date-time | Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime| date-time | Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+eTag| string| Implements optimistic concurrency.
+description | string| Textual description of the resource.
+name| string| User-defined name of the resource.
+properties | object| It contains user-defined key-value pairs.
+status| string| Status of the resource.
+
+NutrientAnalysisChanged event has the following data object:
+
+|Property | Type| Description|
+|:--| :-| :-|
+parentId| string| ID of the parent the nutrient analysis belongs to.
+parentType| string| Type of the parent the nutrient analysis belongs to. Applicable value(s) are PlantTissueAnalysis.
+partyId| string| ID of the party it belongs to.
+| ID | string| User-defined ID of nutrient analysis.
+actionType| string| Indicates the change that is triggered during publishing of the event. Applicable values are Created, Updated, Deleted.
+properties| object| It contains user-defined key-value pairs.
+modifiedDateTime| date-time|Date-time when nutrient analysis was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
+createdDateTime|date-time|Date-time when nutrient analysis was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
+status| string| Contains user-defined status of the nutrient analysis.
+eTag| string| Implements optimistic concurrency.
+description| string| Textual description of resource.
+name| string| User-defined name of the nutrient analysis.
++
+## Sample events
+For sample events, see the [sample events](./sample-events.md) page.
+
+## Next steps
+* For an introduction to Azure Event Grid, see [What is Event Grid?](../event-grid/overview.md)
+* Test our APIs [here](/rest/api/data-manager-for-agri).
data-manager-for-agri How To Use Nutrient Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md
Analyzing the nutrient composition of the crop is vital to ensure a good harvest.
## Tissue sample model

Here's how we have modeled tissue analysis in Azure Data Manager for Agriculture:
->:::image type="content" source="./media/schema-1.png" alt-text="Screenshot showing entity relationships.":::
* Step 1: Create a **plant tissue analysis** resource for every sample you get tested.
* Step 2: For each nutrient that is being tested, create a nutrient analysis resource with the plant tissue analysis created in step 1 as its parent.
data-manager-for-agri Overview Azure Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md
Azure Data Manager for Agriculture helps reduce data engineering investments thr
## Our key features
->:::image type="content" source="./media/about-data-manager.png" alt-text="Screenshot showing key features.":::
* Ingest, store, and manage farm data: Connectors for satellite, weather forecast, farm operations, and sensor data, along with an extensibility framework, help ingest your farm data.
* Run apps on your farm data: Use REST APIs to power your apps.
data-manager-for-agri Sample Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/sample-events.md
+
+ Title: Sample events for Microsoft Azure Data Manager for Agriculture Preview based on Azure Event Grid
+description: This article provides samples of Azure Data Manager for Agriculture Preview events.
++++ Last updated : 04/18/2023++
+# Azure Data Manager for Agriculture sample events
+This article provides samples of Azure Data Manager for Agriculture events. To learn more about the event properties provided with Azure Event Grid, see our [how to use events](./how-to-use-events.md) page.
+
+The event samples given on this page represent event notifications.
+
+1. **Event type: Microsoft.AgFoodPlatform.PartyChanged**
+
+````json
+ {
+ "data": {
+ "actionType": "Deleted",
+ "modifiedDateTime": "2022-10-17T18:43:37Z",
+ "eTag": "f700fdd7-0000-0700-0000-634da2550000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "<YOUR-PARTY-ID>",
+ "createdDateTime": "2022-10-17T18:43:30Z"
+ },
+ "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/<YOUR-PARTY-ID>",
+ "eventType": "Microsoft.AgFoodPlatform.PartyChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-10-17T18:43:37.3306735Z"
+ }
+````
+
+ 2. **Event type: Microsoft.AgFoodPlatform.FarmChangedV2**
+````json
+ {
+ "data": {
+ "partyId": "<YOUR-PARTY-ID>",
+ "actionType": "Updated",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T09:20:27Z",
+ "eTag": "99017a4e-0000-0700-0000-6368cddb0000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "<YOUR-FARM-ID>",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-03-26T12:51:24Z"
+ },
+ "id": "v2-796c89b6-306a-420b-be8f-4cd69df038f6",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/<YOUR-PARTY-ID>/farms/<YOUR-FARM-ID>",
+ "eventType": "Microsoft.AgFoodPlatform.FarmChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:20:27.5307566Z"
+ }
+````
+
+ 3. **Event type: Microsoft.AgFoodPlatform.FieldChangedV2**
+
+````json
+ {
+ "data": {
+ "farmId": "<YOUR-FARM-ID>",
+ "partyId": "<YOUR-PARTY-ID>",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T10:44:17Z",
+ "eTag": "af00eaf0-0000-0700-0000-6360f8810000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "<YOUR-FIELD-ID>",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:44:17Z"
+ },
+ "id": "v2-b80e0977-5aeb-47c9-be7b-d6555e1c44f1",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/<YOUR-PARTY-ID>/fields/<YOUR-FIELD-ID>",
+ "eventType": "Microsoft.AgFoodPlatform.FieldChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:44:17.162118Z"
+ }
+ ````
+
+
+
+ 4. **Event type: Microsoft.AgFoodPlatform.CropChanged**
+
+````json
+ {
+ "data": {
+ "actionType": "Created",
+ "status": "Sample status",
+ "modifiedDateTime": "2021-03-05T11:03:48Z",
+ "eTag": "8601c4e5-0000-0700-0000-604210150000",
+ "id": "<YOUR-CROP-ID>",
+ "name": "Display name",
+ "description": "Sample description",
+ "createdDateTime": "2021-03-05T11:03:48Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ }
+ },
+ "id": "4c59a797-b76d-48ec-8915-ceff58628f35",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/crops/<YOUR-CROP-ID>",
+ "eventType": "Microsoft.AgFoodPlatform.CropChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-03-05T11:03:49.0590658Z"
+ }
+ ````
+
+ 5. **Event type: Microsoft.AgFoodPlatform.CropProductChanged**
+
+````json
+ {
+ "data": {
+ "actionType": "Deleted",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T10:41:06Z",
+ "eTag": "59055238-0000-0700-0000-6360f7080000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "amcp",
+ "name": "stridfng",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:34:54Z"
+ },
+ "id": "v2-a94f4e12-edca-4720-940f-f9d61755d8e2",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/cropProducts/amcp",
+ "eventType": "Microsoft.AgFoodPlatform.CropProductChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:41:06.6942143Z"
+ }
+````
+
+ 6. **Event type: Microsoft.AgFoodPlatform.BoundaryChangedV2**
+
+````json
+ {
+ "data": {
+ "parentType": "Field",
+ "partyId": "amparty",
+ "actionType": "Created",
+ "modifiedDateTime": "2022-11-01T10:48:14Z",
+ "eTag": "af005dfc-0000-0700-0000-6360f96e0000",
+ "id": "amb",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:48:14Z"
+ },
+ "id": "v2-25fd01cf-72d4-401d-92ee-146de348e815",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/amparty/boundaries/amb",
+ "eventType": "Microsoft.AgFoodPlatform.BoundaryChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:48:14.2385557Z"
+ }
+ ````
+
+ 7. **Event type: Microsoft.AgFoodPlatform.SeasonChanged**
+````json
+ {
+ "data": {
+ "actionType": "Created",
+ "status": "Sample status",
+ "modifiedDateTime": "2021-03-05T11:18:38Z",
+ "eTag": "86019afd-0000-0700-0000-6042138e0000",
+ "id": "UNIQUE-SEASON-ID",
+ "name": "Display name",
+ "description": "Sample description",
+ "createdDateTime": "2021-03-05T11:18:38Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ }
+ },
+ "id": "63989475-397b-4b92-8160-8743bf8e5804",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
+ "subject": "/seasons/UNIQUE-SEASON-ID",
+ "eventType": "Microsoft.AgFoodPlatform.SeasonChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-03-05T11:18:38.5804699Z"
+ }
+ ````
+ 8. **Event type: Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2**
+```json
+ {
+ "data": {
+ "partyId": "contoso-partyId",
+ "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for boundary 'contoso-boundary' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).",
+ "status": "Running",
+ "lastActionDateTime": "2022-11-07T09:35:23.3141004Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "sat-ingestion-job-1",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:35:15.8064528Z"
+ },
+ "id": "v2-3cab067b-4227-44c3-bea8-86e1e6d6968d",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/satelliteDataIngestionJobs/sat-ingestion-job-1",
+ "eventType": "Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:35:23.3141452Z"
+ }
+```
+ 9. **Event type: Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2**
+```json
+ {
+ "data": {
+ "partyId": "partyId1",
+ "message": "Weather data available from '11/25/2020 00:00:00' to '11/30/2020 00:00:00'.",
+ "status": "Succeeded",
+ "lastActionDateTime": "2022-11-01T10:40:58.4472391Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "newIjJk",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:40:45.9408927Z"
+ },
+ "id": "0c1507dc-1fe6-4ad5-b2f4-680f3b12b7cf",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/partyId1/weatherDataIngestionJobs/newIjJk",
+ "eventType": "Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:40:58.4472961Z"
+ }
+```
+ 10. **Event type: Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2**
+```json
+{
+ "data": {
+ "message": "Weather data refreshed successfully at '11/01/2022 10:45:57'.",
+ "status": "Waiting",
+ "lastActionDateTime": "2022-11-01T10:45:57.5966716Z",
+ "isCancellationRequested": false,
+ "id": "IBM.TWC~33.00~-9.00~currents-on-demand",
+ "createdDateTime": "2022-11-01T10:39:34.2024298Z"
+ },
+ "id": "dff85442-3b9c-4fb0-95da-bda66c994e73",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/weatherDataRefresherJobs/IBM.TWC~33.00~-9.00~currents-on-demand",
+ "eventType": "Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:45:57.596714Z"
+ }
+```
+
+ 11. **Event type: Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2**
+```json
+{
+ "data": {
+ "partyId": "party-contoso",
+ "message": "Created job 'ay-1nov' to fetch farm operation data for party id 'party-contoso'.",
+ "status": "Running",
+ "lastActionDateTime": "2022-11-01T10:36:58.4373839Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "ay-1nov",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:36:54.322847Z"
+ },
+ "id": "fa759285-9737-4636-ae47-8cffe8506986",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party-contoso/farmOperationDataIngestionJobs/ay-1nov",
+ "eventType": "Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:36:58.4379601Z"
+ }
+```
+ 12. **Event type: Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2**
+```json
+{
+ "data": {
+ "partyId": "party1",
+ "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for boundary 'boundary1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).",
+ "status": "Waiting",
+ "lastActionDateTime": "0001-01-01T00:00:00Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "job-biomass-13sdqwd",
+ "name": "biomass",
+ "description": "biomass is awesome",
+ "createdDateTime": "2022-11-07T15:16:28.3177868Z"
+ },
+ "id": "v2-bbb378f8-91cf-4005-8d1b-fe071d606459",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party1/biomassModelJobs/job-biomass-13sdqwd",
+ "eventType": "Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T15:16:28.6070116Z"
+ }
+```
+
+ 13. **Event type: Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2**
+```json
+ {
+ "data": {
+ "partyId": "party",
+ "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for boundary 'boundary' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).",
+ "status": "Waiting",
+ "lastActionDateTime": "0001-01-01T00:00:00Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "job-soilmoisture-sf332q",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T15:11:00.9484192Z"
+ },
+ "id": "v2-575d2196-63f2-44dc-b0f5-e5180b8475f1",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party/soilMoistureModelJobs/job-soilmoisture-sf332q",
+ "eventType": "Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T15:11:01.2957613Z"
+ }
+```
+
+ 14. **Event type: Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2**
+```json
+ {
+ "data": {
+ "partyId": "pjparty",
+ "message": "Satellite scenes are available only for '0' days, expected scenes for '133' days. Not all scenes are available, please trigger satellite job for the required date range.",
+ "status": "Running",
+ "lastActionDateTime": "2022-11-01T10:44:19Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "pjjob2",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:44:01Z"
+ },
+ "id": "5d3e0d75-b963-494e-956a-3690b16315ff",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/pjparty/sensorPlacementModelJobs/pjjob2",
+ "eventType": "Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:44:19Z"
+ }
+```
+
+ 15. **Event type: Microsoft.AgFoodPlatform.SeasonalFieldChangedV2**
+````json
+{
+ "data": {
+ "seasonId": "unique-season",
+ "fieldId": "unique-field",
+ "farmId": "unique-farm",
+ "partyId": "unique-party",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T07:40:30Z",
+ "eTag": "9601f7cc-0000-0700-0000-6368b66e0000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "unique-seasonalfield",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T07:40:30Z"
+ },
+ "id": "v2-8ac9fa0e-6750-4b9a-a62f-54fdeffb057a",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/unique-party/seasonalFields/unique",
+ "eventType": "Microsoft.AgFoodPlatform.SeasonalFieldChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T07:40:30.1368975Z"
+ }
+````
+
+ 16. **Event type: Microsoft.AgFoodPlatform.ZoneChangedV2**
+```json
+{
+ "data": {
+ "managementZoneId": "contoso-mz",
+ "partyId": "contoso-party",
+ "actionType": "Deleted",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T10:50:07Z",
+ "eTag": "5a058b39-0000-0700-0000-6360f9ae0000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-zone-5764",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:48:39Z"
+ },
+ "id": "110777ec-e74e-42dd-aa5c-23c72fd2b2bf",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-party/zones/contoso-zone-5764",
+ "eventType": "Microsoft.AgFoodPlatform.ZoneChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:50:07.586658Z"
+ }
+ ```
+ 17. **Event type: Microsoft.AgFoodPlatform.ManagementZoneChangedV2**
+```json
+{
+ "data": {
+ "seasonId": "season",
+ "cropId": "crop",
+ "fieldId": "contoso-field",
+ "partyId": "contoso-party",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T10:44:38Z",
+ "eTag": "af00b1f1-0000-0700-0000-6360f8960000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-mz",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:44:38Z"
+ },
+ "id": "0ac75094-ffd6-4dbf-847c-d9df03b630f4",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-party/managementZones/contoso-mz",
+ "eventType": "Microsoft.AgFoodPlatform.ManagementZoneChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:44:38.3458983Z"
+ }
+ ```
+
+ 18. **Event type: Microsoft.AgFoodPlatform.PrescriptionChangedV2**
+```json
+{
+ "data": {
+ "prescriptionMapId": "contoso-prescriptionmapid123",
+ "partyId": "contoso-partyId",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T09:06:30Z",
+ "eTag": "8f0745e8-0000-0700-0000-6368ca960000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-prescrptionid123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:06:30Z"
+ },
+ "id": "v2-f0c1df5d-db19-4bd9-adea-a0d38622d844",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/prescriptions/contoso-prescrptionid123",
+ "eventType": "Microsoft.AgFoodPlatform.PrescriptionChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:06:30.9331136Z"
+ }
+ ```
+
+ 19. **Event type: Microsoft.AgFoodPlatform.PrescriptionMapChangedV2**
+```json
+ {
+ "data": {
+ "seasonId": "contoso-season",
+ "cropId": "contoso-crop",
+ "fieldId": "contoso-field",
+ "partyId": "contoso-partyId",
+ "actionType": "Updated",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T09:04:09Z",
+ "eTag": "8f0722c1-0000-0700-0000-6368ca090000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-prescriptionmapid123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:01:25Z"
+ },
+ "id": "v2-625f09bd-c342-4af4-8ae9-0533fe36d8b5",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/prescriptionMaps/contoso-prescriptionmapid123",
+ "eventType": "Microsoft.AgFoodPlatform.PrescriptionMapChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:04:09.8937395Z"
+ }
+ ```
+ 20. **Event type: Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2**
+```json
+ {
+ "data": {
+ "fieldId": "contoso-field",
+ "cropId": "contoso-crop",
+ "cropProductId": "contoso-cropProduct",
+ "seasonId": "contoso-season",
+ "partyId": "contoso-partyId",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T09:10:12Z",
+ "eTag": "90078d29-0000-0700-0000-6368cb740000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-planttissueanalysis123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:10:12Z"
+ },
+ "id": "v2-1bcc9ef4-51a1-4192-bfbc-64deb3816583",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/plantTissueAnalyses/contoso-planttissueanalysis123",
+ "eventType": "Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:10:12.1008276Z"
+ }
+```
+ 21. **Event type: Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2**
+```json
+ {
+ "data": {
+ "parentId": "contoso-planttissueanalysis123",
+ "parentType": "PlantTissueAnalysis",
+ "partyId": "contoso-partyId",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-07T09:17:21Z",
+ "eTag": "9901583d-0000-0700-0000-6368cd220000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "nutrientAnalysis-123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:17:21Z"
+ },
+ "id": "v2-c6eb10eb-27be-480a-bdca-bd8fbef7cfe7",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/nutrientAnalyses/nutrientAnalysis-123",
+ "eventType": "Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:17:22.0694093Z"
+ }
+ ```
+
+ 22. **Event type: Microsoft.AgFoodPlatform.AttachmentChangedV2**
+```json
+ {
+ "data": {
+ "resourceId": "NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg",
+ "resourceType": "ApplicationData",
+ "partyId": "contoso-432623-party-6",
+ "actionType": "Updated",
+ "modifiedDateTime": "2022-10-17T18:56:23Z",
+ "eTag": "19004980-0000-0700-0000-634da55a0000",
+ "id": "NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg-AppliedRate-TIF",
+ "createdDateTime": "2022-06-08T15:03:00Z"
+ },
+ "id": "80542664-b16f-4b0c-9d7e-f453edede5e3",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-432623-party-6/attachments/NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg-AppliedRate-TIF",
+ "eventType": "Microsoft.AgFoodPlatform.AttachmentChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-10-17T18:56:23.4832442Z"
+ }
+ ```
+
+ 23. **Event type: Microsoft.AgFoodPlatform.InsightChangedV2**
+```json
+ {
+ "data": {
+ "modelId": "Microsoft.SoilMoisture",
+ "resourceType": "Boundary",
+ "resourceId": "boundary",
+ "modelVersion": "1.0",
+ "partyId": "party",
+ "actionType": "Updated",
+ "modifiedDateTime": "2022-11-03T18:21:24Z",
+ "eTag": "04011838-0000-0700-0000-636406a40000",
+ "properties": {
+ "SYSTEM-SENSORDATAMODELID": "pra-sm",
+ "SYSTEM-INFERENCESTARTDATETIME": "2022-05-01T00:00:00Z",
+ "SYSTEM-SENSORPARTNERID": "SensorPartner",
+ "SYSTEM-SATELLITEPROVIDER": "Microsoft",
+ "SYSTEM-SATELLITESOURCE": "Sentinel_2_L2A",
+ "SYSTEM-IMAGERESOLUTION": 10,
+ "SYSTEM-IMAGEFORMAT": "TIF"
+ },
+ "id": "02e96e5e-852b-f895-af1e-c6da309ae345",
+ "createdDateTime": "2022-07-06T09:06:57Z"
+ },
+ "id": "v2-475358e4-3c8a-4a05-a22c-9fa4da6effc7",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party/insights/02e96e5e-852b-f895-af1e-c6da309ae345",
+ "eventType": "Microsoft.AgFoodPlatform.InsightChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T18:21:24.7502452Z"
+ }
+ ```
+
+ 24. **Event type: Microsoft.AgFoodPlatform.InsightAttachmentChangedV2**
+```json
+ {
+ "data": {
+ "insightId": "f5c2071c-c7ce-05f3-be4d-952a26f2490a",
+ "modelId": "Microsoft.SoilMoisture",
+ "resourceType": "Boundary",
+ "resourceId": "boundary",
+ "partyId": "party",
+ "actionType": "Updated",
+ "modifiedDateTime": "2022-11-03T18:21:26Z",
+ "eTag": "5d06cc22-0000-0700-0000-636406a60000",
+ "id": "f5c2071c-c7ce-05f3-be4d-952a26f2490a-soilMoisture",
+ "createdDateTime": "2022-07-06T09:07:00Z"
+ },
+ "id": "v2-46881f59-fd5c-48ed-a71f-342c04c75d1f",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party/insightAttachments/f5c2071c-c7ce-05f3-be4d-952a26f2490a-soilMoisture",
+ "eventType": "Microsoft.AgFoodPlatform.InsightAttachmentChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T18:21:26.9501924Z"
+ }
+ ```
+
+ 25. **Event type: Microsoft.AgFoodPlatform.ApplicationDataChangedV2**
+```json
+{
+ "data": {
+ "actionType": "Created",
+ "partyId": "contoso-partyId",
+ "status": "string",
+ "source": "string",
+ "modifiedDateTime": "2022-11-07T09:23:07Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "eTag": "91072b09-0000-0700-0000-6368ce7b0000",
+ "id": "applicationData-123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:23:07Z"
+ },
+ "id": "v2-2d849164-a773-4926-bcd3-b3884bad5076",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/applicationData/applicationData-123",
+ "eventType": "Microsoft.AgFoodPlatform.ApplicationDataChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:23:07.078703Z"
+ }
+ ```
+
+ 26. **Event type: Microsoft.AgFoodPlatform.HarvestDataChangedV2**
+```json
+ {
+ "data": {
+ "actionType": "Created",
+ "partyId": "contoso-partyId",
+ "status": "string",
+ "source": "string",
+ "modifiedDateTime": "2022-11-07T09:29:39Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "eTag": "9901037e-0000-0700-0000-6368d0030000",
+ "id": "harvestData-123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:29:39Z"
+ },
+ "id": "v2-bd4c9d63-17f2-4c61-8583-a64e064f06d6",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/harvestData/harvestData-123",
+ "eventType": "Microsoft.AgFoodPlatform.HarvestDataChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:29:39.3967693Z"
+ }
+ ```
+
+ 27. **Event type: Microsoft.AgFoodPlatform.TillageDataChangedV2**
+```json
+ {
+ "data": {
+ "actionType": "Created",
+ "partyId": "contoso-partyId",
+ "status": "string",
+ "source": "string",
+ "modifiedDateTime": "2022-11-07T09:32:00Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "eTag": "9107eb95-0000-0700-0000-6368d0900000",
+ "id": "tillageData-123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:32:00Z"
+ },
+ "id": "v2-75b58a0f-00b9-4c73-9733-4caab2343686",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/tillageData/tillageData-123",
+ "eventType": "Microsoft.AgFoodPlatform.TillageDataChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:32:00.7745737Z"
+ }
+ ```
+
+ 28. **Event type: Microsoft.AgFoodPlatform.PlantingDataChangedV2**
+```json
+ {
+ "data": {
+ "actionType": "Created",
+ "partyId": "contoso-partyId",
+ "status": "string",
+ "source": "string",
+ "modifiedDateTime": "2022-11-07T09:13:27Z",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "eTag": "90073465-0000-0700-0000-6368cc370000",
+ "id": "contoso-plantingdata123",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-07T09:13:27Z"
+ },
+ "id": "v2-1b55076b-d989-4831-81e4-ff8b469dc5f8",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/contoso-partyId/plantingData/contoso-plantingdata123",
+ "eventType": "Microsoft.AgFoodPlatform.PlantingDataChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-07T09:13:27.9490317Z"
+ }
+ ```
+
+ 29. **Event type: Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2**
+```json
+ {
+ "data": {
+ "shapefileAttachmentId": "attachment-contoso",
+ "partyId": "party-contoso",
+ "message": "Created job 'contoso-nov1-2' to rasterize shapefile attachment with id 'attachment-contoso'.",
+ "status": "Running",
+ "lastActionDateTime": "2022-11-01T10:44:44.8186582Z",
+ "isCancellationRequested": false,
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "contoso-nov1-2",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T10:44:39.3098984Z"
+ },
+ "id": "0ad2d5e6-1277-4880-adb6-bf0a621ad59b",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/parties/party-contoso/imageProcessingRasterizeJobs/contoso-nov1-2",
+ "eventType": "Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T10:44:44.8203668Z"
+ }
+ ```
+
+ 30. **Event type: Microsoft.AgFoodPlatform.DeviceDataModelChanged**
+```json
+ {
+ "data": {
+ "sensorPartnerId": "partnerId",
+ "actionType": "Created",
+ "modifiedDateTime": "2022-11-03T03:37:42Z",
+ "eTag": "e50094f2-0000-0700-0000-636337860000",
+ "id": "synthetics-02a465da-0c85-40cf-b7a8-64e15baae3c4",
+ "createdDateTime": "2022-11-03T03:37:42Z"
+ },
+ "id": "40ba84c3-b8f4-497d-8d44-1b8df6eb3b7c",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/partnerId/deviceDataModels/synthetics-02a465da-0c85-40cf-b7a8-64e15baae3c4",
+ "eventType": "Microsoft.AgFoodPlatform.DeviceDataModelChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T03:37:42.4536218Z"
+ }
+ ```
+
+ 31. **Event type: Microsoft.AgFoodPlatform.DeviceChanged**
+```json
+ {
+ "data": {
+ "deviceDataModelId": "test-ddm1",
+ "integrationId": "ContosoID",
+ "sensorPartnerId": "SensorPartner",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T11:29:01Z",
+ "eTag": "b0000a6f-0000-0700-0000-636102fe0000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "dddd1",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T11:29:01Z"
+ },
+ "id": "15ab45c7-0f04-4db3-b982-87380b3c1ba4",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/SensorPartner/devices/dddd1",
+ "eventType": "Microsoft.AgFoodPlatform.DeviceChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T11:29:02.0578111Z"
+ }
+ ```
+
+ 32. **Event type: Microsoft.AgFoodPlatform.SensorDataModelChanged**
+```json
+ {
+ "data": {
+ "sensorPartnerId": "partnerId",
+ "actionType": "Deleted",
+ "modifiedDateTime": "2022-11-03T03:38:11Z",
+ "eTag": "e50099f2-0000-0700-0000-636337860000",
+ "id": "4fb0214a-459c-47b8-8564-b822f263ae12",
+ "createdDateTime": "2022-11-03T03:37:42Z"
+ },
+ "id": "54fdb552-b5db-45c0-be49-8f4f27f27bde",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/partnerId/sensorDataModels/4fb0214a-459c-47b8-8564-b822f263ae12",
+ "eventType": "Microsoft.AgFoodPlatform.SensorDataModelChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T03:38:11.7538559Z"
+ }
+ ```
+
+ 33. **Event type: Microsoft.AgFoodPlatform.SensorChanged**
+```json
+ {
+ "data": {
+ "sensorDataModelId": "4fb0214a-459c-47b8-8564-b822f263ae12",
+ "integrationId": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45",
+ "sensorPartnerId": "partnerId",
+ "actionType": "Deleted",
+ "modifiedDateTime": "2022-11-03T03:38:09Z",
+ "eTag": "13063e1e-0000-0700-0000-636337970000",
+ "properties": {
+ "key-a": "value-a"
+ },
+ "id": "ec1ed9c6-f476-448a-ab07-65e0d71e34d5",
+ "createdDateTime": "2022-11-03T03:37:59Z"
+ },
+ "id": "b3a0f169-6d28-4e57-b570-6068446b50b4",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/partnerId/sensors/ec1ed9c6-f476-448a-ab07-65e0d71e34d5",
+ "eventType": "Microsoft.AgFoodPlatform.SensorChanged",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T03:38:09.7932361Z"
+ }
+ ```
+
+ 34. **Event type: Microsoft.AgFoodPlatform.SensorMappingChangedV2**
+```json
+ {
+ "data": {
+ "sensorId": "sensor",
+ "partyId": "ContosopartyId",
+ "boundaryId": "ContosoBoundary",
+ "sensorPartnerId": "sensorpartner",
+ "actionType": "Created",
+ "status": "string",
+ "modifiedDateTime": "2022-11-01T11:08:33Z",
+ "eTag": "b000ff36-0000-0700-0000-6360fe310000",
+ "properties": {
+ "key1": "value1",
+ "key2": 123.45
+ },
+ "id": "sensormapping",
+ "name": "string",
+ "description": "string",
+ "createdDateTime": "2022-11-01T11:08:33Z"
+ },
+ "id": "c532ff5c-bfa0-4644-a0bc-14f736ebc07d",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/sensorpartner/sensorMappings/sensormapping",
+ "eventType": "Microsoft.AgFoodPlatform.SensorMappingChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-01T11:08:33.3345312Z"
+ }
+ ```
+
+ 35. **Event type: Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2**
+```json
+ {
+ "data": {
+ "integrationId": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45",
+ "sensorPartnerId": "partnerId",
+ "actionType": "Deleted",
+ "modifiedDateTime": "2022-11-03T03:38:10Z",
+ "eTag": "e5009cf2-0000-0700-0000-636337870000",
+ "id": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45",
+ "createdDateTime": "2022-11-03T03:37:42Z"
+ },
+ "id": "v2-3e6b1527-7f67-4c7d-b26e-1000a6a97612",
+ "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}",
+ "subject": "/sensorPartners/partnerId/integrations/159ce4e5-878f-4fc7-9bae-16eaf65bfb45",
+ "eventType": "Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-11-03T03:38:10.9531838Z"
+ }
+ ```
+## Next steps
+* For an introduction to Azure Event Grid, see [What is Event Grid?](../event-grid/overview.md)
+* Test our APIs [here](/rest/api/data-manager-for-agri).
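The event samples above all share the same Event Grid envelope: `eventType`, `subject`, `eventTime`, and a `data` object whose fields vary by event type. As an illustration only, and not part of this article, the following sketch assumes one of the sample payloads is saved locally as `event.json` (a hypothetical file name) and uses the `jq` tool to pull out the fields a handler would typically route on. For the `*ChangedV2` events the query returns `actionType`; for the job status events it falls back to `status`.

```bash
# Sketch only: extract routing fields from a saved AgFoodPlatform Event Grid payload.
# event.json is a placeholder file containing one of the sample events shown above.
jq '{eventType, subject, partyId: .data.partyId, action: (.data.actionType // .data.status)}' event.json
```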
defender-for-iot Tutorial Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-agent-based-solution.md
There are no resources to clean up.
## Next steps > [!div class="nextstepaction"]
-> [Investigate security recommendations](tutorial-investigate-security-recommendations.md)
+> [Investigate security recommendations](tutorial-investigate-security-recommendations.md)
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
To use this command:
- Verify that the certificate file you want to import is readable on the appliance. Upload certificate files to the appliance using tools such as WinSCP or Wget. - Confirm with your IT office that the appliance domain as it appears in the certificate is correct for your DNS server and the corresponding IP address.
-For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md).
+For more information, see [Prepare CA-signed certificates](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md).
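As a hedged illustration only (the user name, IP address, and path below are placeholder assumptions, not values documented for the appliance), one way to stage a certificate file on the appliance with a standard transfer tool and confirm it's readable before importing it:

```bash
# Sketch only: copy a CA-signed certificate to the appliance and verify it's readable.
# "admin", <appliance-ip>, and /tmp/appliance.crt are placeholders for illustration.
scp ./appliance.crt admin@<appliance-ip>:/tmp/appliance.crt
ssh admin@<appliance-ip> 'test -r /tmp/appliance.crt && echo "certificate file is readable"'
```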
|User |Command |Full command syntax | ||||
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|Brand / Vendor |Protocols | ||| |**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension)<br> CNCP<br> RNRP<br> ABB IAC<br> ABB Totalflow |
-|**Samsung** | Samsung TV |
|**ASHRAE** | BACnet<br> BACnet BACapp<br> BACnet BVLC | |**Beckhoff** | AMS (ADS)<br> Twincat | |**Cisco** | CAPWAP Control<br> CAPWAP Data<br> CDP<br> LWAPP |
OT network sensors can detect the following protocols when identifying assets an
|**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC | |**Emerson Fischer** | ROC | |**Eurocontrol** | ASTERIX |
-|**GE** | Bentley Nevada (System 1 / BN3500)<br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> SRTP (GE)<br> GE_CMP |
+|**GE** | Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVle) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVle) <br> SRTP (GE)<br> GE_CMP |
|**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery | |**IEC** | Codesys V3<br>IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-101 (encapsulated serial)<br> IEC 60870-5-103 (encapsulated serial)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC 61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
OT network sensors can detect the following protocols when identifying assets an
|**Omron** | FINS | |**OPC** | UA | |**Oracle** | TDS<br> TNS |
-|**Rockwell Automation** | ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above |
+|**Rockwell Automation** | CSP2<br> ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above |
+|**Samsung** | Samsung TV |
|**Schneider Electric** | Modbus/TCP<br> Modbus TCP–Schneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA | |**Schneider Electric / Invensys** | Foxboro Evo<br> Foxboro I/A<br> Trident<br> TriGP<br> TriStation | |**Schneider Electric / Modicon** | Modbus RTU | |**Schneider Electric / Wonderware** | Wonderware Suitelink |
-|**Siemens** | CAMP<br> PCS7<br> PCS7 WinCC – Historian<br> Profinet DCP<br> Profinet Realtime<br> Siemens PHD<br> Siemens S7<br> Siemens S7-Plus<br> Siemens SICAM<br> Siemens WinCC |
+|**Siemens** | CAMP<br> PCS7<br> PCS7 WinCC – Historian<br> Profinet DCP<br> Profinet I/O<br> Profinet Realtime<br> Siemens PHD<br> Siemens S7<br> Siemens S7 - Firmware and model extraction<br> Siemens S7 – key state<br> Siemens S7-Plus<br> Siemens SICAM<br> Siemens WinCC |
|**Toshiba** |Toshiba Computer Link | |**Yokogawa** | Centum ODEQ (Centum / ProSafe DCS)<br> HIS Equalize<br> FA-M3<br> Vnet/IP |
Enterprise IoT network sensors can detect the following protocols when identifyi
Asset vendors, partners, or platform owners can use Defender for IoT's Horizon Protocol SDK to secure any OT protocol used in IoT and ICS environments that isn't already supported by default.
-Horizon helps you to write plugins for OT sensors that enable Deep Packet Inspection (DPI) on the traffic and detect threats in realtime. Customize your plugins localize and customize text for alerts, events, and protocol parameters.
+Horizon helps you write plugins for OT sensors that enable Deep Packet Inspection (DPI) on the traffic and detect threats in real time. Customize your plugins to localize and customize text for alerts, events, and protocol parameters.
Horizon provides:
defender-for-iot Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-windows-endpoint-monitoring.md
If you'll be using a non-admin account to run your WEM scans, this procedure is
For more information, see:
+- [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)
- [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) - [View your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) - [Configure active monitoring for OT networks](configure-active-monitoring.md)
defender-for-iot Detect Windows Endpoints Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md
The script described in this article returns the following details about each de
- Installed programs - Last knowledge base update
-If an OT network sensor has already learned the device, running the script outlined in this article retrieves the device's information and enrichment data.
+If an OT network sensor has already detected the device, running the script outlined in this article retrieves the device's information and enrichment data.
## Prerequisites
The script described in this article is supported for the following Windows oper
- Windows 10 - Windows Server 2003/2008/2012/2016/2019
-## Run the script
+## Download and run the script
-This procedure describes how to obtain, deploy, and run the script on the Windows workstation and servers that you want to monitor in Defender for IoT.
+This procedure describes how to deploy and run a script on the Windows workstations and servers that you want to monitor in Defender for IoT.
-The script you run to detect enriched Windows data is run as a utility and not as an installed program. Running the script doesn't affect the endpoint.
+The script detects enriched Windows data and runs as a utility, not as an installed program. Running the script doesn't affect the endpoint. You can deploy the script once or on an ongoing basis, using standard automated deployment methods and tools.
-1. To acquire the script, [contact customer support](mailto:support.microsoft.com).
+1. Sign into your OT sensor console, and select **System Settings** > **Import Settings** > **Windows Information**.
+
+1. Select **Download script**. For example:
-1. Deploy the script once, or using ongoing automation, using standard automated deployment methods and tools.
+ :::image type="content" source="media/detect-windows-endpoints-script/download-wmi-script.png" alt-text="Screenshot of where to download WMI script." lightbox="media/detect-windows-endpoints-script/download-wmi-script.png":::
1. Copy the script to a local drive and unzip it. The following files appear:
The script you run to detect enriched Windows data is run as a utility and not a
1. Run the `run.bat` file.
- After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the system name, date, and time of the snapshot with the following syntax: `CX-snaphot_SystemName_Month_Year_Time`
+ After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the machine name and the current date and time of the snapshot with the following syntax: `cx_snapshot_[machinename]_[current date time]`.
-Files generated by the script:
+The files generated by the script:
- Remain on the local drive until you delete them. - Must remain in the same location. Don't separate the generated files.
Files generated by the script:
## Import device details
-After having run the script as described [earlier](#run-the-script), import the generated data to your sensor to view the device details in the **Device inventory**.
+After having run the script as described [earlier](#download-and-run-the-script), import the generated data to your sensor to view the device details in the **Device inventory**.
**To import device details to your sensor**:
After having run the script as described [earlier](#run-the-script), import the
1. Select **Import File**, and then select all the files (Ctrl+A).
-1. Select **Close**. The device registry information is imported and a successful confirmation message is shown.
+ :::image type="content" source="media/detect-windows-endpoints-script/import-wmi-script.png" alt-text="Screenshot of where to import WMI script." lightbox="media/detect-windows-endpoints-script/import-wmi-script.png":::
+
+## View devices applications report
+
+After [downloading and running](#download-and-run-the-script) the script and then [importing](#import-device-details) the generated data to your sensor, you can view the applications on your devices with a custom data mining report.
+
+**To view the applications on your devices:**
- If there's a problem uploading one of the files, you'll be informed which file upload failed.
+1. Sign into your OT sensor console, and select **Data mining**.
+
+1. Select **+ Create report** to [create a custom report](how-to-create-data-mining-queries.md#create-an-ot-sensor-custom-data-mining-report). In the **Choose Category** field, select **Devices Applications**. For example:
+
+ :::image type="content" source="media/detect-windows-endpoints-script/devices-applications-report.png" alt-text="Screenshot of creating devices applications custom report." lightbox="media/detect-windows-endpoints-script/devices-applications-report.png":::
+
+1. Your device applications report is shown in the **My reports** area.
+
+Based on this information, if the sensor is cloud-connected, the CVE list of applications installed on the Windows device is displayed in Azure.
## Next steps For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) and [Import extra data for detected OT devices](how-to-import-device-information.md).-
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
Change network configuration settings before or after you activate your sensor u
- **From the sensor UI**: [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration) - **From the sensor CLI**: [Network configuration](cli-ot-sensor.md#network-configuration)
-For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md).
+For more information, see [Activate and set up your OT network sensor](ot-deploy/activate-deploy-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md).
## How do I check the sanity of my deployment
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Before you start, make sure that you have:
- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md). -- A plan for your Defender for IoT deployment, such as any system requirements, [traffic mirroring](best-practices/traffic-mirroring-methods.md), any [SSL/TLS certificates](ot-deploy/create-ssl-certificates.md), and so on. For more information, see [Plan your OT monitoring system](best-practices/plan-corporate-monitoring.md).-
- If you want to use on-premises sensors, make sure that you have the [hardware appliances](ot-appliance-sizing.md) for those sensors and any administrative user permissions.
- ## Add a trial plan This procedure describes how to add a trial Defender for IoT plan for OT networks to an Azure subscription.
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
- Title: Activate and set up your on-premises management console
-description: Activating the management console ensures that sensors are registered with Azure and sending information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors.
Previously updated : 06/06/2022---
-# Activate and set up your on-premises management console
-
-Activation and setup of the on-premises management console ensures that:
--- Network devices that you're monitoring through connected sensors are registered with an Azure account.-- Sensors send information to the on-premises management console.-- The on-premises management console carries out management tasks on connected sensors.-- You've installed an SSL certificate.-
-## Sign in for the first time
-
-To sign in to the on-premises management console:
-
-1. Go to the IP address you received for the on-premises management console during the system installation.
-
-1. Enter the username and password you received for the on-premises management console during the system installation.
-
-If you forgot your password, select the **Recover Password** option.
-## Activate the on-premises management console
-
-After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforce the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
-
-**To activate the on-premises management console**:
-
-1. Sign in to the on-premises management console.
-
-1. In the alert notification at the top of the screen, select **Take Action**.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/take-action.png" alt-text="Screenshot that shows the Take Action link in the alert at the top of the screen.":::
-
-1. In the **Activation** pop-up screen, select **Azure portal**.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/azure-portal.png" alt-text="Screenshot that shows the Azure portal link in the pop-up message.":::
-
-1. Select a subscription to associate the on-premises management console to. Then select **Download on-premises management console activation file**. The activation file downloads.
-
- The on-premises management console can be associated to one or more subscriptions. The activation file is associated with all the selected subscriptions and the number of committed devices at the time of download.
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="Screenshot that shows selecting multiple subscriptions." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png":::
-
- If you haven't already onboarded Defender for IoT to a subscription, see [Onboard a Defender for IoT plan for OT networks](how-to-manage-subscriptions.md#onboard-a-defender-for-iot-plan-for-ot-networks).
-
- > [!Note]
- > If you delete a subscription, you must upload a new activation file to the on-premises management console that was affiliated with the deleted subscription.
-
-1. Go back to the **Activation** pop-up screen and select **CHOOSE FILE**.
-
-1. Select the downloaded file.
-
-After initial activation, the number of monitored devices might exceed the number of committed devices defined during onboarding. This issue occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning appears on the management console.
--
-If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console).
-
-### Activation expirations
-
-After activating an on-premises management console, you'll need to apply new activation files on both the on-premises management console and connected sensors as follows:
-
-|Location |Activation process |
-|||
-|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |
-|**Cloud-connected and locally managed sensors** | Cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. |
-
-For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
-
-### Activate expired licenses from versions earlier than 10.0
-
-For users with versions prior to 10.0, your license might expire and the following alert will appear:
--
-**To activate your license**:
-
-1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
-
-1. Supply support with your **Activation ID** number.
-
-1. Support will supply you with new license information in the form of a string of letters.
-
-1. Read the terms and conditions, and select the checkbox to approve.
-
-1. Paste the string into the space provided.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot that shows pasting the string into the box.":::
-
-1. Select **Activate**.
-
-## Set up a certificate
-
-After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
-
-Two levels of security are available:
--- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.-- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.* This option is enabled by default after installation.-
-The console supports the following types of certificates:
--- Private and Enterprise Key Infrastructure (private PKI)-- Public Key Infrastructure (public PKI)-- Locally generated on the appliance (locally self-signed)-
- > [!IMPORTANT]
- > We recommend that you don't use a self-signed certificate. The certificate isn't secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-
-To upload a certificate:
-
-1. When you're prompted after you sign in, define a certificate name.
-
-1. Upload the CRT and key files.
-
-1. Enter a passphrase and upload a PEM file if necessary.
-
-You might need to refresh your screen after you upload the CA-signed certificate.
-
-To disable validation between the management console and connected sensors:
-
-1. Select **Next**.
-
-1. Turn off the **Enable system-wide validation** toggle.
-
-For information about uploading a new certificate, supported certificate files, and related items, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
-
-## Connect sensors to the on-premises management console
-
-Ensure that sensors send information to the on-premises management console. Make sure that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. Use the following procedures to verify that you make an initial connection between sensors and the on-premises management console.
-
-Two options are available for connecting Microsoft Defender for IoT sensors to the on-premises management console:
--- [Connect from the sensor console](#connect-sensors-to-the-on-premises-management-console-from-the-sensor-console)-- [Connect sensors by using tunneling](#connect-sensors-by-using-tunneling)-
-After connecting, set up sites and zones and assign each sensor to a zone to [monitor detected data segmented separately](monitor-zero-trust.md).
-
-For more information, see [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md).
-
-### Connect sensors to the on-premises management console from the sensor console
-
-**To connect sensors to the on-premises management console from the sensor console**:
-
-1. In the on-premises management console, select **System Settings**.
-
-1. Copy the string in the **Copy Connection String** box.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Screenshot that shows copying the connection string for the sensor.":::
-
-1. On the sensor, go to **System Settings** > **Connection to Management Console**.
-
-1. Paste the copied connection string from the on-premises management console into the **Connection string** box.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/paste-connection-string.png" alt-text="Screenshot that shows pasting the copied connection string into the Connection string box.":::
-
-1. Select **Connect**.
-
-### Connect sensors by using tunneling
-
-Enhance system security by preventing direct user access to the sensor. Instead of direct access, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.
-
-Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor.
-
-For example, the following image shows a sample architecture where users access the sensor consoles via the on-premises management console.
--
-**To set up tunneling at the on-premises management console**:
-
-1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials and run the following command:
-
- ```bash
- sudo cyberx-management-tunnel-enable
-
- ```
-
- For more information on users, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-
-1. Allow a few minutes for the connection to start.
-
- When tunneling access is configured, the following URL syntax is used to access the sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>`
-
-You can also customize the port range to a number other than 9000. An example is 10000.
-
-**To use a new port**:
-
-Sign in to the on-premises management console and run the following command:
-
-```bash
-sudo cyberx-management-tunnel-enable --port 10000
-
-```
-
-**To disable the connection**:
-
-Sign in to the on-premises management console and run the following command:
-
-```bash
-cyberx-management-tunnel-disable
-
-```
-
-No configuration is needed on the sensor.
-
-**To access the tunneling log files**:
-
-1. **From the on-premises management console**: Sign in and go to */var/log/apache2.log*.
-1. **From the sensor**: Sign in and go to */var/cyberx/logs/tunnel.log*.
-
-## Next steps
--
-For more information, see:
--- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)-- [Manage individual sensors](how-to-manage-individual-sensors.md)-- [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md)
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
- Title: Activate and set up your sensor
-description: This article describes how to sign in and activate a sensor console.
Previously updated : 06/06/2022---
-# Activate and set up your sensor
-
-This article describes how to activate a sensor and perform initial setup.
-
-Administrator users carry out activation when signing in for the first time and when activation management is required. Setup ensures that the sensor is configured to optimally detect and alert.
-
-Security analysts and read-only users can't activate a sensor or generate a new password.
-
-## Sign in and activation for administrator users
-
-Administrators who sign in for the first time should verify that they have access to the activation and password recovery files for this sensor. These files were downloaded during sensor onboarding. If Administrators don't have these files, they can generate new ones via Defender for IoT in the Azure portal. The following Azure permissions are needed to generate the files:
--- Azure security administrator-- Subscription contributor-- Subscription owner permissions-
-### First-time sign in and activation checklist
-
-Before administrators sign in to the sensor console, administrator users should have access to:
--- The sensor IP address that was defined during the installation.--- User sign in credentials for the sensor. If you downloaded an ISO for the sensor, use the default credentials that you received during the installation. We recommend that you create a new *Administrator* user after activation.--- An initial password. If you purchased a preconfigured sensor from Arrow, you need to generate a password when signing in for the first time.--- The activation file associated with this sensor. The file was generated and downloaded during sensor onboarding by Defender for IoT.---- An SSL/TLS CA-signed certificate that your company requires.--
-### About activation files
-
-Your sensor was onboarded to Microsoft Defender for IoT in a specific management mode:
-
-| Mode type | Description |
-|--|--|
-| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel. You can also enable automatic threat intelligence updates. |
-| **Locally connected mode** | Information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console, if the sensor is connected to it. |
-
-A locally connected, or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor.
--
-### About certificates
-
-Following sensor installation, a local self-signed certificate is generated. The certificate is used to access the sensor console. After administrators sign in to the console for the first time, they're prompted to onboard an SSL/TLS certificate.
-
-Two levels of security are available:
--- Meet specific certificate and encryption requirements requested by your organization, by uploading the CA-signed certificate.-- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error appears in the console.* This option is enabled by default after installation. -
-The console supports the following certificate types:
--- Private and Enterprise Key Infrastructure (private PKI)--- Public Key Infrastructure (public PKI)--- Locally generated on the appliance (locally self-signed) -
- > [!IMPORTANT]
- > We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-
-### Sign in and activate the sensor
-
-**To sign in and activate:**
-
-1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign-in page.":::
--
-1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
--
-1. Select **Login/Next**. The **Sensor Network Settings** tab opens.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="Screenshot of the sensor network settings options when signing into the sensor.":::
-
-1. Use this tab if you want to change the sensor network configuration before activation. The configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined:
-
- - IP address
- - DNS
- - Default gateway
- - Subnet mask
- - Host name
-
- You might want to update this information before activating the sensor. For example, you might need to change the preconfigured parameters defined by Arrow. You can also define proxy settings before activating your sensor.
-
- If you want to work with a proxy, enable the proxy toggle and add the proxy host, port and username.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate-proxy.png" alt-text="Screenshot of the proxy options for signing in to a sensor.":::
-
-1. Select **Next.** The Activation tab opens.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-file.png" alt-text="Screenshot of a first time activation file upload option.":::
-
-1. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
-
-1. Approve the terms and conditions.
-
-1. Select **Activate**. The SSL/TLS certificate tab opens. Before defining certificates, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md).
-
- It is **not recommended** to use a locally generated certificate in a production environment.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-certificates-1.png" alt-text="Screenshot of the SSL/TLS Certificates page when signing in to a sensor.":::
-
-1. Enable the **Import trusted CA certificate (recommended)** toggle.
-1. Define a certificate name.
-1. Upload the Key, CRT, and PEM files.
-1. Enter a passphrase and upload a PEM file if necessary.
-1. It's recommended to select **Enable certificate validation** to validate the connections between management console and connected sensors.
-
-1. Select **Finish**.
-
-You might need to refresh your screen after uploading the CA-signed certificate.
-
-For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md).
-
-### Activation expirations
-
-After you've activated your sensor, cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active.
-
-If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor.
-
-For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
-
-### Activate an expired license (versions under 10.0)
-
-For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="Screenshot of a license expiration popup message.":::
-
-**To activate your license:**
-
-1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
-
-1. Supply support with your Activation ID number.
-
-1. Support will supply you with new license information in the form of a string of letters.
-
-1. Read the terms and conditions, and check the checkbox to approve.
-
-1. Paste the string into space provided.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot of the license activation box and button.":::
-
-1. Select **Activate**.
-
-### Subsequent sign ins
-
-After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file or certificate definition. You only need your sign-in credentials.
--
-After your sign-in, the Microsoft Defender for IoT sensor console opens.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot of the initial sensor console dashboard Overview page." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png":::
-
-## Initial setup and learning (for administrators)
-
-After your first sign-in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
-
-Initially this activity is carried out in the Learning mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
-
-### Review and update basic system settings
-
-Review the sensor's system settings to make sure the sensor is configured to optimally detect and alert.
-
-Define the sensor's system settings. For example:
--- Define ICS (or IoT) and segregated subnets.--- Define port aliases for site-specific protocols.--- Define VLANs and names that are in use.--- If DHCP is in use, define legitimate DHCP ranges.--- Define integration with Active Directory and mail server as appropriate.-
-### Disable Learning mode
-
-After adjusting the system settings, you can let the sensor run in Learning mode until you feel that system detections accurately reflect your network activity.
-
-The learning mode should run for about 2 to 6 weeks, depending on your network size and complexity. After you disable Learning mode, any activity that differs from your baseline activity will trigger an alert.
-
-**To disable learning mode:**
--- Select **System Settings**, **Network Monitoring,** **Detection Engines and Network Modeling** and disable the **Learning** toggle.-
-## First-time sign in for security analysts and read-only users
-
-Before you sign in, verify that you have:
--- The sensor IP address.-- Sign in credentials that your administrator provided.
-
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of the sensor sign-in page after the initial setup.":::
--
-## Console tools: Overview
-
-You can access console tools from the side menu. Tools help you:
-- Gain deep, comprehensive visibility into your network-- Analyze network risks, vulnerabilities, trends and statistics-- Set up your sensor for maximum performance-- Create and manage users -
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/main-page-side-bar.png" alt-text="Screenshot of the sensor console's main menu on the left.":::
-
-### Discover
-
-| Tools| Description |
-| --|--|
-| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |
-| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zooms, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) |
-| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).|
-| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).|
-
-### Analyze
-
-| Tools| Description |
-|||
-| Event timeline | View a timeline with information about alerts, network events, and user operations. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).|
-| Data mining | Generate comprehensive and granular information about your network's devices at various layers. For more information, see [Sensor data mining queries](how-to-create-data-mining-queries.md).|
-| Trends and Statistics | View trends and statistics about an extensive range of network traffic and activity. As a small example, display charts and graphs showing top traffic by port, connectivity drops by hours, S7 traffic by control function, number of devices per VLAN, SRTP errors by day, or Modbus traffic by function. For more information, see [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md).
-| Risk Assessment | Proactively address vulnerabilities, identify risks such as missing patches or unauthorized applications. Detect changes to device configurations, controller logic, and firmware. Prioritize fixes based on risk scoring and automated threat modeling. For more information, see [Risk assessment reporting](how-to-create-risk-assessment-reports.md#create-risk-assessment-reports).|
-| Attack Vector | Display a graphical representation of a vulnerability chain of exploitable devices. These vulnerabilities can give an attacker access to key network devices. The Attack Vector Simulator calculates attack vectors in real time and analyzes all attack vectors for a specific target. For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md#create-attack-vector-reports).|
-
-### Manage
-
-| Tools| Description |
-|||
-| System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. |
-| Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor). |
-| Users | Define users and roles with various access levels. For more information, see [Create and manage users on an OT network sensor](manage-users-sensor.md). |
-| Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. |
--
-**Support**
-
-| Tool| Description |
-|-||
-| Support | Contact [Microsoft Support](https://support.microsoft.com/) for help.|
-
-## Review system messages
-
-System messages provide general information about your sensor that may require your attention, for example if:
-- your sensor activation file has expired or will expire soon
-- your sensor isn't detecting traffic
-- your sensor SSL certificate has expired or will expire soon
-
-**To review system messages:**
-1. Sign in to the sensor.
-1. Select the **System Messages** icon (Bell icon).
--
-## Next steps
-
-For more information, see:
-- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)
-- [Manage sensor activation files](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
-- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
- Title: Deploy SSL/TLS certificates on OT appliances - Microsoft Defender for IoT.
-description: Learn how to deploy SSL/TLS certificates on Microsoft Defender for IoT OT network sensors and on-premises management consoles.
Previously updated : 01/05/2023---
-# Deploy SSL/TLS certificates on OT appliances
-
-This article describes how to create and deploy SSL/TLS certificates on OT network sensors and on-premises management consoles. Defender for IoT uses SSL/TLS certificates to secure communication between the following system components:
-- Between users and the OT sensor or on-premises management console UI
-- Between OT sensors and an on-premises management console, including [API communication](references-work-with-defender-for-iot-apis.md)
-- Between an on-premises management console and a high availability (HA) server, if configured
-- Between OT sensors or on-premises management consoles and partner servers defined in [alert forwarding rules](how-to-forward-alert-information-to-partners.md)
-You can deploy SSL/TLS certificates during initial configuration as well as later on.
-
-Defender for IoT validates certificates against the certificate expiration date and against a passphrase, if one is defined. Validations against a Certificate Revocation List (CRL) and the certificate trust chain are available as well, though not mandatory. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components.
-
-Each certificate authority (CA)-signed certificate must have both a `.key` file and a `.crt` file, which are uploaded to OT network sensors and on-premises management consoles after the first sign-in. While some organizations may also require a `.pem` file, a `.pem` file isn't required for Defender for IoT.
-
-Make sure to create a unique certificate for each OT sensor, on-premises management console, and HA server, where each certificate meets required parameter criteria.
-
-## Prerequisites
-
-To perform the procedures described in this article, make sure that:
-- You have a security, PKI, or certificate specialist available to oversee the certificate creation.
-- You can access the OT network sensor or on-premises management console as an **Admin** user.
- For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-
-## Deploy an SSL/TLS certificate
-
-Deploy your SSL/TLS certificate by importing it to your OT sensor or on-premises management console.
-
-Verify that your SSL/TLS certificate [meets the required parameters](#verify-certificate-file-parameter-requirements), and that you have [access to a CRL server](#verify-crl-server-access).
-
-### Deploy a certificate on an OT sensor
-
-1. Sign into your OT sensor and select **System settings** > **Basic** > **SSL/TLS certificate**.
-
-1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab:
-
- - **Import a trusted CA certificate (recommended)**
- - **Use Locally generated self-signed certificate (Not recommended)**
-
- # [Trusted CA certificates](#tab/import-trusted-ca-certificate)
-
- 1. Enter the following parameters:
-
- | Parameter | Description |
- |||
- | **Certificate Name** | Enter your certificate name. |
- | **Passphrase** - *Optional* | Enter a passphrase. |
- | **Private Key (KEY file)** | Upload a Private Key (KEY file). |
- | **Certificate (CRT file)** | Upload a Certificate (CRT file). |
- | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). |
-
- Select **Use CRL (Certificate Revocation List) to check certificate status** to validate the certificate against a [CRL server](#verify-crl-server-access). The certificate is checked once during the import process.
-
- For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/recommended-ssl.png":::
-
- # [Locally generated self-signed certificates](#tab/locally-generated-self-signed-certificate)
-
- > [!NOTE]
- > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment.
- > We recommend using self-signed certificates in test environments only.
- > The owner of the certificate cannot be validated and the security of your system cannot be maintained.
-
- Select **Confirm** to acknowledge the warning.
-
-
-
-1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**.
-
-1. Select **Save** to save your certificate settings.
-
-### Deploy a certificate on an on-premises management console
-
-1. Sign into your on-premises management console and select **System settings** > **SSL/TLS certificates**.
-
-1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab:
-
- - **Import a trusted CA certificate**
- - **Use Locally generated self-signed certificate (Insecure, not recommended)**
-
- # [Trusted CA certificates](#tab/cm-import-trusted-ca-certificate)
-
- 1. In the **SSL/TLS Certificates** dialog, select **Add Certificate**.
-
- 1. Enter the following parameters:
-
- | Parameter | Description |
- |||
- | **Certificate Name** | Enter your certificate name. |
- | **Passphrase** - *Optional* | Enter a passphrase. |
- | **Private Key (KEY file)** | Upload a Private Key (KEY file). |
- | **Certificate (CRT file)** | Upload a Certificate (CRT file). |
- | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). |
-
- For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/management-ssl-certificate.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/management-ssl-certificate.png":::
-
- # [Locally generated self-signed certificates](#tab/cm-locally-generated-self-signed-certificate)
-
- > [!NOTE]
- > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment.
- > We recommend using self-signed certificates in test environments only.
- > The owner of the certificate cannot be validated and the security of your system cannot be maintained.
-
- Select **I CONFIRM** to acknowledge the warning.
-
-
-
-1. Select the **Enable Certificate Validation** option to turn on system-wide validation for SSL/TLS certificates with the issuing [Certificate Authority](#create-ca-signed-ssltls-certificates) and [Certificate Revocation Lists](#verify-crl-server-access).
-
-1. Select **SAVE** to save your certificate settings.
-
-You can also [import the certificate to your OT sensor using CLI commands](references-work-with-defender-for-iot-cli-commands.md#tlsssl-certificate-commands).
-
-### Verify certificate file parameter requirements
-
-Verify that the certificates meet the following requirements:
--- **CRT file requirements**:-
- | Field | Requirement |
- |||
- | **Signature Algorithm** | SHA256RSA |
- | **Signature Hash Algorithm** | SHA256 |
- | **Valid from** | A valid past date |
- | **Valid To** | A valid future date |
- | **Public Key** | RSA 2048 bits (Minimum) or 4096 bits |
- | **CRL Distribution Point** | URL to a CRL server. If your organization doesn't [validate certificates against a CRL server](#verify-crl-server-access), remove this line from the certificate. |
- | **Subject CN (Common Name)** | domain name of the appliance, such as *sensor.contoso.com*, or *.contoso.com* |
- | **Subject (C)ountry** | Certificate country code, such as `US` |
- | **Subject (OU) Org Unit** | The organization's unit name, such as *Contoso Labs* |
- | **Subject (O)rganization** | The organization's name, such as *Contoso Inc.* |
-
- > [!IMPORTANT]
- > While certificates with other parameters might work, they aren't supported by Defender for IoT. Additionally, wildcard SSL certificates, which are public key certificates that can be used on multiple subdomains such as *.contoso.com*, are insecure and aren't supported.
- > Each appliance must use a unique CN.
-- **Key file requirements**: Use either RSA 2048 bits or 4096 bits. Using a key length of 4096 bits slows down the SSL handshake at the start of each connection and increases CPU usage during handshakes.

-- (Optional) Create a certificate chain, which is a `.pem` file that contains the certificates of all the certificate authorities in the chain of trust that led to your certificate. Certificate chain files support bag attributes.
-### Verify CRL server access
-
-If your organization validates certificates, your OT sensors and on-premises management console must be able to access the CRL server defined by the certificate. By default, the CRL server URL defined in the certificate is accessed over HTTP on port 80. However, some organizational security policies block access to this port.
-
-If your OT sensors and on-premises management consoles can't access your CRL server on port 80, you can use one of the following workarounds:
--- **Define another URL and port in the certificate**:-
-  - The URL you define must be configured as `http://` and not `https://`
- - Make sure that the destination CRL server can listen on the port you define
--- **Use a proxy server that can access the CRL on port 80**-
- For more information, see [Forward OT alert information](how-to-forward-alert-information-to-partners.md).
-
-If validation fails, communication between the relevant components is halted and a validation error is presented in the console.
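If you're not sure whether your appliances can reach the CRL server, you can test reachability from a workstation on the same network segment before uploading the certificate. The following is a minimal sketch that assumes `curl` is available; the URLs are placeholders and should be replaced with the CRL Distribution Point value from your own certificate.

```bash
# Hypothetical reachability check for the CRL Distribution Point defined in your certificate.
# Replace the URL (and port, if you've defined a non-default one) with your own values.
curl --verbose --output /dev/null "http://crl.contoso.com/pki/contoso-ca.crl"
curl --verbose --output /dev/null "http://crl.contoso.com:8080/pki/contoso-ca.crl"
```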
-
-## Create a certificate
-
-Create either a CA-signed SSL/TLS certificate or a self-signed SSL/TLS certificate (not recommended).
-
-### Create CA-signed SSL/TLS certificates
-
-Use a certificate management platform, such as an automated PKI management platform, to create a certificate. Verify that the certificate meets [certificate file requirements](#verify-certificate-file-parameter-requirements), and then [test the certificate](#test-your-ssltls-certificates) file you created when you're done.
-
-If you aren't carrying out certificate validation, remove the CRL URL reference in the certificate. For more information, see [certificate file requirements](#verify-certificate-file-parameter-requirements).
-
-Consult a security, PKI, or other qualified certificate lead if you don't have an application that can automatically create certificates.
-
-You can also convert existing certificate files if you don't want to create new ones.
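If you're generating the certificate request manually instead of through an automated PKI platform, the following sketch shows one way to produce a key and certificate signing request that align with the parameter requirements above. It assumes OpenSSL is available; all file names and subject values are placeholders.

```bash
# Minimal sketch (assumes OpenSSL): create a 2048-bit RSA key and a SHA-256 CSR for the appliance.
# File names and subject values are placeholders; adjust them to match your organization.
openssl genrsa -out sensor.contoso.com.key 2048
openssl req -new -sha256 \
  -key sensor.contoso.com.key \
  -out sensor.contoso.com.csr \
  -subj "/C=US/O=Contoso Inc./OU=Contoso Labs/CN=sensor.contoso.com"
# Submit the CSR to your CA, and then check the issued .crt against the requirements above.
```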
-
-### Create self-signed SSL/TLS certificates
-
-Create self-signed SSL/TLS certificates by first [downloading a security certificate](#download-a-security-certificate) from the OT sensor or on-premises management console and then exporting it to the required file types.
-
-> [!NOTE]
-> While you can use a locally-generated and self-signed certificate, we do not recommend this option.
-
-**Export as a certificate file:**
-
-After downloading the security certificate, use a certificate management platform to create the following types of SSL/TLS certificate files:
-
-| File type | Description |
-|||
-| **.crt – certificate container file** | A `.pem`, or `.der` file, with a different extension for support in Windows Explorer.|
-| **.key – Private key file** | A key file is in the same format as a `.pem` file, with a different extension for support in Windows Explorer.|
-| **.pem – certificate container file (optional)** | Optional. A text file with a Base64-encoding of the certificate text, and a plain-text header and footer to mark the beginning and end of the certificate. |
-
-For example:
-
-1. Open the downloaded certificate file and select the **Details** tab > **Copy to file** to run the **Certificate Export Wizard**.
-
-1. In the **Certificate Export Wizard**, select **Next** > **DER encoded binary X.509 (.CER)** > and then select **Next** again.
-
-1. In the **File to Export** screen, select **Browse**, choose a location to store the certificate, and then select **Next**.
-
-1. Select **Finish** to export the certificate.
-
-> [!NOTE]
-> You may need to convert existing file types to supported types.
-
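For example, if your exported certificate is DER-encoded and you need a PEM-encoded `.crt` file, a conversion along the following lines might help. This is a sketch only and assumes OpenSSL; file names are placeholders.

```bash
# Hypothetical conversion (assumes OpenSSL): turn a DER-encoded export into a PEM-encoded
# .crt file, and then review the result. File names are placeholders.
openssl x509 -inform der -in exported-certificate.cer -out sensor.crt
openssl x509 -in sensor.crt -text -noout
```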
-### Check your certificate against a sample
-
-Use the following sample certificate to compare to the certificate you've created, making sure that the same fields exist in the same order.
-
-``` Sample SSL certificate
-Bag Attributes: <No Attributes>
-subject=C = US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= sensor.contoso.com, E
-= support@contoso.com
-issuer C=US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= Cert-ssl-root-da2e22f7-24af-4398-be51-
-e4e11f006383, E = support@contoso.com
-----BEGIN CERTIFICATE-----
-MIIESDCCAZCgAwIBAgIIEZK00815Dp4wDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV
-BAYTAIVTMREwDwYDVQQIDAhJbGxpbm9pczEUMBIGA1UEBwwLU3ByaW5nZmllbGQx
-FDASBgNVBAoMCONvbnRvc28gTHRKMRUWEwYDVQQLDAXDb250b3NvIExhYnMxGzAZ
-BgNVBAMMEnNlbnNvci5jb250b3NvLmNvbTEIMCAGCSqGSIb3DQEJARYTc3VwcG9y
-dEBjb250b3NvLmNvbTAeFw0yMDEyMTcxODQwMzhaFw0yMjEyMTcxODQwMzhaMIGK
-MQswCQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxFDASBgNVBAcMC1Nwcmlu
-Z2ZpZWxkMRQwEgYDVQQKDAtDb250b3NvIEX0ZDEVMBMGA1UECwwMQ29udG9zbyBM
-YWJzMRswGQYDVQQDDBJzZW5zb3luY29udG9zby5jb20xljAgBgkqhkiG9w0BCQEW
-E3N1cHBvcnRAY29udG9zby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
-AoIBAQDRGXBNJSGJTfP/K5ThK8vGOPzh/N8AjFtLvQiiSfkJ4cxU/6d1hNFEMRYG
-GU+jY1Vknr0|A2nq7qPB1BVenW3 MwsuJZe Floo123rC5ekzZ7oe85Bww6+6eRbAT
-WyqpvGVVpfcsloDznBzfp5UM9SVI5UEybllod31MRR/LQUEIKLWILHLW0eR5pcLW
-pPLtOW7wsK60u+X3tqFo1AjzsNbXbEZ5pnVpCMqURKSNmxYpcrjnVCzyQA0C0eyq
-GXePs9PL5DXfHy1x4WBFTd98X83 pmh/vyydFtA+F/imUKMJ8iuOEWUtuDsaVSX0X
-kwv2+emz8CMDLsbWvUmo8Sg0OwfzAgMBAAGjfDB6MB0GA1UdDgQWBBQ27hu11E/w
-21Nx3dwjp0keRPuTsTAfBgNVHSMEGDAWgBQ27hu1lE/w21Nx3dwjp0keRPUTSTAM
-BgNVHRMEBTADAQH/MAsGA1UdDwQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAgYI
-KwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEBADLsn1ZXYsbGJLLzsGegYv7jmmLh
-nfBFQqucORSQ8tqb2CHFME7LnAMfzFGpYYV0h1RAR+1ZL1DVtm+IKGHdU9GLnuyv
-9x9hu7R4yBh3K99ILjX9H+KACvfDUehxR/ljvthoOZLalsqZIPnRD/ri/UtbpWtB
-cfvmYleYA/zq3xdk4vfOI0YTOW11qjNuBIHh0d5S5sn+VhhjHL/s3MFaScWOQU3G
-9ju6mQSo0R1F989aWd+44+8WhtOEjxBvr+17CLqHsmbCmqBI7qVnj5dHvkh0Bplw
-zhJp150DfUzXY+2sV7Uqnel9aEU2Hlc/63EnaoSrxx6TEYYT/rPKSYL+++8=
-----END CERTIFICATE-----
-```
-
-### Test your SSL/TLS certificates
-
-If you want to check the information within the certificate `.csr` file or private key file, use the following CLI commands:
-- **Check a Certificate Signing Request (CSR)**: Run `openssl req -text -noout -verify -in CSR.csr`
-- **Check a private key**: Run `openssl rsa -in privateKey.key -check`
-- **Check a certificate**: Run `openssl x509 -in certificate.crt -text -noout`
-If these tests fail, review [certificate file parameter requirements](#verify-certificate-file-parameter-requirements) to verify that your file parameters are accurate, or consult your certificate specialist.
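As an extra, optional check, you might also confirm that the certificate and private key belong together and that the certificate chains to your CA. The following sketch assumes OpenSSL; file names are placeholders.

```bash
# Optional extra checks (assumes OpenSSL; file names are placeholders).
# 1. The two hashes below should be identical if the certificate and key match.
openssl x509 -in certificate.crt -pubkey -noout | openssl sha256
openssl rsa -in privateKey.key -pubout | openssl sha256
# 2. Verify the certificate against the CA chain in the .pem file.
openssl verify -CAfile chain.pem certificate.crt
```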
-
-## Troubleshoot
-
-### Download a security certificate
-
-1. After [installing your OT sensor software](ot-deploy/install-software-ot-sensor.md) or [on-premises management console](ot-deploy/install-software-on-premises-management-console.md), go to the sensor's or on-premises management console's IP address in a browser.
-
-1. Select the :::image type="icon" source="media/how-to-deploy-certificates/warning-icon.png" border="false"::: **Not secure** alert in the address bar of your web browser, then select the **>** icon next to the warning message **"Your connection to this site isn't secure"**. For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/connection-is-not-secure.png" alt-text="Screenshot of web page with a Not secure warning in the address bar." lightbox="media/how-to-deploy-certificates/connection-is-not-secure.png":::
-
-1. Select the :::image type="icon" source="media/how-to-deploy-certificates/show-certificate-icon.png" border="false"::: **Show certificate** icon to view the security certificate for this website.
-
-1. In the **Certificate viewer** pane, select the **Details** tab, then select **Export** to save the file on your local machine.
-
-### Import a sensor's locally signed certificate to your certificate store
-
-After creating your locally signed certificate, import it to a trusted storage location. For example:
-
-1. Open the security certificate file and, in the **General** tab, select **Install Certificate** to start the **Certificate Import Wizard**.
-
-1. In **Store Location**, select **Local Machine**, then select **Next**.
-
-1. If a **User Account Control** prompt appears, select **Yes** to allow the app to make changes to your device.
-
-1. In the **Certificate Store** screen, select **Automatically select the certificate store based on the type of certificate**, then select **Next**.
-
-1. Select **Place all certificates in the following store**, then **Browse**, and then select the **Trusted Root Certification Authorities** store. When you're done, select **Next**. For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/certificate-store-trusted-root.png" alt-text="Screenshot of the certificate store screen where you can browse to the trusted root folder." lightbox="media/how-to-deploy-certificates/certificate-store-trusted-root.png":::
-
-1. Select **Finish** to complete the import.
-
-### Validate the certificate's common name
-
-1. To view the certificate's common name, open the certificate file, select the **Details** tab, and then select the **Subject** field.
-
- The certificate's common name will then appear next to **CN**.
-
-1. Sign in to your sensor console without a secure connection. In the **Your connection isn't private** warning screen, you might see a **NET::ERR_CERT_COMMON_NAME_INVALID** error message.
-
-1. Select the error message to expand it, and then copy the string next to **Subject**. For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/connection-is-not-private-subject.png" alt-text="Screenshot of the connection isn't private screen with the details expanded." lightbox="media/how-to-deploy-certificates/connection-is-not-private-subject.png":::
-
- The subject string should match the **CN** string in the security certificate's details.
-
-1. In your local file explorer, browse to the hosts file location, such as **This PC > Local Disk (C:) > Windows > System32 > drivers > etc**, and open the **hosts** file.
-
-1. In the hosts file, add a line at the end of the document with the sensor's IP address and the SSL certificate's common name that you copied in the previous steps. When you're done, save the changes. For example:
-
- :::image type="content" source="media/how-to-deploy-certificates/hosts-file.png" alt-text="Screenshot of the hosts file." lightbox="media/how-to-deploy-certificates/hosts-file.png":::
-
-### Troubleshoot certificate upload errors
-
-You won't be able to upload certificates to your OT sensors or on-premises management consoles if the certificates aren't created properly or are invalid. Use the following table to understand how to take action if your certificate upload fails and an error message is shown:
-
-| **Certificate validation error** | **Recommendation** |
-|--|--|
-| **Passphrase does not match to the key** | Make sure you have the correct passphrase. If the problem continues, try recreating the certificate using the correct passphrase. |
-| **Cannot validate chain of trust. The provided Certificate and Root CA don't match.** | Make sure a `.pem` file correlates to the `.crt` file. <br> If the problem continues, try recreating the certificate using the correct chain of trust, as defined by the `.pem` file. |
-| **This SSL certificate has expired and isn't considered valid.** | Create a new certificate with valid dates.|
-|**This certificate has been revoked by the CRL and can't be trusted for a secure connection** | Create a new unrevoked certificate. |
-|**The CRL (Certificate Revocation List) location is not reachable. Verify the URL can be accessed from this appliance** | Make sure that your network configuration allows the sensor or on-premises management console to reach the CRL server defined in the certificate. <br> For more information, see [CRL server access](#verify-crl-server-access). |
-|**Certificate validation failed** | This indicates a general error in the appliance. <br> Contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).|
-
-## Next steps
-
-For more information, see:
-- [Identify required appliances](how-to-identify-required-appliances.md)
-- [Manage individual sensors](how-to-manage-individual-sensors.md)
-- [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)
defender-for-iot How To Enhance Port And Vlan Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-enhance-port-and-vlan-name-resolution.md
- Title: Customize port and VLAN names on OT network sensors - Microsoft Defender for IoT
-description: Learn how to customize port and VLAN names on Microsoft Defender for IoT OT network sensors.
Previously updated : 01/12/2023---
-# Customize port and VLAN names on OT network sensors
-
-Enrich device data shown in Defender for IoT by customizing port and VLAN names on your OT network sensors.
-
-For example, you might want to assign a name to a non-reserved port that shows unusually high activity in order to call it out, or assign a name to a VLAN number to identify it more quickly.
-
-## Prerequisites
-
-To customize port and VLAN names, you must be able to access the OT network sensor as an **Admin** user.
-
-For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-
-## Customize names of detected ports
-
-Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. However, you might want to customize the name of a specific port to highlight it, such as when you're watching a port with unusually high detected activity.
-
-Port names are shown in Defender for IoT when [viewing device groups from the OT sensor's device map](how-to-work-with-the-sensor-device-map.md), or when you create OT sensor reports that include port information.
-
-**To customize a port name:**
-
-1. Sign into your OT sensor as an **Admin** user.
-
-1. Select **System settings** on the left and then, under **Network monitoring**, select **Port Naming**.
-
-1. In the **Port naming** pane that appears, enter the port number you want to name, the port's protocol, and a meaningful name. Supported protocol values include: **TCP**, **UDP**, and **BOTH**.
-
-1. Select **+ Add port** to customize an additional port, and **Save** when you're done.
-
-## Customize a VLAN name
-
-VLANs are either discovered automatically by the OT network sensor or added manually. Automatically discovered VLANs can't be edited or deleted, but manually added VLANs require a unique name. If a VLAN isn't explicitly named, the VLAN's number is shown instead.
-
-VLAN support is based on 802.1Q (up to VLAN ID 4094).
-
-VLAN names aren't synchronized between the OT network sensor and the on-premises management console. If you want to view customized VLAN names on the on-premises management console, [define the VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names) there as well.
-
-**To configure VLAN names on an OT network sensor:**
-
-1. Sign in to your OT sensor as an **Admin** user.
-
-1. Select **System Settings** on the left and then, under **Network monitoring**, select **VLAN Naming**.
-
-1. In the **VLAN naming** pane that appears, enter a VLAN ID and unique VLAN name. VLAN names can contain up to 50 ASCII characters.
-
-1. Select **+ Add VLAN** to customize an additional VLAN, and **Save** when you're done.
-
-1. **For Cisco switches**: Add the `monitor session 1 destination interface XX/XX encapsulation dot1q` command to the SPAN port configuration, where *XX/XX* is the name and number of the port.
-
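For context, a SPAN session that includes the command from the previous step might look similar to the following sketch. The session number and interface names are placeholders; use the values from your own switch configuration.

```
monitor session 1 source interface GigabitEthernet1/0/1 - 24
monitor session 1 destination interface GigabitEthernet1/0/48 encapsulation dot1q
```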
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Investigate detected devices from the OT sensor device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-
-> [!div class="nextstepaction"]
-> [Create sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md)
-
-> [!div class="nextstepaction"]
-> [Create sensor data mining queries](how-to-create-data-mining-queries.md)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
If your forwarding alert rules aren't working as expected, check the following d
- **Certificate validation**. Forwarding rules for [Syslog CEF](#syslog-server-actions), [Microsoft Sentinel](integrate-overview.md#microsoft-sentinel), and [QRadar](tutorial-qradar.md) support encryption and certificate validation.
- If your OT sensors or on-premises management console are configured to [validate certificates](how-to-deploy-certificates.md#verify-crl-server-access) and the certificate can't be verified, the alerts aren't forwarded.
+ If your OT sensors or on-premises management console are configured to [validate certificates](ot-deploy/create-ssl-certificates.md#verify-crl-server-access) and the certificate can't be verified, the alerts aren't forwarded.
In these cases, the sensor or on-premises management console is the session's client and initiator. Certificates are typically received from the server or use asymmetric encryption, where a specific certificate is provided to set up the integration.
defender-for-iot How To Gain Insight Into Global Regional And Local Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-gain-insight-into-global-regional-and-local-threats.md
- Title: Gain insight into global, regional, and local threats
-description: Gain insight into global, regional, and local threats by using the site map in the on-premises management console.
Previously updated : 01/01/2023---
-# Gain insight into global, regional, and local threats
-
-The site map in the on-premises management console helps you achieve full security coverage by dividing your network into geographical and logical segments that reflect your business topology:
--- **Geographical facility level**: A site reflects many devices grouped according to a geographical location presented on the map. By default, Microsoft Defender for IoT provides you with a world map. You update the map to reflect your organizational or business structure. For example, use a map that reflects sites across a specific country, city, or industrial campus. When the site color changes on the map, it provides the SOC team with an indication of critical system status in the facility.-
- The map is interactive and enables opening each site and delving into this site's information.
--- **Global logical layer**: A business unit is a way to divide your enterprise into logical segments according to specific industries. When you do this, your business topology is reflected on the map.-
- For example, a global company that contains glass factories, plastic factories, and automobile factories can be managed as three different business units. A physical site located in Toronto includes three different glass production lines, a plastic production line, and a truck engine production line. So, this site has representatives of all three business units.
-- **Geographical region level**: Create regions to divide a global enterprise into geographical regions. For example, the company that we described might use the regions North America, Western Europe, and Eastern Europe. North America has factories from all three business units. Western Europe has automobile factories and glass factories, and Eastern Europe has only plastic factories.

-- **Local logical segment level**: A zone is a logical segment within a site that defines, for example, a functional area or production line. Working with zones allows enforcement of security policies that are relevant to the zone definition. For example, a site that contains five production lines can be segmented into five zones.

-- **Local view level**: A local view of a single sensor installation provides insight into the operational and security status of connected devices.
-## Work with site map views
-
-The on-premises management console provides an overall view of your industrial network in a context-related map. The general map view presents the global map of your organization with the geographical location of each site.
--
-### Color-coded map views
-
-**Green**: The number of security events is below the threshold that Defender for IoT has defined for your system. No action is needed.
-
-**Yellow**: The number of security events is equal to the threshold that Defender for IoT has defined for your system. Consider investigating the events.
-
-**Red**: The number of security events is beyond the threshold that Defender for IoT has defined for your system. Take immediate action.
-
-### Risk-level map views
-
-**Risk Assessment**: The Risk Assessment view displays information on site risks. Risk information helps you prioritize mitigation and build a road map to plan security improvements.
-
-**Incident Response**: Get a centralized view of all unacknowledged alerts on each site across the enterprise. You can drill down and manage alerts detected in a specific site.
--
-**Malicious Activity**: If malware was detected, the site appears in red. This indicates that you should take immediate action.
--
-**Operational Alerts**: This map view for OT systems provides a better understanding of which OT system might experience operational incidents, such as PLC stops, firmware upload, and program upload.
--
-To choose a map view:
-
-1. Select **Default View** from the map.
-2. Select a view.
--
-## Update the site map image
-
-Defender for IoT provides a default world map. You can change it to reflect your organization: a country map or a city map, for example.
-
-To replace the map:
-
-1. On the left pane, select **System Settings**.
-
-2. Select **Change Site Map** and upload the graphic file to replace the default map.
-
-## Next step
-
-[View alerts](how-to-view-alerts.md)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
The following procedures describe how to deploy updated SSL/TLS certificates, su
If an upload fails, contact your security or IT administrator. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md).
-1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**.
+1. In the **Validation of on-premises management console certificate** area, select **Mandatory** if SSL/TLS certificate validation is required. Otherwise, select **None**.
- If you've selected **Required** and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
+ If you've selected **Mandatory** and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
1. Select **Save** to save your certificate settings.
When you're done, use the following procedures to validate your certificate file
1. Select the **Confirm** option to confirm the warning.
-1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**.
+1. In the **Validation of on-premises management console certificate** area, select **Mandatory** if SSL/TLS certificate validation is required. Otherwise, select **None**.
If this option is toggled on and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Before you perform the procedures in this article, verify that you've met the fo
- Make sure that the primary on-premises management console is fully [configured](how-to-manage-the-on-premises-management-console.md), including at least two [OT network sensors connected](ot-deploy/connect-sensors-to-management.md) and visible in the console UI, as well as the scheduled backups or VLAN settings. All settings are applied to the secondary appliance automatically after pairing. -- Make sure that your SSL/TLS certificates meet required criteria. For more information, see [Deploy OT appliance certificates](how-to-deploy-certificates.md).
+- Make sure that your SSL/TLS certificates meet required criteria. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md).
- Make sure that your organizational security policy grants you access to the following services, on the primary and secondary on-premises management console. These services also allow the connection between the sensors and secondary on-premises management console:
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
- Title: Prepare your OT network for Microsoft Defender for IoT
-description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Microsoft Defender for IoT appliances.
Previously updated : 06/02/2022---
-# Prepare your OT network for Microsoft Defender for IoT
-
-This article describes how to set up your OT network to work with Microsoft Defender for IoT components, including the OT network sensors, the Azure portal, and an optional on-premises management console.
-
-OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for deep visibility into OT/ICS/IoT risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
-
-This article is intended for personnel experienced in operating and managing OT and IoT networks, such as automation engineers, plant managers, OT network infrastructure service providers, cybersecurity teams, CISOs, and CIOs.
-
-We recommend that you use this article together with our [pre-deployment checklist](pre-deployment-checklist.md).
-
-For assistance or support, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-
-## Prerequisites
-
-Before performing the procedures in this article, make sure you understand your own network architecture and how you'll connect to Defender for IoT. For more information, see:
-- [Microsoft Defender for IoT system architecture](architecture.md)
-- [Sensor connection methods](architecture-connections.md)
-- [Best practices for planning your OT network monitoring](best-practices/plan-network-monitoring.md)
-## On-site deployment tasks
-
-Perform the steps in this section before deploying Defender for IoT on your network.
-
-Make sure to perform each step methodically, requesting the information and reviewing the data you receive. Prepare and configure your site, and then validate your configuration.
-
-### Collect site information
-
-Record the following site information:
-- Sensor management network information.
-- Site network architecture.
-- Physical environment.
-- System integrations.
-- Planned user credentials.
-- Configuration workstation.
-- TLS/SSL certificates (optional but recommended).
-- SMTP authentication (optional). To use the SMTP server with authentication, prepare the credentials required for your server.
-- DNS servers (optional). Prepare your DNS server's IP and host name.
-### Prepare a configuration workstation
-
-**To prepare a Windows or Mac workstation**:
-- Make sure that you can connect to the sensor management interface.
-- Make sure that you have terminal software (like PuTTY) or a supported browser. Supported browsers include the latest versions of Microsoft Edge, Chrome, Firefox, or Safari (Mac only).
- For more information, see [recommended browsers for the Azure portal](../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers).
--- Make sure the required firewall rules are open on the workstation. Verify that your organizational security policy allows access as required. For more information, see [Networking requirements](#networking-requirements).-
-### Set up certificates
-
-After you've installed the Defender for IoT sensor or on-premises management console software, a local, self-signed certificate is generated and used to access the sensor web application.
-
-The first time they sign in to Defender for IoT, administrator users are prompted to provide an SSL/TLS certificate. Optional certificate validation is enabled by default.
-
-We recommend having your certificates ready before you start your deployment. For more information, see [Defender for IoT installation](how-to-install-software.md) and [About Certificates](how-to-deploy-certificates.md).
-
-### Plan rack installation
-
-**To plan your rack installation**:
-
-1. Prepare a monitor and a keyboard for your appliance network settings.
-
-1. Allocate the rack space for the appliance.
-
-1. Have AC power available for the appliance.
-
-1. Prepare the LAN cable for connecting the management port to the network switch.
-
-1. Prepare the LAN cables for connecting switch SPAN (mirror) ports and network taps to the Defender for IoT appliance.
-
-1. Configure, connect, and validate SPAN ports in the mirrored switches using one of the following methods:
-
- |Method |Description |
- |||
- |[Switch SPAN port](traffic-mirroring/configure-mirror-span.md) | Mirror local traffic from interfaces on the switch to a different interface on the same switch. |
- |[Remote SPAN (RSPAN)](traffic-mirroring/configure-mirror-rspan.md) | Mirror traffic from multiple, distributed source ports into a dedicated remote VLAN. |
- |[Active or passive aggregation (TAP)](traffic-mirroring/configure-mirror-tap.md) | Mirror traffic by installing an active or passive aggregation terminal access point (TAP) inline to the network cable. |
- |[ERSPAN](traffic-mirroring/configure-mirror-erspan.md) | Mirror traffic with ERSPAN encapsulation when you need to extend monitored traffic across Layer 3 domains, when using specific Cisco routers and switches. |
- |[ESXi vSwitch](traffic-mirroring/configure-mirror-esxi.md) | Use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port. |
- |[Hyper-V vSwitch](traffic-mirroring/configure-mirror-hyper-v.md) | Use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port. |
-
- > [!NOTE]
- > SPAN and RSPAN are Cisco terminology. Other brands of switches have similar functionality but might use different terminology.
- >
-
-1. Connect the configured SPAN port to a computer running Wireshark, and verify that the port is configured correctly.
-
-1. Open all the relevant firewall ports.
-
-### Validate your network
-
-After preparing your network, use the guidance in this section to validate whether you're ready to deploy Defender for IoT.
-
-Try to obtain a sample of recorded traffic (PCAP file) from the switch SPAN or mirror port. This sample will:

-- Validate that the switch is configured properly.
-- Confirm that the traffic that goes through the switch is relevant for monitoring (OT traffic).
-- Identify the bandwidth and the estimated number of devices connected to this switch.
-For example, you can record a sample PCAP file for a few minutes by connecting a laptop to an already configured SPAN port through the Wireshark application.
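If you prefer to capture from the command line instead of the Wireshark UI, a sketch like the following can produce the same kind of sample. It assumes a Linux laptop with `tcpdump` and `tshark` installed; the interface name is a placeholder.

```bash
# Hypothetical capture from the SPAN port (eth0 is a placeholder for the monitoring interface).
sudo timeout 300 tcpdump -i eth0 -w span-sample.pcap

# Print the protocol breakdown, similar to Wireshark's Statistics > Protocol Hierarchy view.
tshark -r span-sample.pcap -q -z io,phs
```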
-
-**To use Wireshark to validate your network**:
-- Check that *Unicast packets* are present in the recorded traffic. Unicast traffic flows from one address to another. If most of the traffic consists of ARP messages, the switch setup is incorrect.
-- Go to **Statistics** > **Protocol Hierarchy**. Verify that industrial OT protocols are present.
-For example:
--
-## Networking requirements
-
-Use the following tables to ensure that the required firewall ports are open on your workstation, and verify that your organizational security policy allows the required access.
-
-### User access to the sensor and management console
-
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| SSH | TCP | In/Out | 22 | CLI | To access the CLI | Client | Sensor and on-premises management console |
-| HTTPS | TCP | In/Out | 443 | To access the sensor, and on-premises management console web console | Access to Web console | Client | Sensor and on-premises management console |
-
-### Sensor access to Azure portal
-
-| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.<br><br>**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`<br> `download.microsoft.com`|
-
-### Sensor access to the on-premises management console
-
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console | Sensor | On-premises management console |
-| TLS/SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console |
-
-### Other firewall rules for external services (optional)
-
-Open these ports to allow extra services for Defender for IoT.
-
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| SMTP | TCP | Out | 25 | Email | Used to open the customer's mail server, in order to send emails for alerts, and events | Sensor and On-premises management console | Email server |
-| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port | On-premises management console and Sensor | DNS server |
-| HTTP | TCP | Out | 80 | The CRL download for certificate validation when uploading certificates. | Access to the CRL server | Sensor and on-premises management console | CRL server |
-| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring | Sensor | Relevant network element |
-| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health | On-premises management console and Sensor | SNMP server |
-| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAP server |
-| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server |
-| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server | On-premises management console and Sensor | Syslog server |
-| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> In addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
-
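To spot-check the core management ports from your configuration workstation, you can run a quick connectivity test such as the following sketch. It assumes netcat is available; the IP address is a placeholder for your sensor or on-premises management console.

```bash
# Hypothetical connectivity checks (the IP address is a placeholder).
nc -vz 10.100.10.1 443   # HTTPS access to the web console
nc -vz 10.100.10.1 22    # SSH access to the CLI
```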
-## Choose a cloud connection method
-
-If you're setting up OT sensors and connecting them to the cloud, understand supported cloud connection methods, and make sure to connect your sensors as needed.
-
-For more information, see:
-- [OT sensor cloud connection methods](architecture-connections.md)
-- [Connect your OT sensors to the cloud](connect-sensors.md)
-## Troubleshooting
-
-This section provides troubleshooting for common issues when preparing your network for a Defender for IoT deployment.
-
-### Can't connect by using a web interface
-
-1. Verify that the computer you're trying to connect from is on the same network as the appliance.
-
-2. Verify that the GUI network is connected to the management port on the sensor.
-
-3. Ping the appliance IP address. If there's no response to ping:
-
- 1. Connect a monitor and a keyboard to the appliance.
-
- 1. Use the **support** user* and password to sign in.
-
- 1. Use the command **network list** to see the current IP address.
-
-4. If the network parameters are misconfigured, sign in to the OT sensor as the **cyberx_host** user* to rerun the OT monitoring software configuration wizard. For example:
-
- ```bash
- root@xsense:/# sudo dpkg-reconfigure iot-sensor
- ```
-
- The configuration wizard starts automatically. For more information, see [Install OT monitoring software](../how-to-install-software.md#install-ot-monitoring-software).
-
-5. Restart the sensor machine and sign in with the **support** user*. Run the **network list** command to verify that the parameters were changed.
-
-6. Try to ping and connect from the GUI again.
-
-(*) For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-
-### Appliance isn't responding
-
-1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-
-2. Use the *support* credentials to sign in.
-
-3. Use the **system sanity** command and check that all processes are running.
-
- :::image type="content" source="media/how-to-set-up-your-network/system-sanity-command.png" alt-text="Screenshot of the system sanity command.":::
-
-For any other issues, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-
-## Next steps
-
-For more information, see:
-- [Predeployment checklist](pre-deployment-checklist.md)
-- [Quickstart: Get started with Defender for IoT](getting-started.md)
-- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
-- [Defender for IoT installation](how-to-install-software.md)
-- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
-- [Microsoft Defender for IoT system architecture](architecture.md)
-- [Sensor connection methods](architecture-connections.md)
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
- Title: Troubleshoot the OT sensor and on-premises management console
-description: Troubleshoot your OT sensor and on-premises management console to eliminate any problems you might be having.
Previously updated : 06/15/2022--
-# Troubleshoot the sensor and on-premises management console
-
-This article describes basic troubleshooting tools for the sensor and the on-premises management console. In addition to the items described here, you can check the health of your system in the following ways:
-- **Alerts**: An alert is created when the sensor interface that monitors the traffic is down.
-- **SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.
-- **System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.
-## Check system health
-
-Check your system health from the sensor or on-premises management console.
-
-**To access the system health tool**:
-
-1. Sign in to the sensor or on-premises management console with the *support* user credentials.
-
-1. Select **System Statistics** from the **System Settings** window.
-
- :::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
-
-1. System health data appears. Select an item on the left to view more details in the box. For example:
-
- :::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot that shows the system health check.":::
-
-System health checks include the following:
-
-|Name |Description |
-|||
-|**Sanity** | |
-|- Appliance | Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`. |
-|- Version | Displays the appliance version. |
-|- Network Properties | Displays the sensor network parameters. |
-|**Redis** | |
-|- Memory | Provides the overall picture of memory usage, such as how much memory was used and how much remained. |
-|- Longest Key | Displays the longest keys that might cause extensive memory usage. |
-|**System** | |
-|- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. |
-|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) |
-|- Network Statistics | Displays your network statistics. |
-|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. |
-|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for the extra backup files |
-|- ifconfig | Displays the parameters for the appliance's physical interfaces. |
-|- CyberX nload | Displays network traffic and bandwidth by using the six-second tests. |
-|- Errors from Core, log | Displays errors from the core log file. |
-
-### Check system health by using the CLI
-
-Verify that the system is up and running prior to testing the system's sanity.
-
-For more information, see [CLI command reference from OT network sensors](cli-ot-sensor.md).
-
-**To test the system's sanity**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
-
-1. Enter `system sanity`.
-
-1. Check that all the services are green (running).
-
- :::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services.":::
-
-1. Verify that **System is UP! (prod)** appears at the bottom.
-
-Verify that the correct version is used:
-
-**To check the system's version**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
-
-1. Enter `system version`.
-
-1. Check that the correct version appears.
-
-Verify that all the input interfaces configured during the installation process are running:
-
-**To validate the system's network status**:
-
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the *support* user.
-
-1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
-
-1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
-
- :::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces.":::
-
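-If you repeat these checks regularly, you can run them in one pass from a Linux workstation. The following is a minimal sketch, assuming the *support* user is allowed to run the documented CLI commands over a non-interactive SSH session; the sensor IP address is a placeholder.
-
-```bash
-#!/bin/bash
-# Hypothetical health-check sequence for an OT sensor, run over SSH.
-# Assumes the *support* user accepts these commands non-interactively.
-SENSOR_IP="10.100.10.1"   # replace with your appliance's IP address
-
-for check in "system sanity" "system version" "network list"; do
-    echo "=== ${check} ==="
-    # -t allocates a terminal in case the restricted CLI expects one
-    ssh -t "support@${SENSOR_IP}" "${check}"
-done
-```
-
-Review the output for **System is UP! (prod)**, the expected version number, and the full list of monitoring interfaces.
-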
-Verify that you can access the console web GUI:
-
-**To check that management has access to the UI**:
-
-1. Connect a laptop with an Ethernet cable to the management port (**Gb1**).
-
-1. Define the laptop NIC address to be in the same range as the appliance.
-
- :::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI." border="false":::
-
-1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
-
-1. Open the Chrome browser in the laptop and enter the appliance's IP address.
-
-1. In the **Your connection is not private** window, select **Advanced** and proceed.
-
-1. The test is successful when the Defender for IoT sign-in screen appears.
-
- :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console.":::
-
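-You can also script a quick reachability check from the laptop instead of testing only in the browser. This is a minimal sketch, assuming the laptop NIC is already configured in the appliance's range; the appliance address below is the default noted above, so replace it if yours differs.
-
-```bash
-#!/bin/bash
-# Hypothetical connectivity check from the laptop connected to the management port (Gb1).
-APPLIANCE_IP="10.100.10.1"   # default from the procedure above; replace if needed
-
-# 1. Confirm basic reachability
-ping -c 4 "${APPLIANCE_IP}" || { echo "No ICMP reply from ${APPLIANCE_IP}"; exit 1; }
-
-# 2. Confirm the web UI answers over HTTPS (-k skips validation of the self-signed certificate)
-curl -k --silent --output /dev/null --write-out "HTTPS status: %{http_code}\n" "https://${APPLIANCE_IP}/"
-```
-
-A ping reply and an HTTP status such as 200 or 302 indicate that the sign-in page should load in the browser.
-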
-## Troubleshoot sensors
--
-### You can't connect by using a web interface
-
-1. Verify that the computer that you're trying to connect from is on the same network as the appliance.
-
-1. Verify that the GUI network is connected to the management port.
-
-1. Ping the appliance's IP address. If there's no ping:
-
- 1. Connect a monitor and a keyboard to the appliance.
-
- 1. Use the *support* user and password to sign in.
-
- 1. Use the command `network list` to see the current IP address.
-
-1. If the network parameters are misconfigured, use the following procedure to change them:
-
- 1. Use the command `network edit-settings`.
-
- 1. To change the management network IP address, select **Y**.
-
- 1. To change the subnet mask, select **Y**.
-
- 1. To change the DNS, select **Y**.
-
- 1. To change the default gateway IP address, select **Y**.
-
- 1. For the input interface change (sensor only), select **N**.
-
- 1. To apply the settings, select **Y**.
-
-1. After restart, connect with the *support* user credentials and use the `network list` command to verify that the parameters were changed.
-
-1. Try to ping and connect from the GUI again.
-
-### The appliance isn't responding
-
-1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-
-1. Use the *support* user credentials to sign in.
-
-1. Use the `system sanity` command and check that all processes are running. For example:
-
- :::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command.":::
-
-For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
--
-### Investigate password failure at initial sign-in
-
-When signing into a pre-configured sensor for the first time, you'll need to perform password recovery as follows:
-
-1. On the Defender for IoT sign in screen, select **Password recovery**. The **Password recovery** screen opens.
-
-1. Select either **CyberX** or **Support**, and copy the unique identifier.
-
-1. Navigate to the Azure portal and select **Sites and Sensors**.
-
-1. Select the **More Actions** drop down menu and select **Recover on-premises management console password**.
-
- :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text=" Screenshot of the recover on-premises management console password option.":::
-
-1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded. Don't extract or modify the zip file.
-
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box.":::
-
-1. On the **Password recovery** screen, select **Upload**. The **Upload Password Recovery File** window opens.
-
-1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
-
-1. Select **Next**. Your username and the system-generated password for your management console then appear.
-
- > [!NOTE]
- > When you sign in to a sensor or on-premises management console for the first time, it's linked to your Azure subscription, which you'll need if you need to recover the password for the *cyberx*, or *support* user. For more information, see the relevant procedure for [sensors](manage-users-sensor.md#recover-privileged-access-to-a-sensor) or an [on-premises management console](manage-users-on-premises-management-console.md#recover-privileged-access-to-an-on-premises-management-console).
-
-### Investigate a lack of traffic
-
-An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, investigate where the traffic is missing. Make sure the SPAN cable is connected and that there was no change in the SPAN architecture.
--
-### Check system performance
-
-When a new sensor is deployed or a sensor is working slowly or not showing any alerts, you can check system performance.
-
-1. In the Defender for IoT dashboard > **Overview**, make sure that `PPS > 0`.
-1. In **Devices**, check that devices are being discovered.
-1. In **Data Mining**, generate a report.
-1. In **Trends & Statistics** window, create a dashboard.
-1. In **Alerts**, check that the alert was created.
--
-### Investigate a lack of expected alerts
-
-If the **Alerts** window doesn't show an alert that you expected, verify the following:
-
-1. Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert hasn't been handled yet, the sensor console does not show a new alert.
-1. Make sure you did not exclude this alert by using the **Alert Exclusion** rules in the management console.
-
-### Investigate dashboard that shows no data
-
-When the dashboards in the **Trends & Statistics** window show no data, do the following:
-1. [Check system performance](#check-system-performance).
-1. Make sure the time and region settings are properly configured and not set to a future time.
-
-### Investigate a device map that shows only broadcasting devices
-
-When devices shown on the device map appear not connected to each other, something might be wrong with the SPAN port configuration. That is, you might be seeing only broadcasting devices and no unicast traffic.
-
-1. Validate that you're seeing only broadcast traffic. To do this, in **Data Mining**, select **Create report**. In **Create new report**, specify the report fields. In **Choose Category**, choose **Select all**.
-1. Save the report, and review it to see if only broadcast and multicast traffic (and no unicast traffic) appears. If so, ask your networking team to fix the SPAN port configuration so that you can see unicast traffic as well. Alternatively, you can record a PCAP directly from the switch, or connect a laptop and use Wireshark (for a scripted example, see the sketch below).
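-
-If you connect a laptop to the SPAN destination port to verify the traffic yourself, a packet capture can quickly show whether any unicast traffic is arriving. The following is a minimal sketch using `tcpdump`, assuming a Linux laptop whose capture interface is `eth0`; the interface name and sample size are placeholders.
-
-```bash
-#!/bin/bash
-# Hypothetical check from a laptop connected to the SPAN/mirror destination port.
-IFACE="eth0"              # replace with the interface that receives the mirrored traffic
-CAPTURE="span-check.pcap"
-
-# Capture a short sample of mirrored traffic (1,000 frames) without resolving names
-sudo tcpdump -i "${IFACE}" -nn -c 1000 -w "${CAPTURE}"
-
-# Count broadcast/multicast frames vs. unicast frames at layer 2
-total=$(sudo tcpdump -nn -r "${CAPTURE}" 2>/dev/null | wc -l)
-bcast=$(sudo tcpdump -nn -r "${CAPTURE}" 'ether broadcast or ether multicast' 2>/dev/null | wc -l)
-echo "Total: ${total}, broadcast/multicast: ${bcast}, unicast: $((total - bcast))"
-```
-
-If the unicast count stays at or near zero, the SPAN session is most likely mirroring only broadcast domains or the wrong ports.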
-
-### Connect the sensor to NTP
-
-You can configure a standalone sensor, or a management console together with the sensors it controls, to connect to NTP.
-
-To connect a standalone sensor to NTP:
--- [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).-
-To connect a sensor controlled by the management console to NTP:
--- The connection to NTP is configured on the management console. All the sensors that the management console controls get the NTP connection automatically.-
-### Investigate when devices aren't shown on the map, or you have multiple internet-related alerts
-
-Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:
-
-1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
-1. Copy any public IP ranges that are actually internal (private) to your network, and add them to the subnet list.
-1. Generate a new data-mining report for internet connections.
-1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
-
-### Clearing sensor data
-
-In cases where the sensor needs to be relocated or erased, all learned data can be cleared from the sensor.
-
-### Export logs from the sensor console for troubleshooting
-
-For further troubleshooting, you may want to export logs to send to the support team, such as database or operating system logs.
-
-**To export log data**:
-
-1. In the sensor console, go to **System settings** > **Sensor management** > **Backup & restore** > **Backup**.
-
-1. In the **Export Troubleshooting Information** dialog:
-
- 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**.
-
- 1. Select the logs you would like to export.
-
- 1. Select **Export**.
-
- The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog.
-
- For example:
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png" alt-text="Screenshot of the export troubleshooting information dialog in the sensor console. " lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png":::
-
-1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password.
-
-1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team.
-
- To keep your logs secure, make sure to forward the password separately from the downloaded log.
-
-> [!NOTE]
-> Support ticket diagnostics can be downloaded from the sensor console and then uploaded directly to the support team in the Azure portal.
-
-## Troubleshoot an on-premises management console
-
-### Investigate a lack of expected alerts
-
-If you don't see an expected alert on the on-premises **Alerts** page, do the following to troubleshoot:
--- Verify whether the alert is already listed as a reaction to a different security instance. If it has, and that alert hasn't yet been handled, a new alert isn't shown elsewhere.--- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).-
-### Tweak the Quality of Service (QoS)
-
-To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console.
-
-The default is 50, which means that no more than 50 alerts are sent to external systems in a single communication session between an appliance and the on-premises management console.
-
-To limit the number of alerts, use the `notifications.max_number_to_report` property available in `/var/cyberx/properties/management.properties`. No restart is needed after you change this property.
-
-**To tweak the Quality of Service (QoS)**:
-
-1. Sign in as a Defender for IoT user.
-
-1. Verify the default values:
-
-    ```bash
-    grep "notifications" /var/cyberx/properties/management.properties
-    ```
-
- The following default values appear:
-
- ```bash
- notifications.max_number_to_report=50
- notifications.max_time_to_report=10 (seconds)
- ```
-
-1. Edit the default settings:
-
- ```bash
- sudo nano /var/cyberx/properties/management.properties
- ```
-
-1. Edit the settings of the following lines:
-
- ```bash
- notifications.max_number_to_report=50
- notifications.max_time_to_report=10 (seconds)
- ```
-
-1. Save the changes. No restart is required.
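-
-As an alternative to editing the file interactively, you can change the value with a single command. This is a minimal sketch, assuming the property already exists in the file and the signed-in user has sudo permissions; the new limit shown is only an example value.
-
-```bash
-#!/bin/bash
-# Hypothetical non-interactive edit of the QoS alert limit.
-PROPS="/var/cyberx/properties/management.properties"
-NEW_LIMIT=25    # example value; pick the limit that fits your environment
-
-# Keep a backup before changing anything
-sudo cp "${PROPS}" "${PROPS}.bak"
-
-# Replace the current value of notifications.max_number_to_report
-sudo sed -i "s/^notifications\.max_number_to_report=.*/notifications.max_number_to_report=${NEW_LIMIT}/" "${PROPS}"
-
-# Confirm the change; no restart is required
-grep "notifications" "${PROPS}"
-```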
-
-### Export logs from the on-premises management console for troubleshooting
-
-For further troubleshooting, you may want to export logs to send to the support team, such as audit or database logs.
-
-**To export log data**:
-
-1. In the on-premises management console, select **System Settings > Export**.
-
-1. In the **Export Troubleshooting Information** dialog:
-
- 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**.
-
- 1. Select the logs you would like to export.
-
- 1. Select **Export**.
-
- The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog.
-
- For example:
-
- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png" alt-text="Screenshot of the Export Troubleshooting Information dialog in the on-premises management console." lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png":::
-
-1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password.
-
-1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team.
-
- To keep your logs secure, make sure to forward the password separately from the downloaded log.
-
-## Next steps
--- [View alerts](how-to-view-alerts.md)--- [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md)--- [Track on-premises user activity](track-user-activity.md)
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
Use a device map to retrieve, analyze, and manage device information, either all
To perform the procedures in this article, make sure that you have: -- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](how-to-activate-and-set-up-your-sensor.md), with network traffic ingested
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md), with network traffic ingested
- Access to your OT sensor or on-premises management console. Users with the **Viewer** role can view data on the map. To import or export data or edit the map view, you need access as a **Security Analyst** or **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-To view devices across multiple sensors in a zone, you'll also need an on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md), [activated, and configured](how-to-activate-and-set-up-your-on-premises-management-console.md), with multiple sensors connected and assigned to sites and zones.
+To view devices across multiple sensors in a zone, you'll also need an on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md), [activated, and configured](ot-deploy/activate-deploy-management.md), with multiple sensors connected and assigned to sites and zones.
## View devices on OT sensor device map
To view devices across multiple sensors in a zone, you'll also need an on-premis
- Starred devices are those that had been marked as important - Devices with no alerts are shown in black, or grey in the zoomed-in connections view
- For example:
+ For example:
:::image type="content" source="media/how-to-work-with-maps/device-map-default.png" alt-text="Screenshot of a default view of an OT sensor's device map." lightbox="media/how-to-work-with-maps/device-map-default.png":::
To view devices across multiple sensors in a zone, you'll also need an on-premis
- The number of devices grouped in a subnet in an IT network, if relevant. This number of devices is shown in a black circle. - Whether the device is newly detected or unauthorized.
-1. Right-click a specific device and select **View properties** to drill down further to the **Map View** tab on the device's [device details page](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory).
+1. Right-click a specific device and select **View properties** to drill down further to the **Map View** tab on the device's [device details page](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory).
### Modify the OT sensor map display
To see device details, select a device and expand the device details pane on the
- Select **Event Timeline** to jump to the device's [event timeline](how-to-track-sensor-activity.md) - Select **Device Details** to jump to a full [device details page](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory). - ### View IT subnets from an OT sensor device map By default, IT devices are automatically aggregated by [subnet](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets), so that the map focuses on your local OT and IoT networks.
By default, IT devices are automatically aggregated by [subnet](how-to-control-w
1. Sign into your OT sensor and select **Device map**. 1. Select one or more expanded subnets and then select **Collapse All**. - ## Create a custom device group In addition to OT sensor's [built-in device groups](#built-in-device-map-groups), create new custom groups as needed to use when highlighting or filtering devices on the map.
In addition to OT sensor's [built-in device groups](#built-in-device-map-groups)
1. In the **Add custom group** pane:
- - In the **Name** field, enter a meaningful name for your group, with up to 30 characters.
+ - In the **Name** field, enter a meaningful name for your group, with up to 30 characters.
- From the **Copy from groups** menu, select any groups you want to copy devices from. - From the **Devices** menu, select any extra devices to add to your group.
Use one of the following options to import and export device data:
- **Import Devices**. Select to import devices from a pre-configured .CSV file. - **Export Devices**. Select to export all currently displayed devices, with full details, to a .CSV file.-- **Export Device Summary**. Select to export a high level summary of all currently displayed devices to a .CSV file. -
+- **Export Device Summary**. Select to export a high level summary of all currently displayed devices to a .CSV file.
## Edit devices
-1. Sign into an OT sensor and select **Device map**.
+1. Sign into an OT sensor and select **Device map**.
1. Right-click a device to open the device options menu, and then select any of the following options:
You can only merge [authorized devices](device-inventory.md#unauthorized-devices
> [!IMPORTANT] > You can't undo a device merge. If you mistakenly merged two devices, delete the devices and then wait for the sensor to rediscover both.
->
**To merge multiple devices**: 1. Sign into your OT sensor and select **Device map**.
-1. Select the authorized devices you want to merge by using the SHIFT key to select more than one device, and then right-click and select **Merge**.
+1. Select the authorized devices you want to merge by using the SHIFT key to select more than one device, and then right-click and select **Merge**.
1. At the prompt, select **Confirm** to confirm that you want to merge the devices.
You may have situations where you'd want to handle multiple notifications togeth
When you handle multiple notifications together, you may still have remaining notifications that need to be handled manually, such as for new IP addresses or no subnets detected. - ### Device notification responses The following table lists available responses for each notification, and when we recommend using each one:
The following table lists available responses for each notification, and when we
|--|--|--|--| | **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. | - **Set Additional IP to Device**: Merge the devices <br />- **Replace Existing IP**: Replaces any existing IP address with the new address <br /> - **Dismiss**: Remove the notification. |**Dismiss** | | **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnet Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** |
-| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. |No automatic handling|
+| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. | Set with new operating system only if not already configured manually. <br><br>If the operating system has already been configured: **Dismiss**. |
| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: <br />Remove the notification. |**Dismiss** | | **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling|
On the on-premises management console, zone maps show all network elements relat
1. Right-click a device shown in red and select **View alerts** to jump to the **Alerts page**, with alerts filtered only for the selected device. - ## Built-in device map groups The following table lists the device groups available out-of-the-box on the OT sensor **Device map** page. [Create extra, custom groups](#create-a-custom-device-group) as needed for your organization.
The following table lists the device groups available out-of-the-box on the OT s
## Next steps For more information, see [Investigate sensor detections in a Device Inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md).-
defender-for-iot Manage Users On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-on-premises-management-console.md
Before you create access groups, we also recommend that you:
Users with **Admin** roles have access to all business topology entities by default, and can't be assigned to access groups. -- Carefully set up your business topology. For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window. For more information, see:-
- - [Work with site map views](how-to-gain-insight-into-global-regional-and-local-threats.md#work-with-site-map-views)
- - [Create zones](ot-deploy/sites-and-zones-on-premises.md#create-zones)
- - [Assign sensors to zones](ot-deploy/sites-and-zones-on-premises.md#manage-sites-and-zones)
+- Carefully set up your business topology. For a rule to be successfully applied, you must assign sensors to zones in the **Site Management** window. For more information, see [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md).
**To create access groups**:
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
For example, use Active Directory when you have a large number of users that you
|Name |Description | ||| |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.contoso.com`. <br><br> If you encounter an issue with the integration using the FQDN, check your DNS configuration. You can also enter the explicit IP of the LDAP server instead of the FQDN when setting up the integration. |
- |**Domain Controller Port** | The port where your LDAP is configured. |
+ |**Domain Controller Port** | The port where your LDAP is configured. For example, use port 636 for LDAPS (SSL) connections. |
|**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** | |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. |
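Before saving the integration, you can optionally confirm that the LDAP server and bind account work from your network. The following is a minimal sketch using `ldapsearch` from a Linux workstation, assuming an LDAPS listener on port 636; the host name is taken from the example above, while the bind account and base DN are illustrative placeholders.

```bash
#!/bin/bash
# Hypothetical pre-check of LDAPS connectivity and credentials before configuring the sensor.
LDAP_HOST="host1.subdomain.contoso.com"            # example FQDN from the table above
BIND_DN="svc-defender@subdomain.contoso.com"       # placeholder bind account
BASE_DN="DC=subdomain,DC=contoso,DC=com"           # placeholder search base

# List the Active Directory groups you plan to map to sensor permission levels
ldapsearch -H "ldaps://${LDAP_HOST}:636" -D "${BIND_DN}" -W \
  -b "${BASE_DN}" "(objectClass=group)" cn
```

If the query returns your groups, the FQDN, port, and credentials you plan to enter on the sensor are reachable and valid.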
defender-for-iot Sites And Zones On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/sites-and-zones-on-premises.md
An on-premises management console adds the extra layers of *business units* and
- A clear understanding of where your OT network sensors are placed in your network, and how you want to [segment your network into sites and zones](../concept-zero-trust.md). -- An on-premises management console [installed](install-software-on-premises-management-console.md) and [activated](../how-to-activate-and-set-up-your-on-premises-management-console.md)
+- An on-premises management console [installed](install-software-on-premises-management-console.md) and [activated](activate-deploy-management.md)
- OT sensors [connected to your on-premises management console](connect-sensors-to-management.md)
defender-for-iot Pre Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/pre-deployment-checklist.md
- Title: OT network pre-deployment checklist
-description: Use this checklist as a worksheet to ensure that your OT network is ready for a Microsoft Defender for IoT deployment.
Previously updated : 02/22/2022----
-# Predeployment checklist
-
-Use this checklist as a worksheet to ensure that your OT network is ready for a Microsoft Defender for IoT deployment.
-
-We recommend printing this browser page or using the print function to save it as a PDF file where you can check off things as you go. For example, on Windows machines, press **CTRL+P** to access the Print dialog for this page.
-
-Use this checklist together with [Prepare your OT network for Microsoft Defender for IoT](how-to-set-up-your-network.md).
-
-## Site checklist
-
-Review the following items before deploying your site:
-
-| **#** | **Task or activity** | **Status** | **Comments** |
-|--|--|--|--|
-| 1 | If you're using physical appliances, order your appliances. <br>For more information, see [Identify required appliances](how-to-identify-required-appliances.md). | ☐ | |
-| 2 | Identify the managed switches you want to monitor. | ☐ | |
-| 3 | Provide network details for sensors (IP address, subnet, D-GW, DNS, host). | ☐ | |
-| 4 | Create necessary firewall rules and the access list. For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements).| ☐ | |
-| 5 | Configure port mirroring, defining the *source* as the physical ports or VLANs you want to monitor, and the *destination* as the output port that connects to the OT sensor. | ☐ | |
-| 7 | Connect the switch to the OT sensor. | ☐ | |
-| 8 | Create Active Directory groups or local users. | ☐ | |
-| 9 | On the Azure portal, add a Defender for IoT subscription and an OT sensor, and then activate your sensor. | ☐ | |
-| 10 | Validate the link and incoming traffic to the OT sensor. | ☐ | |
--
-| **Date** | **Note** | **Deployment date** | **Note** |
-|--|--|--|--|
-| Defender for IoT | | Site name* | |
-| Name | | Name | |
-| Position | | Position | |
-
-## Architecture review
-
-Review your industrial network architecture to define the proper location for the Defender for IoT equipment.
-
-1. **Global network diagram** - View a global network diagram of the industrial OT environment. For example:
-
- :::image type="content" source="media/how-to-set-up-your-network/backbone-switch.png" alt-text="Diagram of the industrial OT environment for the global network.":::
-
- > [!NOTE]
- > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-
-1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You'll need this information when onboarding your subscription to Defender for IoT in the Azure portal. During the onboarding process, you'll be prompted to enter the number of devices in increments of 100. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
-
-1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description (optional).
-
- | **#** | **Subnet name** | **Description** |
- |--| | |
- | 1 | |
- | 2 | |
- | 3 | |
- | 4 | |
-
-1. **VLANs** - Provide a VLAN list of the production networks.
-
- | **#** | **VLAN Name** | **Description** |
- |--|--|--|
- | 1 | | |
- | 2 | | |
- | 3 | | |
- | 4 | | |
-
-1. **Switch models and mirroring support** - To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to:
-
- | **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
- |--|--|--|--|
- | 1 | | |
- | 2 | | |
- | 3 | | |
- | 4 | | |
-
-1. **Third-party switch management** - Does a third party manage the switches? Y or N
-
- If yes, who? __________________________________
-
- What is their policy? __________________________________
-
- For example:
-
- - Siemens
-
- - Rockwell Automation – EtherNet/IP
-
- - Emerson – DeltaV, Ovation
-
-1. **Serial connection** - Are there devices that communicate via a serial connection in the network? Yes or No
-
- If yes, specify which serial communication protocol: ________________
-
- If yes, mark on the network diagram what devices communicate with serial protocols, and where they are:
-
- *Add your network diagram with marked serial connection*
-
-1. **Quality of Service** - For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
-
- Business unit (BU): ________________
-
-1. **Sensor** - Specifications for site equipment
-
- The sensor appliance is connected to switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
-
- Provide address details for the sensor NIC that will be connected in the corporate network:
-
- | Item | Appliance 1 | Appliance 2 | Appliance 3 |
- |--|--|--|--|
- | Appliance IP address | | | |
- | Subnet | | | |
- | Default gateway | | | |
- | DNS | | | |
- | Host name | | | |
-
-1. **iDRAC/iLO/Server management**
-
- | Item | Appliance 1 | Appliance 2 | Appliance 3 |
- |--|--|--|--|
- | Appliance IP address | | | |
- | Subnet | | | |
- | Default gateway | | | |
- | DNS | | | |
-
-1. **On-premises management console**
-
- | Item | Active | Passive (when using HA) |
- |--|--|--|
- | IP address | | |
- | Subnet | | |
- | Default gateway | | |
- | DNS | | |
-
-1. **SNMP**
-
- | Item | Details |
- |--|--|
- | IP | |
- | IP address | |
- | Username | |
- | Password | |
- | Authentication type | MD5 or SHA |
- | Encryption | DES or AES |
- | Secret key | |
- | SNMP v2 community string |
-
-1. **On-premises management console SSL certificate**
-
- Are you planning to use an SSL certificate? Yes or No
-
- If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
-
-1. **SMTP authentication**
-
- Are you planning to use SMTP to forward alerts to an email server? Yes or No
-
- If yes, what authentication method will you use?
-
-1. **Active Directory or local users**
-
- Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
-
-1. IoT device types in the network
-
- | Device type | Number of devices in the network | Average bandwidth |
- | | | -- |
- | Camera | |
- | X-ray machine | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
- | | |
-
-## Next steps
-
-For more information, see:
--- [Quickstart: Get started with Defender for IoT](getting-started.md)-- [Best practices for planning your OT network monitoring](best-practices/plan-network-monitoring.md)-- [Prepare your network for Microsoft Defender for IoT](how-to-set-up-your-network.md)
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Webhook extended can be used to send extra data to the endpoint. The extended fe
### Unicode support for certificate passphrases
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md).
+Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [Prepare CA-signed certificates](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates).
## April 2021
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **22.3** | | | |
+| 22.3.8 | 04/2023 | Patch | 03/2024 |
| 22.3.7 | 03/2023 | Patch | 02/2024 | | 22.3.6 | 03/2023 | Patch | 02/2024 | | 22.3.5 | 01/2023 | Patch | 12/2023 |
To understand whether a feature is supported in your sensor version, check the r
## Versions 22.3.x
+### 22.3.8
+
+**Release date**: 04/2023
+
+**Supported until**: 03/2024
+
+- [Download WMI script from OT sensor console](detect-windows-endpoints-script.md#download-and-run-the-script)
+- [Automatically resolved notifications for operating system changes and device type changes](how-to-work-with-the-sensor-device-map.md#device-notification-responses)
+- [UI enhancements when uploading SSL/TLS certificates](how-to-deploy-certificates.md#deploy-a-certificate-on-an-ot-sensor)
+ ### 22.3.6 / 22.3.7 <a name=22.3.7></a>
Version 22.3.7 includes the same features as 22.3.6. If you have version 22.3.6
- [Merging](how-to-investigate-sensor-detections-in-a-device-inventory.md#merge-devices) and [deleting](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) devices on OT sensors now include confirmation messages when the action has completed - Support for [deleting multiple devices](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) on OT sensors - An enhanced [editing device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details) process on the OT sensor, using an **Edit** button in the toolbar at the top of the page-- [Enhanced UI on the OT sensor for uploading an SSL/TLS certificate](how-to-deploy-certificates.md#deploy-ssltls-certificates-on-ot-appliances)
+- [Enhanced UI on the OT sensor for uploading an SSL/TLS certificate](ot-deploy/activate-deploy-sensor.md#deploy-an-ssltls-certificate)
- [Activation files for locally managed sensors no longer expire](how-to-manage-individual-sensors.md#upload-a-new-activation-file) - Severity for all [**Suspicion of Malicious Activity**](alert-engine-messages.md#malware-engine-alerts) alerts is now **Critical** - [Allow internet connections on an OT network in bulk](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network) - ### 22.3.5 **Release date**: 01/2023
This version includes the following new updates and fixes:
- [New naming convention for hardware profiles](ot-appliance-sizing.md) - [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md) - [Bi-directional alert synch between OT sensors and the Azure portal](alerts.md#managing-ot-alerts-in-a-hybrid-environment)-- [Sensor connections restored after certificate rotation](how-to-deploy-certificates.md)
+- [Sensor connections restored after certificate rotation](ot-deploy/activate-deploy-sensor.md#deploy-an-ssltls-certificate)
- [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support) - [Improved security for uploading protocol plugins](resources-manage-proprietary-protocols.md) - [Sensor names shown in browser tabs](how-to-manage-individual-sensors.md) - [Site-based access control on the Azure portal](manage-users-portal.md#manage-site-based-access-control-public-preview) + ## Versions 22.1.x Software versions 22.1.x support direct updates to the latest OT monitoring software versions available. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
This version includes the following new updates and fixes:
- [New PCAP API](api/management-alert-apis.md#pcap-request-alert-pcap) - [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting) - [Support for Webhook extended to send data to endpoints](how-to-forward-alert-information-to-partners.md#webhook-extended)-- [Unicode support for certificate passphrases](how-to-deploy-certificates.md)
+- [Unicode support for certificate passphrases](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates)
## Next steps
defender-for-iot Configure Mirror Tap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-tap.md
- Title: Configure traffic mirroring with active or passive aggregation with terminal access points - Microsoft Defender for IoT
-description: This article describes traffic mirroring with active passive aggregation with terminal access points (TAP) for OT monitoring with Microsoft Defender for IoT.
Previously updated : 11/08/2022----
-# Configure traffic mirroring with active or passive aggregation (TAP)
-
-When using active or passive aggregation to mirror traffic, an active or passive aggregation terminal access point (TAP) is installed inline to the network cable. The TAP duplicates both *Receive* and *Transmit* traffic to the OT network sensor so that you can monitor the traffic with Defender for IoT.
-
-A TAP is a hardware device that allows network traffic to flow back and forth between ports without interruption. The TAP creates an exact copy of both sides of the traffic flow, continuously, without compromising network integrity.
-
-For example:
--
-Some TAPs aggregate both *Receive* and *Transmit*, depending on the switch configuration. If your switch doesn't support aggregation, each TAP uses two ports on your OT network sensor to monitor both *Receive* and *Transmit* traffic.
-
-## Advantages of mirroring traffic with a TAP
-
-We especially recommend TAPs when mirroring traffic for forensic purposes. Advantages of mirroring traffic with TAPs include:
--- TAPs are hardware-based and can't be compromised--- TAPs pass all traffic, even damaged messages that are often dropped by the switches--- TAPs aren't processor-sensitive, which means that packet timing is exact. In contrast, switches handle mirroring functionality as a low-priority task, which can affect the timing of the mirrored packets.-
-You can also use a TAP aggregator to monitor your traffic ports. However, TAP aggregators aren't processor-based, and aren't as intrinsically secure as hardware TAPs. TAP aggregators may not reflect exact packet timing.
-
-## Common TAP models
-
-The following TAP models have been tested for compatibility with Defender for IoT. Other vendors and models might also be compatible.
--- **Garland P1GCCAS**-
- When using a Garland TAP, make sure to set up your network to support aggregation. For more information, refer to the **Tap Aggregation** diagram under the **Network Diagrams** tab in the [Garland installation guide](https://www.garlandtechnology.com/products/aggregator-tap-copper).
--- **IXIA TPA2-CU3**-
- When using an Ixia TAP, make sure **Aggregation mode** is active. For more information, see the [Ixia install guide](https://support.ixiacom.com/sites/default/files/resources/install-guide/c_taps_zd-copper_qig_0303.pdf).
--- **US Robotics USR 4503**-
- When using a US Robotics TAP, make sure to toggle the aggregation mode on by setting the selectable switch to **AGG**. For more information, see the [US Robotics installation guide](https://www.usr.com/files/9814/7819/2756/4503-ig.pdf).
-
-## Next steps
-
-For more information, see:
--- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
To perform the procedures described in this article, make sure that you have:
For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of endpoints required to access the Azure portal.
- For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements) and [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+ For more information, see [Networking requirements](networking-requirements.md) and [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
## Update OT sensors
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## April 2023
+
+|Service area |Updates |
+|||
+| **Documentation** | [End-to-end deployment guides](#end-to-end-deployment-guides) |
+| **OT networks** | **Sensor version 22.3.8**: <br>- [Download WMI script from OT sensor console](#download-wmi-script-from-ot-sensor-console) <br>- [Automatically resolved OS notifications](#automatically-resolved-os-notifications) <br>- [UI enhancement when uploading SSL/TLS certificates](#ui-enhancement-when-uploading-ssltls-certificates) |
+
+### End-to-end deployment guides
+
+The Defender for IoT documentation now includes a new **Deploy** section, with a full set of deployment guides for the following scenarios:
+
+- [Standard deployment for OT monitoring](ot-deploy/ot-deploy-path.md)
+- [Air-gapped deployment for OT monitoring with an on-premises sensor management](ot-deploy/air-gapped-deploy.md)
+- [Enterprise IoT deployment](eiot-defender-for-endpoint.md)
+
+For example, the recommended deployment for OT monitoring includes the following steps, which are all detailed in our new articles:
++
+The step-by-step instructions in each section are intended to help customers optimize for success and deploy for Zero Trust. Navigational elements on each page, including flow charts at the top and **Next steps** links at the bottom, indicate where you are in the process, what you've just completed, and what your next step should be. For example:
++
+For more information, see [Deploy Defender for IoT for OT monitoring](ot-deploy/ot-deploy-path.md).
+
+### Download WMI script from OT sensor console
+
+The script used to configure OT sensors to detect Microsoft Windows workstations and servers is now available for download from the OT sensor itself.
+
+For more information, see [Download the script](detect-windows-endpoints-script.md#download-and-run-the-script).
+
+### Automatically resolved OS notifications
+
+After updating your OT sensor to version 22.3.8, no new device notifications for **Operating system changes** are generated. Existing **Operating system changes** notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days.
+
+For more information, see [Device notification responses](how-to-work-with-the-sensor-device-map.md#device-notification-responses).
+
+### UI enhancement when uploading SSL/TLS certificates
+
+The OT sensor version 22.3.8 has an enhanced **SSL/TLS Certificates** configuration page for defining your SSL/TLS certificate settings and deploying a CA-signed certificate.
+
+For more information, see [Manage SSL/TLS certificates](how-to-manage-individual-sensors.md#manage-ssltls-certificates).
+ ## March 2023 |Service area |Updates |
For more information, see [Device data retention periods](references-data-retent
The OT sensor version 22.3.6 has an enhanced **SSL/TLS Certificates** configuration page for defining your SSL/TLS certificate settings and deploying a CA-signed certificate.
-For more information, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md).
+For more information, see [Deploy an SSL/TLS certificate](ot-deploy/activate-deploy-sensor.md#deploy-an-ssltls-certificate).
### Activation files expiration updates
For more information, see:
### Sensor connections restored after certificate rotation
-Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
+Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your on-premises management console, and you don't need to reconnect them manually.
-For more information, see [About certificates](how-to-deploy-certificates.md).
+For more information, see [Prepare CA-signed certificates](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates) and [Deploy an SSL/TLS certificate](ot-deploy/activate-deploy-sensor.md#deploy-an-ssltls-certificate).
### Support diagnostic log enhancements (Public preview)
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
If you need to delete all models in an Azure Digital Twins instance at once, you
### Visualize models
-Once you have uploaded models into your Azure Digital Twins instance, you can use [Azure Digital Twins Explorer](http://explorer.digitaltwins.azure.net/) to view them. The explorer contains a list of all models in the instance, as well as a **model graph** that illustrates how they relate to each other, including any inheritance and model relationships.
+Once you have uploaded models into your Azure Digital Twins instance, you can use [Azure Digital Twins Explorer](https://explorer.digitaltwins.azure.net/) to view them. The explorer contains a list of all models in the instance, as well as a **model graph** that illustrates how they relate to each other, including any inheritance and model relationships.
Here's an example of what a model graph might look like:
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up
<hr width = 100%>
+## April 2023
+
+### Monitoring and investigating actions with Audit logs
+
+Knowing who is taking what action on which item is critical in helping organizations meet regulatory compliance and record management requirements. Azure Data Manager for Energy captures audit logs for data plane APIs of OSDU services and audit events listed [here](https://community.opengroup.org/osdu/documentation/-/wikis/Releases/R3.0/GCP/GCP-Operation/Logging/Audit-Logging-Status). Learn more about [audit logging in Azure Data Manager for Energy](how-to-manage-audit-logs.md).
+ ## February 2023 ### Compliant with M14 OSDU&trade; release Azure Data Manager for Energy Preview is now compliant with the M14 OSDU&trade; milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU&trade; M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes).
-### Product Billing Enabled
+### Product Billing enabled
Billing for Azure Data Manager for Energy Preview is enabled. During Preview, the price for each instance is based on a fixed per-hour consumption. [Pricing information for Azure Data Manager for Energy Preview.](https://azure.microsoft.com/pricing/details/energy-data-services/#pricing)
CORS provides a secure way to allow one origin (the origin domain) to call APIs
## January 2023
-### Managed Identity Support
+### Managed Identity support
You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy Preview. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy Preview. Now, you can use managed identity to connect to Azure Data Manager for Energy Preview using system or user assigned managed identity from other Azure services. [Learn more.](../energy-data-services/how-to-use-managed-identity.md)
-### Availability zone support
+### Availability Zone support
Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required by the customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services&regions=all)
Most operations, support, and troubleshooting performed by Microsoft personnel d
Azure Private Link on Azure Data Manager for Energy Preview provides private access to the service. With Azure Private Link, traffic between your private network and Azure Data Manager for Energy Preview travels over the Microsoft backbone network, therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint. You can limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy Preview](how-to-set-up-private-links.md).
-### Encryption at Rest using Customer Managed Keys
+### Encryption at rest using Customer Managed keys
Azure Data Manager for Energy Preview supports customer managed encryption keys (CMK). All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md).
governance Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/concepts/lifecycle.md
Title: Understand the lifecycle of a blueprint
description: Learn about the lifecycle that a blueprint definition goes through and details about each stage, including updating and removing blueprint assignments. Last updated 01/04/2023 -+ # Understand the lifecycle of an Azure Blueprint
governance Agent Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/agent-notes.md
- Title: Azure Automanage machine configuration agent release notes
-description: Details guest configuration agent release notes, issues, and frequently asked questions.
Previously updated : 09/13/2022--
-# Azure Automanage machine configuration agent release notes
--
-## About the guest configuration agent
-
-The guest configuration agent receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
--- The latest releases-- Known issues-- Bug fixes-
-For information on release notes for the connected machine agent, please see [What's new with the connected machine agent](../../azure-arc/servers/agent-release-notes.md).
-
-## Release notes
-
-### Version 1.29.48 - January 2023
-
-#### New Features
--- In this release we have added support for Linux distributions such as Red Hat Enterprise Linux (RHEL) 9, Mariner 1&2, Alma 9, and Rocky 9. -
-#### Fixed
--- Reliability improvements were made to the guest configuration policy engine--
-### Guest Configuration Linux Extension version 1.26.38
-
-In this release, various improvements were made.
-
-- You can now restrict which URLs can be used to download machine configuration packages by setting the allowedGuestConfigPkgUrls tag on the server resource and providing a comma-separated list of URL patterns to allow. If the tag exists, the agent will only allow custom packages to be downloaded from the specified URLs. Built-in packages are unaffected by this feature.
-
-## Fixed
-
-- Resolves local elevation of privilege vulnerability [CVE-2022-38007](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007).
-- If you're currently running an older version of the AzurePolicyforLinux extension, use the PowerShell or Azure CLI commands below to update your extension to the latest version.
-
-```powershell
-Set-AzVMExtension -Publisher 'Microsoft.GuestConfiguration' -Type 'ConfigurationforLinux' -Name 'AzurePolicyforLinux' -TypeHandlerVersion 1.26.38 -ResourceGroupName 'myResourceGroup' -Location 'myLocation' -VMName 'myVM' -EnableAutomaticUpgrade $true
-```
-
-```azurecli
-az vm extension set --publisher Microsoft.GuestConfiguration --name ConfigurationforLinux --extension-instance-name AzurePolicyforLinux --resource-group myResourceGroup --vm-name myVM --version 1.26.38 --enable-auto-upgrade true
-```
-
-## Next steps
-
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/agent-release-notes.md
+
+ Title: Azure Automanage machine configuration agent release notes
+description: Details guest configuration agent release notes, issues, and frequently asked questions.
Last updated : 04/18/2023++
+# Azure Automanage machine configuration agent release notes
++
+## About the machine configuration agent
+
+The machine configuration agent receives improvements on an ongoing basis. To stay up to date with
+the most recent developments, this article provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+
+For information on release notes for the connected machine agent, see
+[What's new with the connected machine agent][01].
+
+## Release notes
+
+### Version 1.29.48 - January 2023
+
+#### New Features
+
+- In this release, we've added support for Linux distributions such as Red Hat Enterprise Linux
+ (RHEL) 9, Mariner 1&2, Alma 9, and Rocky 9.
+
+#### Fixed
+
+- Reliability improvements were made to the guest configuration policy engine
++
+### Guest Configuration Linux Extension version 1.26.38
+
+In this release, various improvements were made.
+
+- You can now restrict which URLs can be used to download machine configuration packages by setting
+ the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of
+ URL patterns to allow. If the tag exists, the agent only allows custom packages to be
+ downloaded from the specified URLs. Built-in packages are unaffected by this feature.
+
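+For example, a minimal sketch of setting the tag with Azure PowerShell might look like the
+following; the resource ID and URL pattern are placeholders, and `Update-AzTag` with
+`-Operation Merge` preserves any existing tags.
+
+```azurepowershell-interactive
+# A minimal sketch, assuming an Arc-enabled server resource; the ID and URL pattern are placeholders.
+$tagParams = @{
+    ResourceId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HybridCompute/machines/<machine-name>'
+    Tag        = @{ allowedGuestConfigPkgUrls = 'https://<storage-account>.blob.core.windows.net/*' }
+    Operation  = 'Merge'
+}
+Update-AzTag @tagParams
+```
+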
+#### Fixed
+
+- Resolves local elevation of privilege vulnerability [CVE-2022-38007][03].
+- If you're currently running an older version of the AzurePolicyforLinux extension, use the
+ PowerShell or Azure CLI commands in the following examples to update your extension to the latest
+ version.
+
+```azurepowershell-interactive
+$params = @{
+ Publisher = 'Microsoft.GuestConfiguration'
+ Type = 'ConfigurationforLinux'
+ Name = 'AzurePolicyforLinux'
+ TypeHandlerVersion = '1.26.38'
+ ResourceGroupName = '<resource-group>'
+ Location = '<location>'
+ VMName = '<vm-name>'
+ EnableAutomaticUpgrade = $true
+}
+Set-AzVMExtension @params
+```
+
+```azurecli
+az vm extension set \
+ --publisher Microsoft.GuestConfiguration \
+ --name ConfigurationforLinux \
+ --extension-instance-name AzurePolicyforLinux \
+ --resource-group <resource-group> \
+ --vm-name <vm-name> \
+ --version 1.26.38 \
+ --enable-auto-upgrade true
+```
+
+## Next steps
+
+- Set up a custom machine configuration package [development environment][04].
+- [Create a package artifact][05] for machine configuration.
+- [Test the package artifact][06] from your development environment.
+- Use the `GuestConfiguration` module to [create an Azure Policy definition][07] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][08] using Azure portal.
+- Learn how to view [compliance details for machine configuration][09] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ../../azure-arc/servers/agent-release-notes.md
+[03]: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007
+[04]: ./how-to-set-up-authoring-environment.md
+[05]: ./how-to-create-package.md
+[06]: ./how-to-test-package.md
+[07]: ./how-to-create-policy-definition.md
+[08]: ../policy/assign-policy-portal.md
+[09]: ../policy/how-to/determine-non-compliance.md
governance Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/assignments.md
+
+ Title: Understand machine configuration assignment resources
+description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines.
Last updated : 04/18/2023++
+# Understand machine configuration assignment resources
++
+When an Azure Policy definition in the category `Guest Configuration` is assigned, metadata is
+included that describes a guest assignment.
+
+[A video walk-through of this document is available][01].
+
+You can think of a guest assignment as a link between a machine and an Azure Policy scenario. For
+example, the following snippet associates the Azure Windows Baseline configuration with minimum
+version `1.0.0` to any machines in scope of the policy.
+
+```json
+"metadata": {
+ "category": "Guest Configuration",
+ "guestConfiguration": {
+ "name": "AzureWindowsBaseline",
+ "version": "1.*"
+ }
+ //additional metadata properties exist
+}
+```
+
+## How Azure Policy uses machine configuration assignments
+
+The machine configuration service uses the metadata information to automatically create an audit
+resource for definitions with either `AuditIfNotExists` or `DeployIfNotExists` policy effects. The
+resource type is `Microsoft.GuestConfiguration/guestConfigurationAssignments`. Azure Policy uses
+the **complianceStatus** property of the guest assignment resource to report compliance status. For
+more information, see [getting compliance data][02].
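+
+As a sketch of how that status can be read from the client side, the **complianceStatus** property
+is visible on the guest assignment resource; the resource group and VM names below are
+placeholders.
+
+```azurepowershell-interactive
+# A minimal sketch, assuming an Azure VM; replace the placeholder names.
+$resourceDetails = @{
+    ResourceGroupName = '<resource-group-name>'
+    ResourceType      = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments'
+    ResourceName      = '<vm-name>/Microsoft.GuestConfiguration'
+    ApiVersion        = '2020-06-25'
+    ExpandProperties  = $true
+}
+Get-AzResource @resourceDetails |
+    Select-Object -Property Name, @{ Name = 'complianceStatus'; Expression = { $_.Properties.complianceStatus } }
+```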
+
+### Deletion of guest assignments from Azure Policy
+
+When an Azure Policy assignment is deleted, if the policy created a machine configuration
+assignment, the machine configuration assignment is also deleted.
+
+If any machine configuration assignments created by the policy remain after the policy assignment
+is deleted, delete them manually. You can do so by navigating to the guest assignments page in the
+Azure portal and deleting the assignment there.
+
+## Manually creating machine configuration assignments
+
+You can create guest assignment resources in Azure Resource Manager by using Azure Policy or any
+client SDK.
+
+An example deployment template:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "apiVersion": "2021-01-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "myMachine/Microsoft.GuestConfiguration/myConfig",
+ "location": "westus2",
+ "properties": {
+ "guestConfiguration": {
+ "name": "myConfig",
+ "contentUri": "https://mystorageaccount.blob.core.windows.net/mystoragecontainer/myConfig.zip?sv=SASTOKEN",
+ "contentHash": "SHA256HASH",
+ "version": "1.0.0",
+ "assignmentType": "ApplyAndMonitor",
+ "configurationParameter": {}
+ }
+ }
+ }
+ ]
+}
+```
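+
+As a sketch, you could save the template to a file and deploy it with
+`New-AzResourceGroupDeployment`; the resource group name and file path are placeholders.
+
+```azurepowershell-interactive
+# A minimal sketch: deploy the example template after saving it as guest-assignment.json.
+New-AzResourceGroupDeployment -ResourceGroupName '<resource-group-name>' -TemplateFile './guest-assignment.json'
+```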
+
+The following table describes each property of guest assignment resources.
+
+| Property | Description |
+| -- | |
+| **name** | Name of the configuration inside the content package MOF file. |
+| **contentUri** | HTTPS URI path to the content package (`.zip`). |
+| **contentHash** | A SHA256 hash value of the content package, used to verify it hasn't changed. |
+| **version** | Version of the content package. Only used for built-in packages and not used for custom content packages. |
+| **assignmentType** | Behavior of the assignment. Allowed values: `Audit`, `ApplyAndMonitor`, and `ApplyAndAutoCorrect`. |
+| **configurationParameter** | List of DSC resource type, name, and value in the content package MOF file to be overridden after it's downloaded in the machine. |
+
+### Deletion of manually created machine configuration assignments
+
+You must manually delete machine configuration assignments created through any manual approach
+(such as an Azure Resource Manager template deployment). Deleting the parent resource (virtual
+machine or Arc-enabled machine) also deletes the machine configuration assignment.
+
+To manually delete a machine configuration assignment, use the following example. Make sure to
+replace all example strings, indicated by `<>` brackets.
+
+```azurepowershell-interactive
+# First get details about the machine configuration assignment
+$resourceDetails = @{
+ ResourceGroupName = '<resource-group-name>'
+ ResourceType = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments/'
+ ResourceName = '<vm-name>/Microsoft.GuestConfiguration'
+ ApiVersion = '2020-06-25'
+}
+$guestAssignment = Get-AzResource @resourceDetails
+
+# Review details of the machine configuration assignment
+$guestAssignment
+
+# After reviewing properties of $guestAssignment to confirm
+$guestAssignment | Remove-AzResource
+```
+
+## Next steps
+
+- Read the [machine configuration overview][03].
+- Set up a custom machine configuration package [development environment][04].
+- [Create a package artifact][05] for machine configuration.
+- [Test the package artifact][06] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][07] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][08] using Azure portal.
+- Learn how to view [compliance details for machine configuration][09] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: https://youtu.be/DmCphySEB7A
+[02]: ../policy/how-to/get-compliance-data.md
+[03]: ./overview.md
+[04]: ./how-to-set-up-authoring-environment.md
+[05]: ./how-to-create-package.md
+[06]: ./how-to-test-package.md
+[07]: ./how-to-create-policy-definition.md
+[08]: ../policy/assign-policy-portal.md
+[09]: ../policy/how-to/determine-non-compliance.md
governance Dsc In Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/dsc-in-machine-configuration.md
+
+ Title: Changes to behavior in PowerShell Desired State Configuration for machine configuration
+description: This article describes the platform used to deliver configuration changes to machines through Azure Policy.
Last updated : 04/18/2023+++
+# Changes to behavior in PowerShell Desired State Configuration for machine configuration
++
+Before you begin, it's a good idea to read the overview of [machine configuration][01].
+
+[A video walk-through of this document is available][02].
+
+Machine configuration uses [Desired State Configuration (DSC)][03] version 3 to audit and configure
+machines. The DSC configuration defines the state that the machine should be in. There are many
+notable differences in how DSC is implemented in machine configuration.
+
+## Machine configuration uses PowerShell 7 cross platform
+
+Machine configuration is designed so the experience of managing Windows and Linux can be
+consistent. Across both operating system environments, someone with PowerShell DSC knowledge can
+create and publish configurations using scripting skills.
+
+Machine configuration only uses PowerShell DSC version 3 and doesn't rely on the previous
+implementation of [DSC for Linux][04] or the `nx*` providers included in that repository.
+
+As of version 1.29.33, machine configuration operates in PowerShell 7.1.2 for Windows and
+PowerShell 7.2 preview 6 for Linux. Starting with version 7.2, the **PSDesiredStateConfiguration**
+module moved from being part of the PowerShell installation and is instead installed as a
+[module from the PowerShell Gallery][05].
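+
+For example, in a PowerShell 7 session you could install the module from the gallery. The versions
+below are only illustrative; see the authoring environment article for the currently supported
+versions.
+
+```azurepowershell-interactive
+# A minimal sketch: install the module from the PowerShell Gallery.
+# Version 2.0.5 is the stable release used for Windows content; add -AllowPrerelease
+# to get the 3.0.0 preview used for Linux content.
+Install-Module -Name PSDesiredStateConfiguration -RequiredVersion 2.0.5
+```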
+
+## Multiple configurations
+
+Machine configuration supports assigning multiple configurations to the same machine. There are no
+special steps required within the operating system or the machine configuration extension. There's no
+need to configure [partial configurations][06].
+
+## Dependencies are managed per-configuration
+
+When a configuration is [packaged using the available tools][07], the required dependencies for the
+configuration are included in a `.zip` file. Machines extract the contents into a unique folder for
+each configuration. The agent delivered by the machine configuration extension creates a dedicated
+PowerShell session for each configuration. It uses a `$Env:PSModulePath` that limits automatic
+module loading to only the path where the package was extracted.
+
+This change has multiple benefits:
+
+- It's possible to use different module versions for each configuration, on the same machine.
+- When a configuration is no longer needed on a machine, the agent safely deletes the entire folder
+ where the configuration was extracted. You don't need to manage shared dependencies across
+ configurations.
+- It's not required to manage multiple versions of any module in a central service.
+
+## Artifacts are managed as packages
+
+The Azure Automation State Configuration feature includes artifact management for modules and
+configuration scripts. Once both are published to the service, the script can be compiled to MOF
+format. Similarly, the Windows Pull Server also required managing configurations and modules at the
+web service instance. By contrast, the DSC extension has a simplified model where all artifacts are
+packaged together and stored in a location accessible from the target machine using an HTTPS
+request. Azure Blob Storage is a popular option for hosting the artifacts.
+
+Machine configuration only uses the simplified model where all artifacts are packaged together and
+accessed from the target machine over HTTPS. There's no need to publish modules, scripts, or
+compile in the service. One change is that the package should always include a compiled MOF. It
+isn't possible to include a script file in the package and compile on the target machine.
+
+## Maximum size of custom configuration package
+
+In Azure Automation State Configuration, DSC configurations were [limited in size][08]. Machine
+configuration supports a total package size of 100 MB before compression. There's no specific
+limit on the size of the MOF file within the package.
+
+## Configuration mode is set in the package artifact
+
+When you create the configuration package, the mode is set using the following options:
+
+- `Audit` - Verifies the compliance of a machine. No changes are made.
+- `AuditandSet` - Verifies and remediates the compliance state of the machine. Changes are made if
+ the machine isn't compliant.
+
+The mode is set in the package rather than in the [Local Configuration Manager][09] service because
+each configuration may be applied with a different mode.
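+
+For example, the mode corresponds to the **Type** parameter of the `New-GuestConfigurationPackage`
+cmdlet; a minimal sketch (the file paths are placeholders):
+
+```azurepowershell-interactive
+# A minimal sketch: the package mode is chosen with the Type parameter at packaging time.
+New-GuestConfigurationPackage -Name 'MyConfig' -Configuration './Config/MyConfig.mof' -Type 'AuditAndSet' -Force
+```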
+
+## Parameter support through Azure Resource Manager
+
+Parameters set by the **configurationParameter** property array in
+[machine configuration assignments][10] overwrite the static text within a configuration MOF file
+when the file is stored on a machine. Parameters enable customization and an operator to control
+changes from the service API without needing to run commands within the machine.
+
+Parameters in Azure Policy that pass values to machine configuration assignments must be **string**
+type. It isn't possible to pass arrays through parameters, even if the DSC resource supports
+arrays.
+
+## Trigger Set from outside machine
+
+A challenge in previous versions of DSC has been correcting drift at scale without much custom code
+and reliance on WinRM remote connections. Guest configuration solves this problem. Users of machine
+configuration have control over drift correction through [Remediation On Demand][11].
+
+## Sequence includes Get method
+
+When machine configuration audits or configures a machine the same sequence of events is used for
+both Windows and Linux. The notable change in behavior is that the `Get` method is called by the
+service to return details about the state of the machine.
+
+1. The agent first runs `Test` to determine whether the configuration is in the correct state.
+1. If the package is set to `Audit`, the boolean value returned by `Test` determines if the
+ Azure Resource Manager status for the Guest Assignment should be `Compliant` or `NonCompliant`.
+1. If the package is set to `AuditandSet`, the boolean value determines whether to remediate the
+ machine by applying the configuration using the `Set` method. If the `Test` method returns
+ `$false`, `Set` is run. If `Test` returns `$true`, then `Set` isn't run.
+1. Last, the provider runs `Get` to return the current state of each setting so details are
+ available both about why a machine isn't compliant and to confirm that the current state is
+ compliant.
+
+## Special requirements for Get
+
+The DSC `Get` method has special requirements for machine configuration that haven't been needed
+for DSC.
+
+- The hash table that's returned should include a property named **Reasons**.
+- The **Reasons** property must be an array.
+- Each item in the array should be a hash table with keys named **Code** and **Phrase**.
+- No values other than the hash table should be returned.
+
+The **Reasons** property is used by the service to standardize how compliance information is
+presented. You can think of each item in **Reasons** as a message about how the resource is or
+isn't compliant. The property is an array because a resource could be out of compliance for more
+than one reason.
+
+The properties **Code** and **Phrase** are expected by the service. When authoring a custom
+resource, set the text you would like to show as the reason the resource isn't compliant as the
+value for **Phrase**. **Code** has specific formatting requirements so reporting can clearly
+display information about the resource used to do the audit. This solution makes guest
+configuration extensible. Any command could be run as long as the output can be returned as a
+string value for the **Phrase** property.
+
+- **Code** (string): The name of the resource, repeated, and then a short name with no spaces as an
+ identifier for the reason. These three values should be colon-delimited with no spaces.
+ - An example would be `registry:registry:keynotpresent`
+- **Phrase** (string): Human-readable text to explain why the setting isn't compliant.
+ - An example would be `The registry key $key isn't present on the machine.`
+
+```powershell
+$reasons = @()
+$reasons += @{
+    Code = 'Name:Name:ReasonIdentifier'
+ Phrase = 'Explain why the setting is not compliant'
+}
+return @{
+ reasons = $reasons
+}
+```
+
+When using command-line tools to get information that returns in `Get`, you might find the tool
+returns output you didn't expect. Even though you capture the output in PowerShell, output might
+also have been written to standard error. To avoid this issue, consider redirecting output to null.
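+
+As a sketch, a native command called inside `Get` can have its error stream discarded so only the
+hash table is returned; the tool name below is a placeholder.
+
+```powershell
+# A minimal sketch: capture standard output and discard standard error.
+# 'some-tool' is a placeholder for the command-line tool you call.
+$toolOutput = & 'some-tool' '--status' 2> $null
+```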
+
+### The Reasons property embedded class
+
+In script-based resources (Windows only), the **Reasons** class is included in the schema MOF file
+as follows.
+
+```mof
+[ClassVersion("1.0.0.0")]
+class Reason
+{
+ [Read] String Phrase;
+ [Read] String Code;
+};
+
+[ClassVersion("1.0.0.0"), FriendlyName("ResourceName")]
+class ResourceName : OMI_BaseResource
+{
+ [Key, Description("Example description")] String Example;
+ [Read, EmbeddedInstance("Reason")] String Reasons[];
+};
+```
+
+In class-based resources (Windows and Linux), the **Reason** class is included in the PowerShell
+module as follows. Linux is case-sensitive, so the `C` in `Code` and `P` in `Phrase` must be
+capitalized.
+
+```powershell
+enum ensure {
+ Absent
+ Present
+}
+
+class Reason {
+ [DscProperty()]
+ [string] $Code
+
+ [DscProperty()]
+ [string] $Phrase
+}
+
+[DscResource()]
+class Example {
+
+ [DscProperty(Key)]
+ [ensure] $ensure
+
+ [DscProperty()]
+ [Reason[]] $Reasons
+
+ [Example] Get() {
+        # return the current state
+ }
+
+ [void] Set() {
+ # set the state
+ }
+
+ [bool] Test() {
+ # check whether state is correct
+ }
+}
+
+```
+
+If the resource has required properties, those properties should also be returned by `Get` in
+parallel with the **Reason** class. If **Reason** isn't included, the service includes a
+"catch-all" behavior that compares the values input to `Get` and the values returned by `Get`, and
+provides a detailed comparison as **Reason**.
+
+## Configuration names
+
+The name of the custom configuration must be consistent everywhere. These items must have the same
+name:
+
+- The `.zip` file for the content package
+- The configuration name in the MOF file
+- The machine configuration assignment name in the Azure Resource Manager template
+
+## Running commands in Windows PowerShell
+
+You can run Windows PowerShell modules from your DSC resources by using the following pattern. The
+pattern temporarily sets `PSModulePath` to run Windows PowerShell instead of PowerShell so that it
+can discover required modules that are only available in Windows PowerShell. This sample is a
+snippet adapted from the DSC resource used in the [Secure Web Server][12] built-in DSC resource.
+
+This pattern temporarily sets the PowerShell execution path to run from Windows PowerShell and
+discovers the required cmdlet, which in this case is `Get-WindowsFeature`. The output of the
+command is returned and then standardized for compatibility requirements. Once the cmdlet has been
+executed, `$env:PSModulePath` is set back to the original path.
+
+```powershell
+# The Get-WindowsFeature cmdlet needs to be run through Windows PowerShell
+# rather than through PowerShell, which is what the Policy engine runs.
+$null = Invoke-Command -ScriptBlock {
+ param ([string]$FileName)
+
+ $InitialPSModulePath = $env:PSModulePath
+ $WindowsPSFolder = "$env:SystemRoot\System32\WindowsPowershell\v1.0"
+ $WindowsPSExe = "$WindowsPSFolder\powershell.exe"
+ $WindowsPSModuleFolder = "$WindowsPSFolder\Modules"
+ $GetFeatureScriptBlock = {
+ param([string]$FileName)
+
+ if (Get-Command -Name Get-WindowsFeature -ErrorAction SilentlyContinue) {
+ Get-WindowsFeature -Name Web-Server |
+ ConvertTo-Json |
+ Out-File $FileName
+ } else {
+ Add-Content -Path $FileName -Value 'NotServer'
+ }
+ }
+
+ try {
+ # Set env variable to include Windows Powershell modules so we can find
+ # the Get-WindowsFeature cmdlet.
+ $env:PSModulePath = $WindowsPSModuleFolder
+ # Call Windows PowerShell to get the info about the Web-Server feature
+        & $WindowsPSExe -command $GetFeatureScriptBlock -args $FileName
+ } finally {
+ # Reset the env variable even if there's an error.
+ $env:PSModulePath = $InitialPSModulePath
+ }
+}
+```
+
+## Common DSC features not available during machine configuration public preview
+
+During public preview, machine configuration doesn't support
+[specifying cross-machine dependencies][13] using `WaitFor*` resources. It isn't possible for one
+machine to watch and wait for another machine to reach a state before progressing.
+
+[Reboot handling][14] isn't available in the public preview release of machine configuration, and
+the `$global:DSCMachineStatus` variable isn't available. Configurations aren't able to reboot a
+node during or at the end of a configuration.
+
+## Known compatibility issues with supported modules
+
+The **PsDscResources** module in the PowerShell Gallery and the **PSDesiredStateConfiguration**
+module that ships with Windows are supported by Microsoft and have been a commonly used set of
+resources for DSC. Until the **PSDscResources** module is updated for DSCv3, be aware of the
+following known compatibility issues.
+
+- Don't use resources from the **PSDesiredStateConfiguration** module that ships with Windows.
+ Instead, switch to **PSDscResources**.
+- Don't use the `WindowsFeature`, `WindowsFeatureSet`, `WindowsOptionalFeature`, and
+ `WindowsOptionalFeatureSet` resources in **PsDscResources**. There's a known issue loading the
+ **DISM** module in PowerShell 7.1.3 on Windows Server that requires an update.
+
+The `nx*` resources for Linux that were included in the [DSC for Linux][15] repository were written
+in a combination of the languages C and Python. Because the path forward for DSC on Linux is to use
+PowerShell, the existing `nx*` resources aren't compatible with DSCv3. Until a new module
+containing supported resources for Linux is available, you need to author custom resources.
+
+## Coexistence with DSC version 3 and previous versions
+
+DSC version 3 in machine configuration can coexist with older versions installed in [Windows][16]
+and [Linux][17]. The implementations are separate. However, there's no conflict detection across
+DSC versions, so don't try to manage the same settings.
+
+## Next steps
+
+- Read the [machine configuration overview][01].
+- Set up a custom machine configuration package [development environment][18].
+- [Create a package artifact][07] for machine configuration.
+- [Test the package artifact][19] from your development environment.
+- Use the `GuestConfiguration` module to [create an Azure Policy definition][20] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][21] using Azure portal.
+- Learn how to view [compliance details for machine configuration][22] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: https://youtu.be/nYd55FiKpgs
+[03]: /powershell/dsc/overview
+[04]: https://github.com/Microsoft/PowerShell-DSC-for-Linux
+[05]: https://www.powershellgallery.com/packages/PSDesiredStateConfiguration
+[06]: /powershell/dsc/pull-server/partialConfigs
+[07]: ./how-to-create-package.md
+[08]: ../../automation/automation-dsc-compile.md#compile-your-dsc-configuration-in-windows-powershell
+[09]: /powershell/dsc/managing-nodes/metaConfig#basic-settings
+[10]: assignments.md
+[11]: ./remediation-options.md#remediation-on-demand-applyandmonitor
+[12]: https://github.com/Azure/azure-policy/blob/master/samples/GuestConfiguration/package-samples/resource-modules/SecureProtocolWebServer/DSCResources/SecureWebServer/SecureWebServer.psm1#L253
+[13]: /powershell/dsc/configurations/crossnodedependencies
+[14]: /powershell/dsc/configurations/reboot-a-node
+[15]: https://github.com/microsoft/PowerShell-DSC-for-Linux/tree/master/Providers
+[16]: /powershell/dsc/getting-started/wingettingstarted
+[17]: /powershell/dsc/getting-started/lnxgettingstarted
+[18]: ./how-to-set-up-authoring-environment.md
+[19]: ./how-to-test-package.md
+[20]: ./how-to-create-policy-definition.md
+[21]: ../policy/assign-policy-portal.md
+[22]: ../policy/how-to/determine-non-compliance.md
governance How To Create Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-assignment.md
+
+ Title: How to create a machine configuration assignment using templates
+description: Learn how to deploy configurations to machines directly from Azure Resource Manager.
Last updated : 04/18/2023+++
+# How to create a machine configuration assignment using templates
++
+The best way to [assign machine configuration packages][01] to multiple machines is using
+[Azure Policy][02]. You can also assign machine configuration packages to a single machine.
+
+## Built-in and custom configurations
+
+To assign a machine configuration package to a single machine, modify the following examples. There
+are two scenarios.
+
+- Apply a custom configuration to a machine using a link to a package that you [published][03].
+- Apply a [built-in][04] configuration to a machine, such as an Azure baseline.
+
+## Extending other resource types, such as Arc-enabled servers
+
+In each of the following sections, the example includes a **type** property where the name starts
+with `Microsoft.Compute/virtualMachines`. The guest configuration resource provider
+`Microsoft.GuestConfiguration` is an [extension resource][05] that must reference a parent type.
+
+To modify the example for other resource types such as [Arc-enabled servers][06], change the parent
+type to the name of the resource provider. For Arc-enabled servers, the resource provider is
+`Microsoft.HybridCompute/machines`.
+
+Replace the following "<>" fields with values specific to your environment:
+
+- `<vm_name>`: Specify the name of the machine resource to apply the configuration on.
+- `<configuration_name>`: Specify the name of the configuration to apply.
+- `<vm_location>`: Specify the Azure region to create the machine configuration assignment in.
+- `<Url_to_Package.zip>`: Specify an HTTPS link to the `.zip` file for your custom content package.
+- `<SHA256_hash_of_package.zip>`: Specify the SHA256 hash of the `.zip` file for your custom
+ content package.
+
+## Assign a configuration using an Azure Resource Manager template
+
+You can deploy an [Azure Resource Manager template][07] containing machine configuration assignment
+resources.
+
+The following example assigns a custom configuration.
+
+```json
+{
+ "apiVersion": "2020-06-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
+ "location": "<vm_location>",
+ "dependsOn": [
+ "Microsoft.Compute/virtualMachines/<vm_name>"
+ ],
+ "properties": {
+ "guestConfiguration": {
+ "name": "<configuration_name>",
+ "contentUri": "<Url_to_Package.zip>",
+ "contentHash": "<SHA256_hash_of_package.zip>",
+ "assignmentType": "ApplyAndMonitor"
+ }
+ }
+}
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```json
+{
+ "apiVersion": "2020-06-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
+ "location": "<vm_location>",
+ "dependsOn": [
+ "Microsoft.Compute/virtualMachines/<vm_name>"
+ ],
+ "properties": {
+ "guestConfiguration": {
+ "name": "AzureWindowsBaseline",
+ "version": "1.*",
+ "assignmentType": "ApplyAndMonitor",
+ "configurationParameter": [
+ {
+ "name": "Minimum Password Length;ExpectedValue",
+ "value": "16"
+ },
+ {
+ "name": "Minimum Password Length;RemediateValue",
+ "value": "16"
+ },
+ {
+ "name": "Maximum Password Age;ExpectedValue",
+ "value": "75"
+ },
+ {
+ "name": "Maximum Password Age;RemediateValue",
+ "value": "75"
+ }
+ ]
+ }
+ }
+}
+```
+
+## Assign a configuration using Bicep
+
+You can use [Azure Bicep][08] to deploy machine configuration assignments.
+
+The following example assigns a custom configuration.
+
+```Bicep
+resource myVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
+ name: '<vm_name>'
+}
+
+resource myConfiguration 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
+ name: '<configuration_name>'
+ scope: myVM
+ location: resourceGroup().location
+ properties: {
+ guestConfiguration: {
+ name: '<configuration_name>'
+ contentUri: '<Url_to_Package.zip>'
+ contentHash: '<SHA256_hash_of_package.zip>'
+ version: '1.*'
+ assignmentType: 'ApplyAndMonitor'
+ }
+ }
+}
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```Bicep
+resource myWindowsVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
+ name: '<vm_name>'
+}
+
+resource AzureWindowsBaseline 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
+ name: 'AzureWindowsBaseline'
+ scope: myWindowsVM
+ location: resourceGroup().location
+ properties: {
+ guestConfiguration: {
+ name: 'AzureWindowsBaseline'
+ version: '1.*'
+ assignmentType: 'ApplyAndMonitor'
+ configurationParameter: [
+ {
+ name: 'Minimum Password Length;ExpectedValue'
+ value: '16'
+ }
+ {
+ name: 'Minimum Password Length;RemediateValue'
+ value: '16'
+ }
+ {
+ name: 'Maximum Password Age;ExpectedValue'
+ value: '75'
+ }
+ {
+ name: 'Maximum Password Age;RemediateValue'
+ value: '75'
+ }
+ ]
+ }
+ }
+}
+```
+
+## Assign a configuration using Terraform
+
+You can use [Terraform][09] to [deploy][10] machine configuration assignments.
+
+> [!IMPORTANT]
+> The Terraform provider [azurerm_policy_virtual_machine_configuration_assignment][11] hasn't been
+> updated to support the **assignmentType** property so only configurations that perform audits are
+> supported.
+
+The following example assigns a custom configuration.
+
+```Terraform
+resource "azurerm_virtual_machine_configuration_policy_assignment" "<configuration_name>" {
+ name = "<configuration_name>"
+ location = azurerm_windows_virtual_machine.example.location
+ virtual_machine_id = azurerm_windows_virtual_machine.example.id
+ configuration {
+ name = "<configuration_name>"
+    content_uri     = "<Url_to_Package.zip>"
+    content_hash    = "<SHA256_hash_of_package.zip>"
+    version         = "1.*"
+    assignment_type = "ApplyAndMonitor"
+ }
+}
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```Terraform
+resource "azurerm_virtual_machine_configuration_policy_assignment" "AzureWindowsBaseline" {
+ name = "AzureWindowsBaseline"
+ location = azurerm_windows_virtual_machine.example.location
+ virtual_machine_id = azurerm_windows_virtual_machine.example.id
+ configuration {
+ name = "AzureWindowsBaseline"
+ version = "1.*"
+ parameter {
+ name = "Minimum Password Length;ExpectedValue"
+ value = "16"
+ }
+ parameter {
+ name = "Minimum Password Length;RemediateValue"
+ value = "16"
+ }
+    parameter {
+      name = "Maximum Password Age;ExpectedValue"
+      value = "75"
+    }
+    parameter {
+      name = "Maximum Password Age;RemediateValue"
+      value = "75"
+ }
+ }
+}
+```
+
+## Next steps
+
+- Read the [machine configuration overview][12].
+- Set up a custom machine configuration package [development environment][13].
+- [Create a package artifact][14] for machine configuration.
+- [Test the package artifact][15] from your development environment.
+- [Publish the package artifact][03] so it's accessible to your machines.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][02] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][16] using Azure portal.
+
+<!-- Reference link definitions -->
+[01]: ./assignments.md
+[02]: ./how-to-create-policy-definition.md
+[03]: ./how-to-publish-package.md
+[04]: ../policy/samples/built-in-packages.md
+[05]: ../../azure-resource-manager/management/extension-resource-types.md
+[06]: ../../azure-arc/servers/overview.md
+[07]: ../../azure-resource-manager/templates/deployment-tutorial-local-template.md?tabs=azure-powershell
+[08]: ../../azure-resource-manager/bicep/overview.md
+[09]: https://www.terraform.io/
+[10]: /azure/developer/terraform/get-started-windows-powershell
+[11]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_configuration_policy_assignment
+[12]: ./overview.md
+[13]: ./how-to-set-up-authoring-environment.md
+[14]: ./how-to-create-package.md
+[15]: ./how-to-test-package.md
+[16]: ../policy/assign-policy-portal.md
governance How To Create Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-package.md
+
+ Title: How to create custom machine configuration package artifacts
+description: Learn how to create a machine configuration package file.
Last updated : 04/18/2023++
+# How to create custom machine configuration package artifacts
++
+Before you begin, it's a good idea to read the overview page for [machine configuration][01].
+
+Machine configuration uses [Desired State Configuration][02] (DSC) when auditing and configuring
+both Windows and Linux. The DSC configuration defines the condition that the machine should be in.
+
+> [!IMPORTANT]
+> Custom packages that audit the state of an environment and apply configurations are in Generally
+> Available (GA) support status. However, the following limitations apply:
+>
+> To use machine configuration packages that apply configurations, Azure VM guest configuration
+> extension version 1.29.24 or later, or Arc agent 1.10.0 or later, is required.
+>
+> The **GuestConfiguration** module is only available on Ubuntu 18. However, the package and
+> policies produced by the module can be used on any Linux distribution and version supported in
+> Azure or Arc.
+>
+> Testing packages on macOS isn't available.
+>
+> Don't use secrets or confidential information in custom content packages.
+
+Use the following steps to create your own configuration for managing the state of an Azure or
+non-Azure machine.
+
+## Install PowerShell 7 and required PowerShell modules
+
+First, follow the steps in [How to set up a machine configuration authoring environment][03]. Those
+steps help you to install the required version of PowerShell for your OS, the
+**GuestConfiguration** module, and the **PSDesiredStateConfiguration** module.
+
+## Author a configuration
+
+Before you create a configuration package, author and compile a DSC configuration. Example
+configurations are available for Windows and Linux.
+
+> [!IMPORTANT]
+> When compiling configurations for Windows, use **PSDesiredStateConfiguration** version 2.0.5 (the
+> stable release). When compiling configurations for Linux, install the prerelease version 3.0.0.
+
+An example is provided in the DSC [Getting started document][04] for Windows.
+
+For Linux, you need to create a custom DSC resource module using [PowerShell classes][05]. The
+article [Writing a custom DSC resource with PowerShell classes][05] includes a full example of a
+custom resource and configuration tested with machine configuration.
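+
+As a minimal sketch (assuming the **PSDscResources** module is installed), a Windows configuration
+might look like the following. Running the script compiles the MOF used in the next step, and the
+node name determines the MOF file name.
+
+```powershell
+# A minimal sketch of a DSC configuration for Windows, assuming the PSDscResources module.
+# The node name sets the compiled MOF file name (./Config/MyConfig.mof here), and the
+# configuration name should match the package name you create later.
+Configuration MyConfig {
+    Import-DscResource -ModuleName 'PSDscResources'
+
+    Node MyConfig {
+        Environment MachineConfigExample {
+            Name   = 'MC_ENVIRONMENT_EXAMPLE'
+            Value  = 'Test'
+            Ensure = 'Present'
+        }
+    }
+}
+
+MyConfig -OutputPath ./Config
+```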
+
+## Create a configuration package artifact
+
+Once the MOF is compiled, the supporting files must be packaged together. The completed package is
+used by machine configuration to create the Azure Policy definitions.
+
+The `New-GuestConfigurationPackage` cmdlet creates the package. Modules required by the
+configuration must be available in `$Env:PSModulePath` for the development environment so the
+commands in the module can add them to the package.
+
+Parameters of the `New-GuestConfigurationPackage` cmdlet when creating Windows content:
+
+- **Name**: machine configuration package name.
+- **Configuration**: Compiled DSC configuration document full path.
+- **Path**: Output folder path. This parameter is optional. If not specified, the package is
+  created in the current directory.
+- **Type**: (`Audit`, `AuditandSet`) Determines whether the configuration should only audit or if
+ the configuration should be applied and change the state of the machine. The default is `Audit`.
+
+This step doesn't require elevation. The **Force** parameter overwrites existing packages if you
+run the command more than once.
+
+The following commands create a package artifact:
+
+```powershell
+# Create a package that will only audit compliance
+$params = @{
+ Name = 'MyConfig'
+ Configuration = './Config/MyConfig.mof'
+ Type = 'Audit'
+ Force = $true
+}
+New-GuestConfigurationPackage @params
+```
+
+```powershell
+# Create a package that will audit and apply the configuration (Set)
+$params = @{
+ Name = 'MyConfig'
+ Configuration = './Config/MyConfig.mof'
+ Type = 'AuditAndSet'
+ Force = $true
+}
+New-GuestConfigurationPackage @params
+```
+
+An object is returned with the Name and Path of the created package.
+
+```Output
+Name     Path
+----     ----
+MyConfig /Users/.../MyConfig/MyConfig.zip
+```
+
+### Expected contents of a machine configuration artifact
+
+The completed package is used by machine configuration to create the Azure Policy definitions. The
+package consists of:
+
+- The compiled DSC configuration as a MOF
+- Modules folder
+ - **GuestConfiguration** module
+ - **DscNativeResources** module
+ - DSC resource modules required by the MOF
+- A metaconfig file that stores the package `type` and `version`
+
+The PowerShell cmdlet creates the package `.zip` file. No root level folder or version folder is
+required. The package format must be a `.zip` file and can't exceed a total size of 100 MB when
+uncompressed.
+
+## Extending machine configuration with third-party tools
+
+The artifact packages for machine configuration can be extended to include third-party tools.
+Extending machine configuration requires development of two components.
+
+- A Desired State Configuration resource that handles all activity related to managing the
+ third-party tool
+ - Install
+ - Invoke
+ - Convert output
+- Content in the correct format for the tool to natively consume
+
+The DSC resource requires custom development if a community solution doesn't already exist.
+Community solutions can be discovered by searching the PowerShell Gallery for tag
+[GuestConfiguration][06].
+
+> [!NOTE]
+> Machine configuration extensibility is a "bring your own license" scenario. Ensure you have met
+> the terms and conditions of any third-party tools before use.
+
+After the DSC resource has been installed in the development environment, use the
+**FilesToInclude** parameter for `New-GuestConfigurationPackage` to include content for the
+third-party platform in the content artifact.
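+
+A minimal sketch of that call follows; `./ThirdPartyContent` is a hypothetical folder containing
+the tool's native content.
+
+```powershell
+# A minimal sketch: include third-party content alongside the compiled MOF.
+# './ThirdPartyContent' is a hypothetical folder; replace it with your own content path.
+$params = @{
+    Name           = 'MyConfig'
+    Configuration  = './Config/MyConfig.mof'
+    Type           = 'AuditAndSet'
+    FilesToInclude = './ThirdPartyContent'
+    Force          = $true
+}
+New-GuestConfigurationPackage @params
+```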
+
+## Next steps
+
+- [Test the package artifact][07] from your development environment.
+- [Publish the package artifact][08] so it's accessible to your machines.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][09] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][10] using Azure portal.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: /powershell/dsc/overview
+[03]: ./how-to-set-up-authoring-environment.md
+[04]: /powershell/dsc/getting-started/wingettingstarted#define-a-configuration-and-generate-the-configuration-document
+[05]: /powershell/dsc/resources/authoringResourceClass
+[06]: https://www.powershellgallery.com/packages?q=Tags%3A%22GuestConfiguration%22
+[07]: ./how-to-test-package.md
+[08]: ./how-to-publish-package.md
+[09]: ./how-to-create-policy-definition.md
+[10]: ../policy/assign-policy-portal.md
governance How To Create Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-policy-definition.md
+
+ Title: How to create custom machine configuration policy definitions
+description: Learn how to create a machine configuration policy.
Last updated : 04/18/2023++
+# How to create custom machine configuration policy definitions
++
+Before you begin, it's a good idea to read the overview page for [machine configuration][01], and
+the details about machine configuration's [remediation options][02].
+
+> [!IMPORTANT]
+> The machine configuration extension is required for Azure virtual machines. To deploy the
+> extension at scale across all machines, assign the following policy initiative:
+> `Deploy prerequisites to enable machine configuration policies on virtual machines`
+>
+> To use machine configuration packages that apply configurations, Azure VM guest configuration
+> extension version 1.29.24 or later, or Arc agent 1.10.0 or later, is required.
+>
+> Custom machine configuration policy definitions using either `AuditIfNotExists` or
+> `DeployIfNotExists` are in Generally Available (GA) support status.
+
+Use the following steps to create your own policies that audit compliance or manage the state of
+Azure or Arc-enabled machines.
+
+## Install PowerShell 7 and required PowerShell modules
+
+First, [set up a machine configuration authoring environment][03] to install the required version
+of PowerShell for your OS and the **GuestConfiguration** module.
+
+## Create and publish a machine configuration package artifact
+
+If you haven't already, create and publish a custom machine configuration package by following the
+steps in [How to create custom machine configuration package artifacts][04]. Then validate the
+package in your development environment by following the steps in
+[How to test machine configuration package artifacts][05].
+
+## Policy requirements for machine configuration
+
+The policy definition **metadata** section must include two properties for the machine
+configuration service to automate provisioning and reporting of guest configuration assignments.
+The **category** property must be set to `Guest Configuration` and a section named
+**guestConfiguration** must contain information about the machine configuration assignment. The
+`New-GuestConfigurationPolicy` cmdlet creates this text automatically.
+
+The following example demonstrates the **metadata** section that's automatically created by
+`New-GuestConfigurationPolicy`.
+
+```json
+"metadata": {
+ "category": "Guest Configuration",
+ "guestConfiguration": {
+ "name": "test",
+ "version": "1.0.0",
+ "contentType": "Custom",
+ "contentUri": "CUSTOM-URI-HERE",
+ "contentHash": "CUSTOM-HASH-VALUE-HERE",
+ "configurationParameter": {}
+ }
+}
+```
+
+If the definition effect is set to `DeployIfNotExists`, the **then** section must contain
+deployment details about a machine configuration assignment. The `New-GuestConfigurationPolicy`
+cmdlet creates this text automatically.
+
+### Create an Azure Policy definition
+
+Once a machine configuration custom policy package has been created and uploaded, create the
+machine configuration policy definition. The `New-GuestConfigurationPolicy` cmdlet takes a custom
+policy package and creates a policy definition.
+
+The **PolicyId** parameter of `New-GuestConfigurationPolicy` requires a unique string. A globally
+unique identifier (GUID) is required. For new definitions, generate a new GUID using the `New-Guid`
+cmdlet. When making updates to the definition, use the same unique string for **PolicyId** to
+ensure the correct definition is updated.
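+
+For example:
+
+```azurepowershell-interactive
+# Generate a GUID once for a new definition and reuse the same value when updating it later.
+$PolicyId = (New-Guid).ToString()
+```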
+
+Parameters of the `New-GuestConfigurationPolicy` cmdlet:
+
+- **PolicyId**: A GUID.
+- **ContentUri**: Public HTTP(s) URI of machine configuration content package.
+- **DisplayName**: Policy display name.
+- **Description**: Policy description.
+- **Parameter**: Policy parameters provided in a hash table.
+- **PolicyVersion**: Policy version.
+- **Path**: Destination path where policy definitions are created.
+- **Platform**: Target platform (Windows/Linux) for machine configuration policy and content
+ package.
+- **Mode**: (`ApplyAndMonitor`, `ApplyAndAutoCorrect`, `Audit`) Choose whether the policy should
+  audit or deploy the configuration. The default is `Audit`.
+- **Tag**: Adds one or more tag filters to the policy definition.
+- **Category**: Sets the category metadata field in the policy definition.
+
+For more information about the **Mode** parameter, see the page
+[How to configure remediation options for machine configuration][02].
+
+Create a policy definition that audits using a custom configuration package, in a specified path:
+
+```powershell
+$PolicyConfig = @{
+    PolicyId      = 'My GUID'
+    ContentUri    = $contenturi
+    DisplayName   = 'My audit policy'
+    Description   = 'My audit policy'
+    Path          = './policies/auditIfNotExists.json'
+    Platform      = 'Windows'
+    PolicyVersion = '1.0.0'
+}
+
+New-GuestConfigurationPolicy @PolicyConfig
+```
+
+Create a policy definition that deploys a configuration using a custom configuration package, in a
+specified path:
+
+```powershell
+$PolicyConfig2 = @{
+    PolicyId      = 'My GUID'
+    ContentUri    = $contenturi
+    DisplayName   = 'My deployment policy'
+    Description   = 'My deployment policy'
+    Path          = './policies/deployIfNotExists.json'
+    Platform      = 'Windows'
+    PolicyVersion = '1.0.0'
+    Mode          = 'ApplyAndAutoCorrect'
+}
+
+New-GuestConfigurationPolicy @PolicyConfig2
+```
+
+The cmdlet output returns an object containing the definition display name and path of the policy
+files. Definition JSON files that create audit policy definitions have the name
+`auditIfNotExists.json` and files that create policy definitions to apply configurations have the
+name `deployIfNotExists.json`.
+
+#### Filtering machine configuration policies using tags
+
+The policy definitions created by cmdlets in the **GuestConfiguration** module can optionally
+include a filter for tags. The **Tag** parameter of `New-GuestConfigurationPolicy` supports an
+array of hash tables containing individual tag entries. The tags are added to the **if** section of
+the policy definition and can't be modified by a policy assignment.
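+
+A minimal sketch of passing tag filters, assuming the hash table shape described above (the tag
+names and values are placeholders):
+
+```azurepowershell-interactive
+# A minimal sketch, assuming single-entry hash tables for each tag filter.
+$tagFilters = @(
+    @{ Owner = 'BusinessUnit' }
+    @{ Role  = 'Web' }
+)
+New-GuestConfigurationPolicy @PolicyConfig -Tag $tagFilters
+```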
+
+An example snippet of a policy definition that filters for tags follows.
+
+```json
+"if": {
+ "allOf" : [
+ {
+ "allOf": [
+ {
+ "field": "tags.Owner",
+ "equals": "BusinessUnit"
+ },
+ {
+ "field": "tags.Role",
+ "equals": "Web"
+ }
+ ]
+ },
+ {
+ // Original machine configuration content
+ }
+ ]
+}
+```
+
+#### Using parameters in custom machine configuration policy definitions
+
+Machine configuration supports overriding properties of a DSC Configuration at run time. This
+feature means that the values in the MOF file in the package don't have to be considered static.
+The override values are provided through Azure Policy and don't change how the DSC Configurations
+are authored or compiled.
+
+The cmdlets `New-GuestConfigurationPolicy` and `Get-GuestConfigurationPackageComplianceStatus`
+include a parameter named **Parameter**. This parameter takes a hash table definition including all
+details about each parameter and creates the required sections of each file used for the Azure
+Policy definition.
+
+The following example creates a policy definition to audit a service, where the user selects from a
+list at the time of policy assignment.
+
+```powershell
+# This DSC resource definition...
+Service 'UserSelectedNameExample' {
+ Name = 'ParameterValue'
+ Ensure = 'Present'
+ State = 'Running'
+}
+
+# ...can be converted to a hash table:
+$PolicyParameterInfo = @(
+ @{
+ # Policy parameter name (mandatory)
+ Name = 'ServiceName'
+ # Policy parameter display name (mandatory)
+ DisplayName = 'windows service name.'
+ # Policy parameter description (optional)
+ Description = 'Name of the windows service to be audited.'
+ # DSC configuration resource type (mandatory)
+ ResourceType = 'Service'
+ # DSC configuration resource id (mandatory)
+ ResourceId = 'UserSelectedNameExample'
+ # DSC configuration resource property name (mandatory)
+ ResourcePropertyName = 'Name'
+ # Policy parameter default value (optional)
+ DefaultValue = 'winrm'
+ # Policy parameter allowed values (optional)
+ AllowedValues = @('BDESVC','TermService','wuauserv','winrm')
+ })
+
+# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet
+$PolicyParam = @{
+    PolicyId      = 'My GUID'
+    ContentUri    = $contenturi
+    DisplayName   = 'Audit Windows Service.'
+    Description   = "Audit if a Windows Service isn't enabled on Windows machine."
+    Path          = '.\policies\auditIfNotExists.json'
+    Parameter     = $PolicyParameterInfo
+    PolicyVersion = '1.0.0'
+}
+
+New-GuestConfigurationPolicy @PolicyParam
+```
+
+### Publish the Azure Policy definition
+
+Finally, you can publish the policy definitions using the `New-AzPolicyDefinition` cmdlet. The
+following commands publish your machine configuration policy definition to Azure Policy.
+
+To run the `New-AzPolicyDefinition` command, you need access to create policy definitions in Azure.
+The specific authorization requirements are documented in the [Azure Policy Overview][06] page. The
+recommended built-in role is `Resource Policy Contributor`.
+
+```azurepowershell-interactive
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\auditIfNotExists.json'
+```
+
+Or, if the policy is a DeployIfNotExists (DINE) policy, use this command:
+
+```azurepowershell-interactive
+New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\deployIfNotExists.json'
+```
+
+With the policy definition created in Azure, the last step is to assign the definition. See how to
+assign the definition with [Portal][07], [Azure CLI][08], and [Azure PowerShell][09].
+
+## Policy lifecycle
+
+If you would like to release an update to the policy definition, make the change for both the guest
+configuration package and the Azure Policy definition details.
+
+> [!NOTE]
+> The `version` property of the machine configuration assignment only affects packages that are
+> hosted by Microsoft. The best practice for versioning custom content is to include the version in
+> the file name.
+
+First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
+unique from earlier versions. You can include a version number in the name such as
+`PackageName_1.0.0`. The number in this example is only used to make the package unique, not to
+specify that the package should be considered newer or older than other packages.
+
+Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet as described in
+the following list.
+
+- **PolicyVersion**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a
+  version number greater than what's currently published.
+- **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI to
+ the location of the package. Including a package version in the file name ensures the value of
+ this property changes in each release.
+- **contentHash**: The `New-GuestConfigurationPolicy` cmdlet updates this property automatically.
+ It's a hash value of the package created by `New-GuestConfigurationPackage`. The property must be
+ correct for the `.zip` file you publish. If only the **contentUri** property is updated, the
+  extension rejects the content package.
+
+The easiest way to release an updated package is to repeat the process described in this article
+and specify an updated version number. That process guarantees all properties have been correctly
+updated.
+
+## Next steps
+
+- [Assign your custom policy definition][07] using Azure portal.
+- Learn how to view [compliance details for machine configuration][10] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: ./remediation-options.md
+[03]: ./how-to-set-up-authoring-environment.md
+[04]: ./how-to-create-package.md
+[05]: ./how-to-test-package.md
+[06]: ../policy/overview.md
+[07]: ../policy/assign-policy-portal.md
+[08]: ../policy/assign-policy-azurecli.md
+[09]: ../policy/assign-policy-powershell.md
+[10]: ../policy/how-to/determine-non-compliance.md#compliance-details
governance How To Publish Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-publish-package.md
+
+ Title: How to publish custom machine configuration package artifacts
+description: Learn how to publish a machine configuration package file to Azure blob storage and get a SAS token for secure access.
Last updated : 04/18/2023+++
+# How to publish custom machine configuration package artifacts
++
+Before you begin, it's a good idea to read the overview page for [machine configuration][01].
+
+Machine configuration custom `.zip` packages must be stored in a location that's accessible via
+HTTPS by the managed machines. Examples include GitHub repositories, an Azure Repo, Azure storage,
+or a web server within your private datacenter.
+
+Configuration packages that support `Audit` and `AuditandSet` are published the same way. There
+isn't a need to do anything special during publishing based on the package mode.
+
+## Publish a configuration package
+
+The preferred location to store a configuration package is Azure Blob Storage. There are no special
+requirements for the storage account, but it's a good idea to host the file in a region near your
+machines. If you prefer to not make the package public, you can include a [SAS token][02] in the
+URL or implement a [service endpoint][03] for machines in a private network.
+
+If you don't have a storage account, use the following example to create one.
+
+```azurepowershell-interactive
+# Creates a new resource group, storage account, and container
+$ResourceGroup = '<resource-group-name>'
+$Location = '<location-id>'
+New-AzResourceGroup -Name $ResourceGroup -Location $Location
+
+$newAccountParams = @{
+ ResourceGroupname = $ResourceGroup
+ Location = $Location
+ Name = '<storage-account-name>'
+ SkuName = 'Standard_LRS'
+}
+New-AzStorageAccount @newAccountParams |
+ New-AzStorageContainer -Name guestconfiguration -Permission Blob
+```
+
+To publish your configuration package to Azure blob storage, you can follow these steps, which use
+the **Az.Storage** module.
+
+First, obtain the context of the storage account you want to store the package in. This example
+creates a context by specifying a connection string and saves the context in the variable
+`$Context`.
+
+```azurepowershell-interactive
+$connectionString = @(
+ 'DefaultEndPointsProtocol=https'
+ 'AccountName=ContosoGeneral'
+ 'AccountKey=<storage-key-for-ContosoGeneral>' # ends with '=='
+) -join ';'
+$Context = New-AzStorageContext -ConnectionString $connectionString
+```
+
+Next, add the configuration package to the storage account. This example uploads the zip file
+`./MyConfig.zip` to the `guestconfiguration` container created earlier.
+
+```azurepowershell-interactive
+$setParams = @{
+    Container = 'guestconfiguration'
+ File = './MyConfig.zip'
+ Context = $Context
+}
+Set-AzStorageBlobContent @setParams
+```
+
+Optionally, you can add a SAS token in the URL to ensure the content package is accessed securely.
+The following example generates a blob SAS token with read access and returns the full blob URI,
+including the shared access signature token. In this example, the token has a time limit of three years.
+
+```azurepowershell-interactive
+$StartTime = Get-Date
+$EndTime = $startTime.AddYears(3)
+
+$tokenParams = @{
+ StartTime = $StartTime
+ EndTime = $EndTime
+    Container = 'guestconfiguration'
+ Blob = 'MyConfig.zip'
+ Permission = 'r'
+ Context = $Context
+ FullUri = $true
+}
+$contenturi = New-AzStorageBlobSASToken @tokenParams
+```
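As an optional, illustrative sanity check (not a required step), you can confirm that the URI returned in `$contenturi` is reachable over HTTPS before referencing it anywhere else.

```azurepowershell-interactive
# Optional check: a 200 status code means the package URI (including the SAS token) resolves
(Invoke-WebRequest -Uri $contenturi -Method Head).StatusCode
```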
+
+## Next steps
+
+- [Test the package artifact][04] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][05] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][06] using Azure portal.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: ../../storage/common/storage-sas-overview.md
+[03]: ../../storage/common/storage-network-security.md#grant-access-from-a-virtual-network
+[04]: ./how-to-test-package.md
+[05]: ./how-to-create-policy-definition.md
+[06]: ../policy/assign-policy-portal.md
governance How To Set Up Authoring Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-set-up-authoring-environment.md
+
+ Title: How to install the machine configuration authoring module
+description: Learn how to install the PowerShell module for creating and testing machine configuration policy definitions and assignments.
Last updated : 04/18/2023++
+# How to set up a machine configuration authoring environment
++
+The PowerShell module **GuestConfiguration** automates the process of creating custom content
+including:
+
+- Creating a machine configuration content artifact (`.zip`)
+- Validating the package meets requirements
+- Installing the machine configuration agent locally for testing
+- Validating the package can be used to audit settings in a machine
+- Validating the package can be used to configure settings in a machine
+- Publishing the package to Azure storage
+- Creating a policy definition
+- Publishing the policy
+
+Support for applying configurations through machine configuration was introduced in version 3.4.2
+of the **GuestConfiguration** module.
+
+### Base requirements
+
+Operating systems where the module can be installed:
+
+- Ubuntu 18
+- Windows
+
+The module can be installed on a machine running PowerShell 7.x. Install the versions of PowerShell
+listed in the following table for your operating system.
+
+| OS | PowerShell Version |
+| --- | ------------------ |
+| Windows | [PowerShell 7.1.3][01] |
+| Ubuntu 18 | [PowerShell 7.2.4][02] |
+
+The **GuestConfiguration** module requires the following software:
+
+- Azure PowerShell 5.9.0 or higher. The required Az PowerShell modules are installed automatically
+ with the **GuestConfiguration** module, or you can follow [these instructions][03].
++
+### Install the module from the PowerShell Gallery
+
+To install the **GuestConfiguration** module on either Windows or Linux, run the following command
+in PowerShell 7.
+
+```powershell
+# Install the machine configuration DSC resource module from PowerShell Gallery
+Install-Module -Name GuestConfiguration
+```
+
+Validate that the module has been imported:
+
+```powershell
+# Get a list of commands for the imported GuestConfiguration module
+Get-Command -Module 'GuestConfiguration'
+```
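Optionally, you can also check which version of the module is installed. This is only an illustrative check against the 3.4.2 minimum mentioned earlier for applying configurations.

```powershell
# Show the installed GuestConfiguration module version
Get-InstalledModule -Name 'GuestConfiguration' | Select-Object Name, Version
```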
+
+## Next steps
+
+- [Create a package artifact][04] for machine configuration.
+- [Test the package artifact][05] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][06] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][07] using Azure portal.
+
+<!-- Reference link definitions -->
+[01]: https://github.com/PowerShell/PowerShell/releases/tag/v7.1.3
+[02]: https://github.com/PowerShell/PowerShell/releases/tag/v7.2.4
+[03]: /powershell/azure/install-az-ps
+[04]: ./how-to-create-package.md
+[05]: ./how-to-test-package.md
+[06]: ./how-to-create-policy-definition.md
+[07]: ../policy/assign-policy-portal.md
governance How To Sign Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-sign-package.md
+
+ Title: How to sign machine configuration packages
+description: You can optionally sign machine configuration content packages and force the agent to only allow signed content
Last updated : 04/18/2023+++
+# How to sign machine configuration packages
++
+Machine configuration custom policies use a SHA256 hash to validate that the policy package hasn't
+changed. Optionally, customers may also use a certificate to sign packages and force the machine
+configuration extension to only allow signed content.
+
+To enable this scenario, you need to complete two steps: run the cmdlet to sign the
+content package, and append a tag to the machines that should require signed content.
+
+## Signature validation using a code signing certificate
+
+To use the Signature Validation feature, run the `Protect-GuestConfigurationPackage` cmdlet to sign
+the package before it's published. This cmdlet requires a 'Code Signing' certificate. If you don't
+have a 'Code Signing' certificate, use the following script to create a self-signed certificate for
+testing purposes to follow along with the example.
+
+## Windows signature validation
+
+```azurepowershell-interactive
+# How to create a self sign cert and use it to sign Machine Configuration
+# custom policy package
+
+# Create Code signing cert
+$codeSigningParams = @{
+ Type = 'CodeSigningCert'
+ DnsName = 'GCEncryptionCertificate'
+ HashAlgorithm = 'SHA256'
+}
+$mycert = New-SelfSignedCertificate @codeSigningParams
+
+# Export the certificates
+$mypwd = ConvertTo-SecureString -String "Password1234" -Force -AsPlainText
+$mycert | Export-PfxCertificate -FilePath C:\demo\GCPrivateKey.pfx -Password $mypwd
+$mycert | Export-Certificate -FilePath "C:\demo\GCPublicKey.cer" -Force
+
+# Import the certificate
+$importParams = @{
+ FilePath = 'C:\demo\GCPrivateKey.pfx'
+ Password = $mypwd
+ CertStoreLocation = 'Cert:\LocalMachine\My'
+}
+Import-PfxCertificate @importParams
+
+# Sign the policy package
+$certToSignThePackage = Get-ChildItem -Path cert:\LocalMachine\My |
+    Where-Object { $_.Subject -eq "CN=GCEncryptionCertificate" }
+$protectParams = @{
+ Path = 'C:\demo\AuditWindowsService.zip'
+ Certificate = $certToSignThePackage
+ Verbose = $true
+}
+Protect-GuestConfigurationPackage @protectParams
+```
+
+## Linux signature validation
+
+```azurepowershell-interactive
+# generate gpg key
+gpg --gen-key
+
+# export public key
+gpg --output public.gpg --export <email-id-used-to-generate-gpg-key>
+
+# export private key
+gpg --output private.gpg --export-secret-key <email-id-used-to-generate-gpg-key>
+
+# Sign linux policy package
+Import-Module GuestConfiguration
+$protectParams = @{
+ Path = './not_installed_application_linux.zip'
+ PrivateGpgKeyPath = './private.gpg'
+ PublicGpgKeyPath = './public.gpg'
+ Verbose = $true
+}
+Protect-GuestConfigurationPackage @protectParams
+```
+
+Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
+
+- **Path**: Full path of the machine configuration package.
+- **Certificate**: Code signing certificate to sign the package. This parameter is only supported
+ when signing content for Windows.
+
+## Certificate requirements
+
+The machine configuration agent expects the certificate public key to be present in "Trusted Root
+Certificate Authorities" on Windows machines and in the path `/usr/local/share/ca-certificates/gc`
+on Linux machines. For the node to verify signed content, install the certificate public key on the
+machine before applying the custom policy. This process can be done using any technique inside the
+VM or by using Azure Policy. An example template is available
+[to deploy a machine with a certificate][01]. The Key Vault access policy must allow the Compute
+resource provider to access certificates during deployments. For detailed steps, see
+[Set up Key Vault for virtual machines in Azure Resource Manager][02].
+
+The following example exports the public key from a signing certificate so that you can import it
+to the machine.
+
+```azurepowershell-interactive
+$Cert = Get-ChildItem -Path cert:\LocalMachine\My |
+    Where-Object { $_.Subject -eq "CN=mycert3" } |
+ Select-Object -First 1
+$Cert | Export-Certificate -FilePath "$env:temp\DscPublicKey.cer" -Force
+```
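On the target Windows machine, the exported public key can then be placed in the Trusted Root store. The following line is an illustrative sketch; the file path is an example and assumes the `.cer` file has already been copied to the machine.

```azurepowershell-interactive
# Import the exported public key into Trusted Root Certification Authorities (example path)
Import-Certificate -FilePath "$env:temp\DscPublicKey.cer" -CertStoreLocation 'Cert:\LocalMachine\Root'
```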
+
+## Tag requirements
+
+After your content is published, append a tag with name `GuestConfigPolicyCertificateValidation`
+and value `enabled` to all virtual machines where code signing should be required. See the
+[Tag samples][03] for how tags can be delivered at scale using Azure Policy. Once this tag is in
+place, the policy definition generated using the `New-GuestConfigurationPolicy` cmdlet enables the
+requirement through the machine configuration extension.
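For a single machine, a minimal sketch of adding the tag with Azure PowerShell might look like the following; the resource group and VM names are placeholders, and Azure Policy remains the better option at scale.

```azurepowershell-interactive
# Merge the tag onto one virtual machine (names are placeholders)
$vm = Get-AzVM -ResourceGroupName '<resource-group-name>' -Name '<vm-name>'
Update-AzTag -ResourceId $vm.Id -Tag @{ GuestConfigPolicyCertificateValidation = 'enabled' } -Operation Merge
```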
+
+## Next steps
+
+- [Test the package artifact][04] from your development environment.
+- [Publish the package artifact][05] so it's accessible to your machines.
+- Use the `GuestConfiguration` module to [create an Azure Policy definition][06] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][07] using Azure portal.
+- Learn how to view [compliance details for machine configuration][08] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-push-certificate-windows
+[02]: ../../virtual-machines/windows/key-vault-setup.md#use-templates-to-set-up-key-vault
+[03]: ../policy/samples/built-in-policies.md#tags
+[04]: ./how-to-test-package.md
+[05]: ./how-to-publish-package.md
+[06]: ./how-to-create-policy-definition.md
+[07]: ../policy/assign-policy-portal.md
+[08]: ../policy/how-to/determine-non-compliance.md
governance How To Test Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-test-package.md
+
+ Title: How to test machine configuration package artifacts
+description: The experience creating and testing packages that audit or apply configurations to machines.
Last updated : 04/18/2023++
+# How to test machine configuration package artifacts
++
+The PowerShell module **GuestConfiguration** includes tools to automate testing a configuration
+package outside of Azure. Use these tools to find issues and iterate quickly before moving on to
+test in an Azure or Arc connected environment.
+
+Before you can begin testing, you need to [set up your authoring environment][01] and
+[create a custom machine configuration package artifact][02].
+
+> [!IMPORTANT]
+> Custom packages that audit the state of an environment and apply configurations are in Generally
+> Available (GA) support status. However, the following limitations apply:
+>
+> To use machine configuration packages that apply configurations, Azure VM guest configuration
+> extension version 1.29.24 or later, or Arc agent 1.10.0 or later, is required.
+>
+> The **GuestConfiguration** module is only available on Ubuntu 18. However, the package and
+> policies produced by the module can be used on any Linux distro/version supported in Azure or
+> Arc.
+>
+> Testing packages on macOS isn't available.
+
+You can test the package from your workstation or continuous integration and continuous deployment
+(CI/CD) environment. The **GuestConfiguration** module includes the same agent for your development
+environment as is used inside Azure or Arc enabled machines. The agent includes a stand-alone
+instance of PowerShell 7.1.3 for Windows and 7.2.0-preview.7 for Linux. The stand-alone instance
+ensures the script environment where the package is tested is consistent with machines you manage
+using machine configuration.
+
+The agent service in Azure and Arc-enabled machines is running as the `LocalSystem` account in
+Windows and Root in Linux. Run the commands in this article in a privileged security context for
+best results.
+
+To run PowerShell as `LocalSystem` in Windows, use the SysInternals tool [PSExec][03].
+
+To run PowerShell as Root in Linux, use the [sudo command][04].
+
+## Validate the configuration package meets requirements
+
+First test that the configuration package meets basic requirements using
+`Get-GuestConfigurationPackageComplianceStatus`. The command verifies the following package
+requirements.
+
+- The MOF file is present and valid, at the right location
+- Required modules and dependencies are present with the right version, without duplicates
+- The package signature is valid (optional)
+- `Test` and `Get` return information about the compliance status
+
+Parameters of the `Get-GuestConfigurationPackageComplianceStatus` cmdlet:
+
+- **Path**: File path or URI of the machine configuration package.
+- **Parameter**: Policy parameters provided as a hash table.
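For example, a parameter override might look like the following sketch. The resource type, ID, and property names are assumptions about a hypothetical package, not values from this article.

```powershell
# Override a configuration parameter when checking compliance (illustrative values)
$parameters = @(
    @{
        ResourceType          = 'Service'
        ResourceId            = 'windowsService'
        ResourcePropertyName  = 'Name'
        ResourcePropertyValue = 'winrm'
    }
)
Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip -Parameter $parameters
```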
+
+When this command is run for the first time, the machine configuration agent gets installed on the
+test machine at the path `C:\ProgramData\GuestConfig\bin` on Windows and `/var/lib/GuestConfig/bin`
+on Linux. This path isn't accessible to a user account so the command requires elevation.
+
+Run the following command to test the package:
+
+In Windows, from an elevated PowerShell 7 session.
+
+```powershell
+# Get the current compliance results for the local machine
+Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip
+```
+
+In Linux, by running PowerShell using sudo.
+
+```bash
+# Get the current compliance results for the local machine
+sudo pwsh -command 'Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip'
+```
+
+The command outputs an object containing the compliance status and details per resource.
+
+```Output
+complianceStatus resources
+---------------- ---------
+            True @{BuiltInAccount=localSystem; ConfigurationName=MyConfig; …
+```
+
+## Test the configuration package can apply a configuration
+
+Finally, if the configuration package mode is `AuditandSet`, you can test that the `Set` method can
+apply settings to a local machine using the command `Start-GuestConfigurationPackageRemediation`.
+
+> [!IMPORTANT]
+> This command attempts to make changes in the local environment where it's run.
+
+Parameters of the `Start-GuestConfigurationPackageRemediation` cmdlet:
+
+- **Path**: Full path of the machine configuration package.
+
+In Windows, from an elevated PowerShell 7 session.
+
+```powershell
+# Test applying the configuration to local machine
+Start-GuestConfigurationPackageRemediation -Path ./MyConfig.zip
+```
+
+In Linux, by running PowerShell using sudo.
+
+```bash
+# Test applying the configuration to local machine
+sudo pwsh -command 'Start-GuestConfigurationPackageRemediation -Path ./MyConfig.zip'
+```
+
+The command only returns output when errors occur. To troubleshoot details about events occurring
+during `Set`, use the `-verbose` parameter.
+
+After running the command `Start-GuestConfigurationPackageRemediation`, you can run
+`Get-GuestConfigurationPackageComplianceStatus` again to confirm the machine is now in the correct state.
+
+## Next steps
+
+- [Publish the package artifact][05] so it's accessible to your machines.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][06] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][07] using Azure portal.
+- Learn how to view [compliance details for machine configuration][08] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ./how-to-set-up-authoring-environment.md
+[02]: ./how-to-create-package.md
+[03]: /sysinternals/downloads/psexec
+[04]: https://www.sudo.ws/docs/man/sudo.man/
+[05]: ./how-to-publish-package.md
+[06]: ./how-to-create-policy-definition.md
+[07]: ../policy/assign-policy-portal.md
+[08]: ../policy/how-to/determine-non-compliance.md
governance Machine Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-assignments.md
- Title: Understand machine configuration assignment resources
-description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines.
Previously updated : 01/12/2023--
-# Understand machine configuration assignment resources
--
-When an Azure Policy is assigned, if it's in the category "Guest Configuration"
-there's metadata included to describe a guest assignment.
-
-[A video walk-through of this document is available](https://youtu.be/DmCphySEB7A).
-
-You can think of a guest assignment as a link between a machine and an Azure
-Policy scenario. For example, the following snippet associates the Azure Windows
-Baseline configuration with minimum version `1.0.0` to any machines in scope of
-the policy.
-
-```json
-"metadata": {
- "category": "Guest Configuration",
- "guestConfiguration": {
- "name": "AzureWindowsBaseline",
- "version": "1.*"
- }
-//additional metadata properties exist
-```
-
-## How Azure Policy uses machine configuration assignments
-
-The metadata information is used by the machine configuration service to
-automatically create an audit resource for definitions with either
-**AuditIfNotExists** or **DeployIfNotExists** policy effects. The resource type
-is `Microsoft.GuestConfiguration/guestConfigurationAssignments`. Azure Policy
-uses the **complianceStatus** property of the guest assignment resource to
-report compliance status. For more information, see
-[getting compliance data](../policy/how-to/get-compliance-data.md).
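As an illustrative sketch, you can inspect these guest assignment resources for one machine with `Get-AzResource`; the resource group and VM names below are placeholders.

```PowerShell
# List the guest assignments on one VM; compliance is in Properties.complianceStatus
$assignmentQuery = @{
    ResourceGroupName = '<myResourceGroupName>'
    ResourceType      = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments'
    ResourceName      = '<myVMName>/Microsoft.GuestConfiguration'
    ApiVersion        = '2020-06-25'
}
Get-AzResource @assignmentQuery
```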
-
-### Deletion of guest assignments from Azure Policy
-
-When an Azure Policy assignment is deleted, if a machine configuration assignment
-was created by the policy, the machine configuration assignment is also deleted.
-
-When an Azure Policy assignment is deleted from an initiative, if a machine configuration
-assignment was created by the policy, you need to delete the corresponding machine configuration
-assignment manually. You can do so by navigating to the guest assignments page in the Azure portal
-and deleting the assignment there.
-
-## Manually creating machine configuration assignments
-
-Guest assignment resources in Azure Resource Manager can be created by Azure
-Policy or any client SDK.
-
-An example deployment template:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "apiVersion": "2021-01-25",
- "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
- "name": "myMachine/Microsoft.GuestConfiguration/myConfig",
- "location": "westus2",
- "properties": {
- "guestConfiguration": {
- "name": "myConfig",
- "contentUri": "https://mystorageaccount.blob.core.windows.net/mystoragecontainer/myConfig.zip?sv=SASTOKEN",
- "contentHash": "SHA256HASH",
- "version": "1.0.0",
- "assignmentType": "ApplyAndMonitor",
- "configurationParameter": {}
- }
- }
- }
- ]
-}
-```
-
-The following table describes each property of guest assignment resources.
-
-| Property | Description |
-|-|-|
-| name | Name of the configuration inside the content package MOF file. |
-| contentUri | HTTPS URI path to the content package (.zip). |
-| contentHash | A SHA256 hash value of the content package, used to verify it has not changed. |
-| version | Version of the content package. Only used for built-in packages and not used for custom content packages. |
-| assignmentType | Behavior of the assignment. Allowed values: `Audit`, `ApplyandMonitor`, and `ApplyandAutoCorrect`. |
-| configurationParameter | List of DSC resource type, name, and value in the content package MOF file to be overridden after it's downloaded in the machine. |
-
-### Deletion of manually created machine configuration assignments
-
-Machine configuration assignments created through any manual approach (such as
-an Azure Resource Manager template deployment) must be deleted manually.
-Deleting the parent resource (virtual machine or Arc-enabled machine) will also
-delete the machine configuration assignment.
-
-To manually delete a machine configuration assignment, use the following
-example. Make sure to replace all example strings, indicated by "\<\>" brackets.
-
-```PowerShell
-# First get details about the machine configuration assignment
-$resourceDetails = @{
- ResourceGroupName = '<myResourceGroupName>'
- ResourceType = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments/'
- ResourceName = '<myVMName>/Microsoft.GuestConfiguration'
- ApiVersion = '2020-06-25'
-}
-$guestAssignment = Get-AzResource @resourceDetails
-
-# Review details of the machine configuration assignment
-$guestAssignment
-
-# After reviewing properties of $guestAssignment to confirm
-$guestAssignment | Remove-AzResource
-```
-
-## Next steps
-
-- Read the [machine configuration overview](./overview.md).
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Machine Configuration Azure Automation Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-azure-automation-migration.md
- Title: Azure Automation State Configuration to machine configuration migration planning
-description: This article provides process and technical guidance for customers interested in moving from DSC version 2 in Azure Automation to version 3 in Azure Policy.
Previously updated : 03/06/2023---
-# Azure Automation state configuration to machine configuration migration planning
--
-Machine configuration is the latest implementation of functionality
-that has been provided by Azure Automation State Configuration (also known as
-Azure Automation Desired State Configuration, or AADSC).
-When possible, you should plan to move your content and machines to the new service.
-This article provides guidance on developing a migration strategy from Azure
-Automation to machine configuration.
-
-New features in machine configuration address top asks from customers:
-
-- Increased size limit for configurations (100 MB)
-- Advanced reporting through Azure Resource Graph, including resource ID and state
-- Manage multiple configurations for the same machine
-- When machines drift from the desired state, you control when remediation occurs
-- Linux and Windows both consume PowerShell-based DSC resources
-
-Before you begin, it's a good idea to read the conceptual overview
-information at the page
-[Azure Policy's machine configuration](./overview.md).
-
-## Understand migration
-
-The best approach to migration is to redeploy content first, and then
-migrate machines. The expected steps for migration are outlined below.
-
-- Export configurations from Azure Automation
-- Discover module requirements and load them in your environment
-- Compile configurations
-- Create and publish machine configuration packages
-- Test machine configuration packages
-- Onboard hybrid machines to Azure Arc
-- Unregister servers from Azure Automation State Configuration
-- Assign configurations to servers using machine configuration
-
-Machine configuration uses DSC version 3 with PowerShell version 7.
-DSC version 3 can coexist with older versions of DSC in
-[Windows](/powershell/dsc/getting-started/wingettingstarted) and
-[Linux](/powershell/dsc/getting-started/lnxgettingstarted).
-The implementations are separate. However, there's no conflict detection.
-
-Machine configuration doesn't require publishing modules or configurations into
-a service, or compiling in a service. Instead, content is developed and tested
-using purpose-built tooling and published anywhere the machine can reach over
-HTTPS (typically Azure Blob Storage).
-
-If you decide the right plan for your migration is to have machines in both
-services for some period of time, while that could be confusing to manage,
-there are no technical barriers. The two services are independent.
-
-## Export content from Azure Automation
-
-Start by discovering and exporting content from Azure Automation State
-Configuration into a development environment where you create, test, and publish
-content packages for machine configuration.
-
-### Configurations
-
-Only configuration scripts can be exported from Azure Automation. It isn't
-possible to export "Node configurations", or compiled MOF files.
-If you published MOF files directly into the Automation Account and no longer
-have access to the original file, you must recompile from your private
-configuration scripts, or possibly re-author the configuration if the original
-can't be found.
-
-To export configuration scripts from Azure Automation, first identify the Azure
-Automation account that contains the configurations and the name of the Resource
-Group where the Automation Account is deployed.
-
-Install the PowerShell module "Az.Automation".
-
-```powershell
-Install-Module -Name Az.Automation
-```
-
-Next, use the `Get-AzAutomationAccount` command to identify your Automation
-Accounts and the Resource Group where they're deployed.
-The properties **ResourceGroupName** and **AutomationAccountName**
-are important for next steps.
-
-```azurepowershell
-Get-AzAutomationAccount
-
-SubscriptionId : <your subscription id>
-ResourceGroupName : <your resource group name>
-AutomationAccountName : <your automation account name>
-Location : centralus
-State :
-Plan :
-CreationTime : 6/30/2021 11:56:17 AM -05:00
-LastModifiedTime : 6/30/2021 11:56:17 AM -05:00
-LastModifiedBy :
-Tags : {}
-```
-
-Discover the configurations in your Automation Account. The output
-contains one entry per configuration. If you have many, store the information
-as a variable so it's easier to work with.
-
-```azurepowershell
-Get-AzAutomationDscConfiguration -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name>
-
-ResourceGroupName : <your resource group name>
-AutomationAccountName : <your automation account name>
-Location : centralus
-State : Published
-Name : <your configuration name>
-Tags : {}
-CreationTime : 6/30/2021 12:18:26 PM -05:00
-LastModifiedTime : 6/30/2021 12:18:26 PM -05:00
-Description :
-Parameters : {}
-LogVerbose : False
-```
-
-Finally, export each configuration to a local script file using the command
-`Export-AzAutomationDscConfiguration`. The resulting file name uses the
-pattern `\ConfigurationName.ps1`.
-
-```azurepowershell
-Export-AzAutomationDscConfiguration -OutputFolder /<location on your machine> -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name> -name <your configuration name>
-
-UnixMode User Group LastWriteTime Size Name
- -- - - -
- 12/31/1600 18:09
-```
-
-#### Export configurations using the PowerShell pipeline
-
-After you've discovered your accounts and the number of configurations,
-you might wish to export all configurations to a local folder on your machine.
-To automate this process, pipe the output of each command above to the next.
-
-The example exports 5 configurations. The output pattern is
-the only indication of success.
-
-```azurepowershell
-Get-AzAutomationAccount | Get-AzAutomationDscConfiguration | Export-AzAutomationDSCConfiguration -OutputFolder /<location on your machine>
-
-UnixMode User Group LastWriteTime Size Name
- -- - - -
- 12/31/1600 18:09
- 12/31/1600 18:09
- 12/31/1600 18:09
- 12/31/1600 18:09
- 12/31/1600 18:09
-```
-
-#### Consider decomposing complex configuration files
-
-Machine configuration can manage multiple configurations per machine.
-Many configurations written for Azure Automation State Configuration assumed the
-limitation of managing a single configuration per machine. To take advantage of
-the expanded capabilities offered by machine configuration, large
-configuration files can be divided into many smaller configurations where each
-handles a specific scenario.
-
-There's no orchestration in machine configuration to control the order in which configurations
-are applied, so keep steps together in one configuration package if they're required to happen
-sequentially.
-
-### Modules
-
-It isn't possible to export modules from Azure Automation or automatically
-correlate which configurations require which module/version. You must
-have the modules in your local environment to create a new machine configuration
-package. To create a list of modules you need for migration, use PowerShell to
-query Azure Automation for the name and version of modules.
-
-If you are using modules that are custom authored and only exist in your private
-development environment, it isn't possible to export them from Azure
-Automation.
-
-If a custom module is required for a configuration and is in the account, but you
-can't find it in your environment, you won't be able to compile the
-configuration, which means you won't be able to migrate the configuration.
-
-#### List modules imported in Azure Automation
-
-To retrieve a list of all modules that are installed in your automation account,
-use the `Get-AzAutomationModule` command. The property **IsGlobal** tells you
-whether the module is always built in to Azure Automation, or whether it was published
-to the account.
-
-For example, to create a list of all modules published to any of your accounts.
-
-```azurepowershell
-Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $false
-```
-
-You can also use the PowerShell Gallery as an aid in finding details about
-modules that are publicly available. For example, the list of modules that are
-built in to new Automation Accounts, and that contain DSC resources, is produced
-by the following example.
-
-```azurepowershell
-Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $true | Find-Module -ErrorAction SilentlyContinue | Where-Object {'' -ne $_.Includes.DscResource} | Select-Object -Property Name, Version -Unique | Format-Table -AutoSize
-
-Name Version
-- -
-AuditPolicyDsc 1.4.0
-ComputerManagementDsc 8.4.0
-PSDscResources 2.12.0
-SecurityPolicyDsc 2.10.0
-xDSCDomainjoin 1.2.23
-xPowerShellExecutionPolicy 3.1.0.0
-xRemoteDesktopAdmin 1.1.0.0
-```
-
-#### Download modules from PowerShell Gallery or PowerShellGet repository
-
-If the modules were imported from the PowerShell Gallery, you can pipe the output
-from `Find-Module` directly into `Install-Module`. Piping the output across commands
-provides a solution to load a developer environment with all modules currently in
-an Automation Account that are available publicly in the PowerShell Gallery.
-
-The same approach could be used to pull modules from a custom NuGet feed, if
-the feed is registered in your local environment as a
-[PowerShellGet repository](/powershell/scripting/gallery/how-to/working-with-local-psrepositories).
-
-The `Find-Module` command in the example doesn't suppress errors, meaning
-any modules not found in the gallery return an error message.
-
-```azurepowershell
-Get-AzAutomationAccount | Get-AzAutomationModule | Where-Object IsGlobal -eq $false | Find-Module | Where-Object {'' -ne $_.Includes.DscResource} | Install-Module
-
- Installing package 'xWebAdministration'
-
- [ ]
-```
-
-#### Inspecting configuration scripts for module requirements
-
-If you've exported configuration scripts from Azure Automation, you can also
-review the contents for details about which modules are required to compile each
-configuration to a MOF file. This approach would only be needed if you find
-configurations in your Automation Accounts where the modules have been removed.
-The configurations would no longer be useful for machines, but they might still
-be in the account.
-
-Towards the top of each file, look for a line that includes 'Import-DscResource'.
-This command is only applicable inside a configuration, and is used to load modules
-at the time of compilation.
-
-For example, the "WindowsIISServerConfig" configuration in the PowerShell Gallery
-contains the lines in this example.
-
-```powershell
-configuration WindowsIISServerConfig
-{
-
-Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration';ModuleVersion = '1.19.0.0'}
-Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
-```
-
-The configuration requires you to have the "xWebAdministration" module version
-"1.19.0.0" and the module "PSDesiredStateConfiguration".
-
-### Test content in Azure machine configuration
-
-The best way to evaluate whether your content from Azure Automation State
-Configuration can be used with machine configuration is to follow
-the step-by-step tutorial in the page
-[How to create custom machine configuration package artifacts](./machine-configuration-create.md).
-
-When you reach the step
-[Author a configuration](./machine-configuration-create.md#author-a-configuration),
-the configuration script that generates a MOF file should be one of the scripts
-you exported from Azure Automation State Configuration. You must have the
-required PowerShell modules installed in your environment before you can compile
-the configuration to a MOF file and create a machine configuration package.
-
-#### What if a module does not work with machine configuration?
-
-Some modules might encounter compatibility issues with machine configuration. The
-most common problems are related to .NET Framework vs .NET Core. Detailed
-technical information is available on the page
-[Differences between Windows PowerShell 5.1 and PowerShell (core) 7.x](/powershell/scripting/whats-new/differences-from-windows-powershell).
-
-One option to resolve compatibility issues is to run commands in Windows PowerShell
-from within a module that is imported in PowerShell 7, by running `powershell.exe`.
-You can review a sample module that uses this technique in the Azure-Policy repo
-where it is used to audit the state of
-[Windows DSC Configuration](https://github.com/Azure/azure-policy/blob/bbfc60104c2c5b7fa6dd5b784b5d4713ddd55218/samples/GuestConfiguration/package-samples/resource-modules/WindowsDscConfiguration/DscResources/WindowsDscConfiguration/WindowsDscConfiguration.psm1#L97).
-
-The example also illustrates a small proof of concept.
-
-```powershell
-# example function that could be loaded from module
-function New-TaskResolvedInPWSH7 {
- # runs the fictitious command 'Get-myNotCompatibleCommand' in Windows PowerShell
- $compatObject = & powershell.exe -noprofile -NonInteractive -command { Get-myNotCompatibleCommand }
- # resulting object can be used in PowerShell 7
- return $compatObject
-}
-```
-
-#### Will I have to add "Reasons" property to Get-TargetResource in all modules I migrate?
-
-Implementing the
-["Reasons" property](./machine-configuration-custom.md#special-requirements-for-get)
-provides a better experience when viewing
-the results of a configuration assignment from the Azure Portal. If the `Get`
-method in a module doesn't include "Reasons", generic output is returned
-with details from the properties returned by the `Get` method. Therefore,
-it's optional for migration.
-
-## Machines
-
-After you've finished testing content from Azure Automation State Configuration
-in machine configuration, develop a plan for migrating machines.
-
-Azure Automation State Configuration is available for both virtual machines in
-Azure and hybrid machines located outside of Azure. You must plan for each of
-these scenarios using different steps.
-
-### Azure VMs
-
-Azure virtual machines already have a
-[resource](../../azure-resource-manager/management/overview.md#terminology)
-in Azure, which means they're ready for machine configuration assignments that
-associate them with a configuration. The high-level tasks for migrating Azure
-virtual machines are to remove them from Azure Automation State Configuration
-and then assign configurations using machine configuration.
-
-To remove a machine from Azure Automation State Configuration, follow the steps
-in the page
-[How to remove a configuration and node from Automation State Configuration](../../automation/state-configuration/remove-node-and-configuration-package.md).
-
-To assign configurations using machine configuration, follow the steps in the
-Azure Policy Quickstarts, such as
-[Quickstart: Create a policy assignment to identify non-compliant resources](../policy/assign-policy-portal.md).
-In step 6 when selecting a policy definition, pick the definition that applies
-a configuration you migrated from Azure Automation State Configuration.
-
-### Hybrid machines
-
-Machines outside of Azure
-[can be registered to Azure Automation State Configuration](../../automation/automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines),
-but they don't have a machine resource in Azure. The connection
-to Azure Automation is handled by Local Configuration Manager service inside
-the machine and the record of the node is managed as a resource in the Azure
-Automation provider type.
-
-Before removing a machine from Azure Automation State Configuration,
-onboard each node as an
-[Azure Arc-enabled server](../../azure-arc/servers/overview.md).
-Onboarding to Azure Arc creates a machine resource in Azure so the machine
-can be managed by Azure Policy. The machine can be onboarded to Azure Arc at any
-time but you can use Azure Automation State Configuration to automate the process.
-
-You can register a machine to Azure Arc-enabled servers by using PowerShell DSC.
-For details, view the page
-[How to install the Connected Machine agent using Windows PowerShell DSC](../../azure-arc/servers/onboard-dsc.md).
-Remember, however, that Azure Automation State Configuration can manage only one
-configuration per machine, per Automation Account. This means you have the option
-to export, test, and prepare your content for machine configuration, and then
-"switch" the node configuration in Azure Automation to onboard to Azure Arc. As
-the last step, you remove the node registration from Azure Automation State
-Configuration and, going forward, manage the machine state only through machine
-configuration.
-
-## Troubleshooting issues when exporting content
-
-Details about known issues are provided below.
-
-### Exporting configurations results in "\\" character in file name
-
-When using PowerShell on macOS or Linux, you might encounter issues with the file
-names output by `Export-AzAutomationDSCConfiguration`.
-
-As a workaround, a module has been published to the PowerShell Gallery named
-[AADSCConfigContent](https://www.powershellgallery.com/packages/AADSCConfigContent/).
-The module has only one command, which exports the content
-of a configuration stored in Azure Automation by making a REST request to the
-service.
-
-## Next steps
--- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it is accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Machine Configuration Create Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-assignment.md
- Title: How to create a machine configuration assignment using templates
-description: Learn how to deploy configurations to machines directly from Azure Resource Manager.
Previously updated : 07/25/2022---
-# How to create a machine configuration assignment using templates
--
-The best way to
-[assign machine configuration packages](./machine-configuration-assignments.md)
-to multiple machines is using
-[Azure Policy](./machine-configuration-create-definition.md). You can also
-assign machine configuration packages to a single machine.
-
-## Built-in and custom configurations
-
-To assign a machine configuration package to a single machine, modify the following
-examples. There are two scenarios.
--- Apply a custom configuration to a machine using a link to a package that you
- [published](./machine-configuration-create-publish.md).
-- Apply a [built-in](../policy/samples/built-in-packages.md) configuration to a machine,
- such as an Azure baseline.
-
-## Extending other resource types, such as Arc-enabled servers
-
-In each of the following sections, the example includes a **type** property
-where the name starts with `Microsoft.Compute/virtualMachines`. The guest
-configuration resource provider `Microsoft.GuestConfiguration` is an
-[extension resource](../../azure-resource-manager/management/extension-resource-types.md)
-that must reference a parent type.
-
-To modify the example for other resource types such as
-[Arc-enabled servers](../../azure-arc/servers/overview.md),
-change the parent type to the name of the resource provider.
-For Arc-enabled servers, the resource provider is
-`Microsoft.HybridCompute/machines`.
-
-Replace the following "<>" fields with values specific to your environment:
-
-- **<vm_name>**: Name of the machine resource where the configuration will be applied
-- **<configuration_name>**: Name of the configuration to apply
-- **<vm_location>**: Azure region where the machine configuration assignment will be created
-- **<Url_to_Package.zip>**: For custom content packages, an HTTPS link to the `.zip` file
-- **<SHA256_hash_of_package.zip>**: For custom content packages, a SHA256 hash of the `.zip` file
-
-## Assign a configuration using an Azure Resource Manager template
-
-You can deploy an
-[Azure Resource Manager template](../../azure-resource-manager/templates/deployment-tutorial-local-template.md?tabs=azure-powershell)
-containing machine configuration assignment resources.
-
-The following example assigns a custom configuration.
-
-```json
-{
- "apiVersion": "2020-06-25",
- "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
- "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
- "location": "<vm_location>",
- "dependsOn": [
- "Microsoft.Compute/virtualMachines/<vm_name>"
- ],
- "properties": {
- "guestConfiguration": {
- "name": "<configuration_name>",
- "contentUri": "<Url_to_Package.zip>",
- "contentHash": "<SHA256_hash_of_package.zip>",
- "assignmentType": "ApplyAndMonitor"
- }
- }
- }
-```
-
-The following example assigns the `AzureWindowsBaseline` built-in configuration.
-
-```json
-{
- "apiVersion": "2020-06-25",
- "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
- "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
- "location": "<vm_location>",
- "dependsOn": [
- "Microsoft.Compute/virtualMachines/<vm_name>"
- ],
- "properties": {
- "guestConfiguration": {
- "name": "AzureWindowsBaseline",
- "version": "1.*",
- "assignmentType": "ApplyAndMonitor",
- "configurationParameter": [
- {
- "name": "Minimum Password Length;ExpectedValue",
- "value": "16"
- },
- {
- "name": "Minimum Password Length;RemediateValue",
- "value": "16"
- },
- {
- "name": "Maximum Password Age;ExpectedValue",
- "value": "75"
- },
- {
- "name": "Maximum Password Age;RemediateValue",
- "value": "75"
- }
- ]
- }
- }
- }
-```
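A template such as the examples above can then be deployed to the resource group that contains the target machine. The following line is only a sketch; the file name and resource group name are placeholders.

```azurepowershell
# Deploy the template containing the guest configuration assignment (example names)
New-AzResourceGroupDeployment -ResourceGroupName '<resource-group-name>' -TemplateFile './guest-assignment.json'
```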
-
-## Assign a configuration using Bicep
-
-You can use
-[Azure Bicep](../../azure-resource-manager/bicep/overview.md)
-to deploy machine configuration assignments.
-
-The following example assigns a custom configuration.
-
-```Bicep
-resource myVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
- name: '<vm_name>'
-}
-
-resource myConfiguration 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
- name: '<configuration_name>'
- scope: myVM
- location: resourceGroup().location
- properties: {
- guestConfiguration: {
- name: '<configuration_name>'
- contentUri: '<Url_to_Package.zip>'
- contentHash: '<SHA256_hash_of_package.zip>'
- version: '1.*'
- assignmentType: 'ApplyAndMonitor'
- }
- }
-}
-```
-
-The following example assigns the `AzureWindowsBaseline` built-in configuration.
-
-```Bicep
-resource myWindowsVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
- name: '<vm_name>'
-}
-
-resource AzureWindowsBaseline 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
- name: 'AzureWindowsBaseline'
- scope: myWindowsVM
- location: resourceGroup().location
- properties: {
- guestConfiguration: {
- name: 'AzureWindowsBaseline'
- version: '1.*'
- assignmentType: 'ApplyAndMonitor'
- configurationParameter: [
- {
- name: 'Minimum Password Length;ExpectedValue'
- value: '16'
- }
- {
- name: 'Minimum Password Length;RemediateValue'
- value: '16'
- }
- {
- name: 'Maximum Password Age;ExpectedValue'
- value: '75'
- }
- {
- name: 'Maximum Password Age;RemediateValue'
- value: '75'
- }
- ]
- }
- }
-}
-```
-
-## Assign a configuration using Terraform
-
-You can use
-[Terraform](https://www.terraform.io/)
-to
-[deploy](/azure/developer/terraform/get-started-windows-powershell)
-machine configuration assignments.
-
-> [!IMPORTANT]
-> The Terraform provider
-> [azurerm_policy_virtual_machine_configuration_assignment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_configuration_policy_assignment)
-> hasn't been updated to support the `assignmentType` property so only
-> configurations that perform audits are supported.
-
-The following example assigns a custom configuration.
-
-```Terraform
-resource "azurerm_virtual_machine_configuration_policy_assignment" "<configuration_name>" {
- name = "<configuration_name>"
- location = azurerm_windows_virtual_machine.example.location
- virtual_machine_id = azurerm_windows_virtual_machine.example.id
- configuration {
- name = "<configuration_name>"
-    contentUri     = "<Url_to_Package.zip>"
-    contentHash    = "<SHA256_hash_of_package.zip>"
-    version        = "1.*"
-    assignmentType = "ApplyAndMonitor"
- }
-}
-```
-
-The following example assigns the `AzureWindowsBaseline` built-in configuration.
-
-```Terraform
-resource "azurerm_virtual_machine_configuration_policy_assignment" "AzureWindowsBaseline" {
- name = "AzureWindowsBaseline"
- location = azurerm_windows_virtual_machine.example.location
- virtual_machine_id = azurerm_windows_virtual_machine.example.id
- configuration {
- name = "AzureWindowsBaseline"
- version = "1.*"
- parameter {
- name = "Minimum Password Length;ExpectedValue"
- value = "16"
- }
- parameter {
- name = "Minimum Password Length;RemediateValue"
- value = "16"
- }
- parameter {
- name = "Minimum Password Age;ExpectedValue"
- value = "75"
- }
- parameter {
- name = "Minimum Password Age;RemediateValue"
- value = "75"
- }
- }
-}
-```
-
-## Next steps
-
-- Read the [machine configuration overview](./overview.md).
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it is accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
governance Machine Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md
- Title: How to create custom machine configuration policy definitions
-description: Learn how to create a machine configuration policy.
Previously updated : 10/17/2022--
-# How to create custom machine configuration policy definitions
--
-Before you begin, it's a good idea to read the overview page for
-[machine configuration](./overview.md),
-and the details about machine configuration policy effects
-[How to configure remediation options for machine configuration](./machine-configuration-policy-effects.md).
-
-> [!IMPORTANT]
-> The machine configuration extension is required for Azure virtual machines. To
-> deploy the extension at scale across all machines, assign the following policy
-> initiative: `Deploy prerequisites to enable machine configuration policies on
-> virtual machines`
->
-> To use machine configuration packages that apply configurations, Azure VM guest
-> configuration extension version **1.29.24** or later,
-> or Arc agent **1.10.0** or later, is required.
->
-> Custom machine configuration policy definitions using either **AuditIfNotExists** or **DeployIfNotExists** are now
-> Generally Available.
-
-Use the following steps to create your own policies that audit compliance or
-manage the state of Azure or Arc-enabled machines.
-
-## Install PowerShell 7 and required PowerShell modules
-
-First, make sure you've followed all steps on the page
-[How to set up a machine configuration authoring environment](./machine-configuration-create-setup.md)
-to install the required version of PowerShell for your OS and the
-`GuestConfiguration` module.
-
-## Create and publish a machine configuration package artifact
-
-If you haven't already, follow all steps on the page
-[How to create custom machine configuration package artifacts](./machine-configuration-create.md)
-to create and publish a custom machine configuration package
-and
-[How to test machine configuration package artifacts](./machine-configuration-create-test.md) to validate the machine configuration package locally in your
-development environment.
-
-## Policy requirements for machine configuration
-
-The policy definition `metadata` section must include two properties for the
-machine configuration service to automate provisioning and reporting of guest
-configuration assignments. The `category` property must be set to "Guest
-Configuration" and a section named `guestConfiguration` must contain information
-about the machine configuration assignment. The `New-GuestConfigurationPolicy`
-cmdlet creates this text automatically.
-
-The following example demonstrates the `metadata` section that is automatically
-created by `New-GuestConfigurationPolicy`.
-
-```json
- "metadata": {
- "category": "Guest Configuration",
- "guestConfiguration": {
- "name": "test",
- "version": "1.0.0",
- "contentType": "Custom",
- "contentUri": "CUSTOM-URI-HERE",
- "contentHash": "CUSTOM-HASH-VALUE-HERE",
- "configurationParameter": {}
- }
- },
-```
-
-The `category` property must be set to "Guest Configuration". If the definition
-effect is set to "DeployIfNotExists", the `then` section must contain deployment
-details about a machine configuration assignment. The
-`New-GuestConfigurationPolicy` cmdlet creates this text automatically.
-
-### Create an Azure Policy definition
-
-Once a machine configuration custom policy package has been created and uploaded,
-create the machine configuration policy definition. The `New-GuestConfigurationPolicy`
-cmdlet takes a custom policy package and creates a policy definition.
-
-The **PolicyId** parameter of `New-GuestConfigurationPolicy` requires a unique
-string. A globally unique identifier (GUID) is required. For new definitions,
-generate a new GUID using the cmdlet `New-GUID`. When making updates to the
-definition, use the same unique string for **PolicyId** to ensure the correct
-definition is updated.
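For example, a minimal sketch of generating and keeping that value might be:

```powershell
# Generate a GUID once for a new definition and record it for all future updates
$policyId = [string](New-Guid)
$policyId
```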
-
-Parameters of the `New-GuestConfigurationPolicy` cmdlet:
-
-- **PolicyId**: A GUID.
-- **ContentUri**: Public HTTP(S) URI of the machine configuration content package.
-- **DisplayName**: Policy display name.
-- **Description**: Policy description.
-- **Parameter**: Policy parameters provided in hashtable format.
-- **PolicyVersion**: Policy version.
-- **Path**: Destination path where policy definitions are created.
-- **Platform**: Target platform (Windows/Linux) for machine configuration policy
-  and content package.
-- **Mode**: (ApplyAndMonitor, ApplyAndAutoCorrect, Audit) choose if the policy
-  should audit or deploy the configuration. Default is "Audit".
-- **Tag**: Adds one or more tag filters to the policy definition.
-- **Category**: Sets the category metadata field in the policy definition.
-
-For more information about the "Mode" parameter, see the page
-[How to configure remediation options for machine configuration](./machine-configuration-policy-effects.md).
-
-Create a policy definition that audits using a custom
-configuration package, in a specified path:
-
-```powershell
-$PolicyConfig = @{
- PolicyId = '_My GUID_'
- ContentUri = $contenturi
- DisplayName = 'My audit policy'
- Description = 'My audit policy'
- Path = './policies/auditIfNotExists.json'
- Platform = 'Windows'
- PolicyVersion = '1.0.0'
-}
-
-New-GuestConfigurationPolicy @PolicyConfig
-```
-
-Create a policy definition that deploys a configuration using a custom
-configuration package, in a specified path:
-
-```powershell
-$PolicyConfig2 = @{
- PolicyId = '_My GUID_'
- ContentUri = $contenturi
- DisplayName = 'My audit policy'
- Description = 'My audit policy'
- Path = './policies/deployIfNotExists.json'
- Platform = 'Windows'
- PolicyVersion = '1.0.0'
- Mode = 'ApplyAndAutoCorrect'
-}
-
-New-GuestConfigurationPolicy @PolicyConfig2
-```
-
-The cmdlet output returns an object containing the definition display name and
-path of the policy files. Definition JSON files that create audit policy definitions
-have the name **auditIfNotExists.json** and files that create policy definitions to
-apply configurations have the name **deployIfNotExists.json**.
-
-#### Filtering machine configuration policies using tags
-
-The policy definitions created by cmdlets in the **GuestConfiguration** module can optionally include
-a filter for tags. The **Tag** parameter of `New-GuestConfigurationPolicy` supports an array of
-hashtables containing individual tag entries. The tags are added to the `If` section of the policy
-definition and can't be modified by a policy assignment.
-
-An example snippet of a policy definition that filters for tags is given below.
-
-```json
-"if": {
- "allOf" : [
- {
- "allOf": [
- {
- "field": "tags.Owner",
- "equals": "BusinessUnit"
- },
- {
- "field": "tags.Role",
- "equals": "Web"
- }
- ]
- },
- {
- // Original machine configuration content
- }
- ]
-}
-```
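-
-The following is a minimal sketch of passing tag filters when creating the definition. The exact shape of each tag hashtable is an assumption based on the description above, not confirmed syntax:
-
-```powershell
-# Assumption: one hashtable per tag name/value pair, passed as an array to the Tag parameter
-$PolicyConfigTagged = @{
-    PolicyId      = (New-Guid).ToString()
-    ContentUri    = $contenturi
-    DisplayName   = 'My audit policy'
-    Description   = 'My audit policy'
-    Path          = './policies/auditIfNotExists.json'
-    Platform      = 'Windows'
-    PolicyVersion = '1.0.0'
-    Tag           = @( @{ Owner = 'BusinessUnit' }, @{ Role = 'Web' } )
-}
-New-GuestConfigurationPolicy @PolicyConfigTagged
-```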
-
-#### Using parameters in custom machine configuration policy definitions
-
-Machine configuration supports overriding properties of a Configuration at run time. This feature
-means that the values in the MOF file in the package don't have to be considered static. The
-override values are provided through Azure Policy and don't change how the Configurations are
-authored or compiled.
-
-The cmdlets `New-GuestConfigurationPolicy` and `Get-GuestConfigurationPackageComplianceStatus` include a
-parameter named **Parameter**. This parameter takes a hashtable definition including all details
-about each parameter and creates the required sections of each file used for the Azure Policy
-definition.
-
-The following example creates a policy definition to audit a service, where the user selects from a
-list at the time of policy assignment.
-
-```powershell
-# This DSC resource definition...
-Service 'UserSelectedNameExample'
- {
- Name = 'ParameterValue'
- Ensure = 'Present'
- State = 'Running'
- }
-
-# ...can be converted to a hash table:
-$PolicyParameterInfo = @(
- @{
- # Policy parameter name (mandatory)
- Name = 'ServiceName'
- # Policy parameter display name (mandatory)
- DisplayName = 'Windows service name.'
- # Policy parameter description (optional)
- Description = 'Name of the Windows service to be audited.'
- # DSC configuration resource type (mandatory)
- ResourceType = 'Service'
- # DSC configuration resource id (mandatory)
- ResourceId = 'UserSelectedNameExample'
- # DSC configuration resource property name (mandatory)
- ResourcePropertyName = 'Name'
- # Policy parameter default value (optional)
- DefaultValue = 'winrm'
- # Policy parameter allowed values (optional)
- AllowedValues = @('BDESVC','TermService','wuauserv','winrm')
- })
-
-# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet
-$PolicyParam = @{
- PolicyId = 'My GUID'
- ContentUri = $contenturi
- DisplayName = 'Audit Windows Service.'
- Description = "Audit if a Windows Service isn't enabled on Windows machine."
- Path = '.\policies\auditIfNotExists.json'
- Parameter = $PolicyParameterInfo
- PolicyVersion = '1.0.0'
-}
-
-New-GuestConfigurationPolicy @PolicyParam
-```
-
-### Publish the Azure Policy definition
-
-Finally, publish the policy definition by using the `New-AzPolicyDefinition` cmdlet. The following commands publish your machine configuration policy definition to Azure Policy.
-
-To run the `New-AzPolicyDefinition` command, you need access to create policy definitions in Azure. The specific authorization
-requirements are documented in the [Azure Policy Overview](./overview.md) page. The recommended built-in
-role is **Resource Policy Contributor**.
-
-```powershell
-New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\auditIfNotExists.json'
-```
-
-Or, if the definition is a deploy-if-not-exists (DINE) policy, use:
-
-```powershell
-New-AzPolicyDefinition -Name 'mypolicydefinition' -Policy '.\policies\deployIfNotExists.json'
-```
-
-With the policy definition created in Azure, the last step is to assign the definition. See how to assign the
-definition with [Portal](../policy/assign-policy-portal.md), [Azure CLI](../policy/assign-policy-azurecli.md), and
-[Azure PowerShell](../policy/assign-policy-powershell.md).
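-
-For example, you could assign the definition with Azure PowerShell. The following is a minimal sketch with placeholder names and scope; a deploy-if-not-exists assignment also needs a managed identity and a remediation task, as described in the linked articles.
-
-```powershell
-# Hypothetical assignment of the custom definition at resource group scope
-$definition = Get-AzPolicyDefinition -Name 'mypolicydefinition'
-New-AzPolicyAssignment -Name 'my-mc-assignment' `
-    -PolicyDefinition $definition `
-    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
-```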
-
-## Policy lifecycle
-
-If you would like to release an update to the policy definition, make the change for both the guest
-configuration package and the Azure Policy definition details.
-
-> [!NOTE]
-> The `version` property of the machine configuration assignment only affects packages that
-> are hosted by Microsoft. The best practice for versioning custom content is to include
-> the version in the file name.
-
-First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
-unique from previous versions. You can include a version number in the name such as
-`PackageName_1.0.0`. The number in this example is only used to make the package unique, not to
-specify that the package should be considered newer or older than other packages.
-
-Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet as follows.
-
-- **PolicyVersion**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version
- number greater than what is currently published.
-- **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI
- to the location of the package. Including a package version in the file name will ensure the value
- of this property changes in each release.
-- **contentHash**: This property is updated automatically by the `New-GuestConfigurationPolicy`
- cmdlet. It's a hash value of the package created by `New-GuestConfigurationPackage`. The property
- must be correct for the `.zip` file you publish. If only the **contentUri** property is updated,
- the Extension won't accept the content package.
-
-The easiest way to release an updated package is to repeat the process described in this article and
-provide an updated version number. That process guarantees all properties have been correctly
-updated.
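-
-The following is a minimal sketch of an update release, assuming placeholder values for the existing definition GUID and the storage URI of the new package:
-
-```powershell
-# Create a package whose name (and therefore file name) includes the new version
-New-GuestConfigurationPackage -Name 'MyConfig_1.0.1' -Configuration './Config/MyConfig.mof' -Type Audit -Force
-
-# Recreate the definition with the same PolicyId, the new ContentUri, and a higher PolicyVersion
-$UpdatedPolicy = @{
-    PolicyId      = '<existing definition GUID>'
-    ContentUri    = '<URI of the uploaded MyConfig_1.0.1.zip>'
-    DisplayName   = 'My audit policy'
-    Description   = 'My audit policy'
-    Path          = './policies/auditIfNotExists.json'
-    Platform      = 'Windows'
-    PolicyVersion = '1.0.1'
-}
-New-GuestConfigurationPolicy @UpdatedPolicy
-```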
-
-## Next steps
--- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md#compliance-details) policy assignments.
governance Machine Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-publish.md
- Title: How to publish custom machine configuration package artifacts
-description: Learn how to publish a machine configuration package file to Azure blob storage and get a SAS token for secure access.
Previously updated : 07/25/2022---
-# How to publish custom machine configuration package artifacts
--
-Before you begin, it's a good idea to read the overview page for
-[machine configuration](./overview.md).
-
-Machine configuration custom .zip packages must be stored in a location that is
-accessible via HTTPS by the managed machines. Examples include GitHub
-repositories, an Azure Repo, Azure storage, or a web server within your private
-datacenter.
-
-Configuration packages that support `Audit` and `AuditandSet` are published the
-same way. There isn't a need to do anything special during publishing based on
-the package mode.
-
-## Publish a configuration package
-
-The preferred location to store a configuration package is Azure Blob Storage.
-There are no special requirements for the storage account, but it's a good idea
-to host the file in a region near your machines. If you prefer to not make the
-package public, you can include a
-[SAS token](../../storage/common/storage-sas-overview.md)
-in the URL or implement a
-[service endpoint](../../storage/common/storage-network-security.md#grant-access-from-a-virtual-network)
-for machines in a private network.
-
-If you don't have a storage account, use the following example to create one.
-
-```powershell
-# Creates a new resource group, storage account, and container
-New-AzResourceGroup -name myResourceGroupName -Location WestUS
-New-AzStorageAccount -ResourceGroupName myResourceGroupName -Name mystorageaccount -SkuName 'Standard_LRS' -Location 'WestUs' | New-AzStorageContainer -Name guestconfiguration -Permission Blob
-```
-
-To publish your configuration package to Azure blob storage, follow these steps, which use the Az.Storage module.
-
-First, obtain the context of the storage account where the package is stored. This example creates a context by specifying a connection string and saves the context in the variable `$Context`.
-
-```powershell
-$Context = New-AzStorageContext -ConnectionString "DefaultEndpointsProtocol=https;AccountName=ContosoGeneral;AccountKey=< Storage Key for ContosoGeneral ends with == >;"
-```
-
-Next, add the configuration package to the storage account. This example uploads the zip file `./MyConfig.zip` to the container "guestconfiguration".
-
-```powershell
-Set-AzStorageBlobContent -Container "guestconfiguration" -File ./MyConfig.zip -Context $Context
-```
-
-Optionally, you can add a SAS token to the URL to ensure the content package is accessed securely. The following example generates a blob SAS token with read access and returns the full blob URI, including the shared access signature token, with a time limit of three years.
-
-```powershell
-$StartTime = Get-Date
-$EndTime = $StartTime.AddYears(3)
-$contenturi = New-AzStorageBlobSASToken -StartTime $StartTime -ExpiryTime $EndTime -Container "guestconfiguration" -Blob "MyConfig.zip" -Permission r -Context $Context -FullUri
-```
-
-## Next steps
--- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
governance Machine Configuration Create Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-setup.md
- Title: How to install the machine configuration authoring module
-description: Learn how to install the PowerShell module for creating and testing machine configuration policy definitions and assignments.
Previously updated : 01/13/2023--
-# How to set up a machine configuration authoring environment
--
-The PowerShell module `GuestConfiguration` automates the process of creating
-custom content including:
-
-- Creating a machine configuration content artifact (.zip)
-- Validating the package meets requirements
-- Installing the machine configuration agent locally for testing
-- Validating the package can be used to audit settings in a machine
-- Validating the package can be used to configure settings in a machine
-- Publishing the package to Azure storage
-- Creating a policy definition
-- Publishing the policy
-
-Support for applying configurations through machine configuration
-is introduced in version `3.4.2`.
-
-### Base requirements
-
-Operating systems where the module can be installed:
-
-- Ubuntu 18
-- Windows
-
-The module can be installed on a machine running PowerShell 7.x. Install the
-versions of PowerShell listed below.
-
-| OS | PowerShell Version |
-|-|-|
-|Windows|[PowerShell 7.1.3](https://github.com/PowerShell/PowerShell/releases/tag/v7.1.3)|
-|Ubuntu 18|[PowerShell 7.2.4](https://github.com/PowerShell/PowerShell/releases/tag/v7.2.4)|
-
-The `GuestConfiguration` module requires the following software:
--- Azure PowerShell 5.9.0 or higher. The required Az modules are installed
- automatically with the `GuestConfiguration` module, or you can follow
- [these instructions](/powershell/azure/install-az-ps).
--
-### Install the module from the PowerShell Gallery
-
-To install the `GuestConfiguration` module on either Windows or Linux, run the
-following command in PowerShell 7.
-
-```powershell
-# Install the machine configuration DSC resource module from PowerShell Gallery
-Install-Module -Name GuestConfiguration
-```
-
-Validate that the module has been imported:
-
-```powershell
-# Get a list of commands for the imported GuestConfiguration module
-Get-Command -Module 'GuestConfiguration'
-```
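-
-You can also confirm that the installed version supports applying configurations (version `3.4.2` or later). A minimal sketch:
-
-```powershell
-# List the installed GuestConfiguration module versions
-Get-Module -ListAvailable -Name GuestConfiguration | Select-Object Name, Version
-```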
-
-## Next steps
--- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
governance Machine Configuration Create Signing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-signing.md
- Title: How to sign machine configuration packages
-description: You can optionally sign machine configuration content packages and force the agent to only allow signed content
Previously updated : 07/25/2022---
-# How to sign machine configuration packages
--
-Machine configuration custom policies use a SHA256 hash to validate that the policy
-package hasn't changed. Optionally, customers may also use a certificate to sign
-packages and force the machine configuration extension to only allow signed
-content.
-
-To enable this scenario, there are two steps you need to complete. Run the
-cmdlet to sign the content package, and append a tag to the machines that should
-require code to be signed.
-
-## Signature validation using a code signing certificate
-
-To use the Signature Validation feature, run the
-`Protect-GuestConfigurationPackage` cmdlet to sign the package before it's
-published. This cmdlet requires a 'Code Signing' certificate. If you don't have one, use the following script to create a self-signed certificate for testing purposes so you can follow along with the example.
-
-## Windows signature validation
-
-```azurepowershell-interactive
-# How to create a self-signed certificate and use it to sign a machine configuration custom policy package
-
-# Create Code signing cert
-$mycert = New-SelfSignedCertificate -Type CodeSigningCert -DnsName 'GCEncryptionCertificate' -HashAlgorithm SHA256
-
-# Export the certificates
-$mypwd = ConvertTo-SecureString -String "Password1234" -Force -AsPlainText
-$mycert | Export-PfxCertificate -FilePath C:\demo\GCPrivateKey.pfx -Password $mypwd
-$mycert | Export-Certificate -FilePath "C:\demo\GCPublicKey.cer" -Force
-
-# Import the certificate
-Import-PfxCertificate -FilePath C:\demo\GCPrivateKey.pfx -Password $mypwd -CertStoreLocation 'Cert:\LocalMachine\My'
--
-# Sign the policy package
-$certToSignThePackage = Get-ChildItem -Path cert:\LocalMachine\My | Where-Object {($_.Subject -eq "CN=GCEncryptionCertificate") }
-Protect-GuestConfigurationPackage -Path C:\demo\AuditWindowsService.zip -Certificate $certToSignThePackage -Verbose
-```
-
-## Linux signature validation
-
-```bash
-# generate gpg key
-gpg --gen-key
-
-# export public key
-gpg --output public.gpg --export <email-id used to generate gpg key>
-# export private key
-gpg --output private.gpg --export-secret-key <email-id used to generate gpg key>
-
-# Sign linux policy package
-Import-Module GuestConfiguration
-Protect-GuestConfigurationPackage -Path ./not_installed_application_linux.zip -PrivateGpgKeyPath ./private.gpg -PublicGpgKeyPath ./public.gpg -Verbose
-```
-
-Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
-
-- **Path**: Full path of the machine configuration package.
-- **Certificate**: Code signing certificate to sign the package. This parameter is only supported
-  when signing content for Windows.
-
-## Certificate requirements
-
-The GuestConfiguration agent expects the certificate public key to be present in
-"Trusted Root Certificate Authorities" on Windows machines and in the path
-`/usr/local/share/ca-certificates/gc` on Linux machines. For the node to
-verify signed content, install the certificate public key on the machine before
-applying the custom policy. This process can be done using any technique inside
-the VM or by using Azure Policy. An example template is available
-[to deploy a machine with a certificate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-push-certificate-windows).
-The Key Vault access policy must allow the Compute resource provider to access
-certificates during deployments. For detailed steps, see
-[Set up Key Vault for virtual machines in Azure Resource Manager](../../virtual-machines/windows/key-vault-setup.md#use-templates-to-set-up-key-vault).
-
-The following example exports the public key from a signing certificate so you can
-import it to the machine.
-
-```azurepowershell-interactive
-$Cert = Get-ChildItem -Path cert:\LocalMachine\My | Where-Object {($_.Subject -eq "CN=mycert3") } | Select-Object -First 1
-$Cert | Export-Certificate -FilePath "$env:temp\DscPublicKey.cer" -Force
-```
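-
-On a target Windows machine, you could then import the exported public key into the trusted root store. The following is a minimal sketch, assuming the `.cer` file has already been copied to the machine:
-
-```powershell
-# Import the exported public key into Trusted Root Certification Authorities
-Import-Certificate -FilePath "$env:temp\DscPublicKey.cer" -CertStoreLocation 'Cert:\LocalMachine\Root'
-```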
-
-## Tag requirements
-
-After your content is published, append a tag with name
-`GuestConfigPolicyCertificateValidation` and value `enabled` to all virtual
-machines where code signing should be required. See the
-[Tag samples](../policy/samples/built-in-policies.md#tags) for how tags can be
-delivered at scale using Azure Policy. Once this tag is in place, the policy
-definition generated using the `New-GuestConfigurationPolicy` cmdlet enables the
-requirement through the machine configuration extension.
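-
-For a single machine, you could append the tag with Azure PowerShell. The following is a minimal sketch with placeholder resource group and VM names:
-
-```powershell
-# Merge the tag onto an existing VM without replacing its other tags
-$vm = Get-AzVM -ResourceGroupName '<resource-group>' -Name '<vm-name>'
-Update-AzTag -ResourceId $vm.Id -Tag @{ GuestConfigPolicyCertificateValidation = 'enabled' } -Operation Merge
-```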
-
-## Next steps
--- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it is accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Machine Configuration Create Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-test.md
- Title: How to test machine configuration package artifacts
-description: The experience creating and testing packages that audit or apply configurations to machines.
Previously updated : 07/25/2022--
-# How to test machine configuration package artifacts
--
-The PowerShell module `GuestConfiguration` includes tools to automate
-testing a configuration package outside of Azure. Use these tools to find issues
-and iterate quickly before moving on to test in an Azure or Arc connected
-environment.
-
-Before you can begin testing, follow all steps in the page
-[How to set up a machine configuration authoring environment](./machine-configuration-create-setup.md)
-and then
-[How to create custom machine configuration package artifacts](./machine-configuration-create.md)
-to create and publish a custom machine configuration package.
-
-> [!IMPORTANT]
-> Custom packages that audit the state of an environment are Generally Available,
-> but packages that apply configurations are **in preview**. **The following limitations apply:**
->
-> To use machine configuration packages that apply configurations, Azure VM guest
-> configuration extension version **1.29.24** or later,
-> or Arc agent **1.10.0** or later, is required.
->
-> To test creating and applying configurations on Linux, the
-> `GuestConfiguration` module is only available on Ubuntu 18 but the package
-> and policies produced by the module can be used on any Linux distro/version
-> supported in Azure or Arc.
->
-> Testing packages on MacOS is not available.
-
-You can test the package from your workstation or continuous integration and
-continuous deployment (CI/CD) environment. The `GuestConfiguration` module
-includes the same agent for your development environment as is used inside Azure
-or Arc enabled machines. The agent includes a stand-alone instance of PowerShell
-7.1.3 for Windows and 7.2.0-preview.7 for Linux, so the script environment where
-the package is tested will be consistent with machines you manage using guest
-configuration.
-
-The agent service in Azure and Arc-enabled machines runs as the
-"LocalSystem" account in Windows and "Root" in Linux. Run the following commands in a
-privileged security context for best results.
-
-To run PowerShell as "LocalSystem" in Windows, use the SysInternals tool
-[PSExec](/sysinternals/downloads/psexec).
-
-To run PowerShell as "Root" in Linux, use the
-[sudo command](https://www.sudo.ws/docs/man/sudo.man/).
-
-## Validate the configuration package meets requirements
-
-First test that the configuration package meets basic requirements using
-`Get-GuestConfigurationPackageComplianceStatus`. The command verifies the
-following package requirements.
-
-- MOF is present and valid, at the right location
-- Required modules/dependencies are present with the right version, without
-  duplicates
-- Validate the package is signed (optional)
-- Test that `Test` and `Get` return information about the compliance status
-
-Parameters of the `Get-GuestConfigurationPackageComplianceStatus` cmdlet:
-
-- **Path**: File path or URI of the machine configuration package.
-- **Parameter**: Policy parameters provided in hashtable format.
-
-When this command is run for the first time, the machine configuration agent gets
-installed on the test machine at the path `c:\programdata\GuestConfig\bin` on
-Windows and `/var/lib/GuestConfig/bin` on Linux. This path isn't accessible to
-a user account so the command requires elevation.
-
-Run the following command to test the package:
-
-In Windows, from an elevated PowerShell 7 session.
-
-```powershell
-# Get the current compliance results for the local machine
-Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip
-```
-
-In Linux, by running PowerShell using sudo.
-
-```bash
-# Get the current compliance results for the local machine
-sudo pwsh -command 'Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip'
-```
-
-The command outputs an object containing the compliance status and details
-per resource.
-
-```powershell
-complianceStatus resources
----------------- ---------
-True             @{BuiltInAccount=localSystem; ConfigurationName=MyConfig; Credential=; Dependencies=System.Obje…
-```
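-
-If the package exposes parameters, you can pass values during testing with the **Parameter** argument. The following is a minimal sketch that reuses the hypothetical service example from the policy definition article:
-
-```powershell
-# Provide a value for a package parameter while auditing the local machine
-$testParameters = @(
-    @{
-        ResourceType          = 'Service'
-        ResourceId            = 'UserSelectedNameExample'
-        ResourcePropertyName  = 'Name'
-        ResourcePropertyValue = 'winrm'
-    }
-)
-Get-GuestConfigurationPackageComplianceStatus -Path ./MyConfig.zip -Parameter $testParameters
-```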
-
-## Test the configuration package can apply a configuration
-
-Finally, if the configuration package mode is `AuditandSet` you can test that
-the `Set` method can apply settings to a local machine using the command
-`Start-GuestConfigurationPackageRemediation`.
-
-> [!IMPORTANT]
-> This command attempts to make changes in the local environment where
-> it's run.
-
-Parameters of the `Start-GuestConfigurationPackageRemediation` cmdlet:
--- **Path**: Full path of the machine configuration package.-
-In Windows, from an elevated PowerShell 7 session.
-
-```powershell
-# Test applying the configuration to local machine
-Start-GuestConfigurationPackageRemediation -Path ./MyConfig.zip
-```
-
-In Linux, by running PowerShell using sudo.
-
-```bash
-# Test applying the configuration to local machine
-sudo pwsh -command 'Start-GuestConfigurationPackageRemediation -Path ./MyConfig.zip'
-```
-
-The command won't return output unless errors occur. To troubleshoot details
-about events occurring during `Set`, use the `-verbose` parameter.
-
-After running the command `Start-GuestConfigurationPackageRemediation`, you can
-run the command `Get-GuestConfigurationPackageComplianceStatus` again to confirm the
-machine is now in the correct state.
-
-## Next steps
--- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it is accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Machine Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create.md
- Title: How to create custom machine configuration package artifacts
-description: Learn how to create a machine configuration package file.
Previously updated : 02/14/2023--
-# How to create custom machine configuration package artifacts
--
-Before you begin, it's a good idea to read the overview page for
-[machine configuration](./overview.md).
-
-For auditing and configuring both Windows and Linux, machine configuration uses a
-[Desired State Configuration](/powershell/dsc/overview)
-(DSC) configuration. The DSC configuration defines the condition that the machine should
-be in.
-
-> [!IMPORTANT]
-> Custom packages that audit the state of an environment and apply
-> configurations are generally available (GA). However, the following
-> limitations apply:
->
-> To use machine configuration packages that apply configurations, Azure VM guest
-> configuration extension version **1.29.24** or later,
-> or Arc agent **1.10.0** or later, is required.
->
-> To test creating and applying configurations on Linux, the
-> `GuestConfiguration` module is only available on Ubuntu 18 but the package
-> and policies produced by the module can be used on any Linux distribution
-> and version
-> supported in Azure or Arc.
->
-> Testing packages on macOS is not available.
->
-> Don't use secrets or confidential information in custom content packages.
-
-Use the following steps to create your own configuration for managing the
-state of an Azure or non-Azure machine.
-
-## Install PowerShell 7 and required PowerShell modules
-
-First, make sure you've followed all steps on the page
-[How to set up a machine configuration authoring environment](./machine-configuration-create-setup.md)
-to install the required version of PowerShell for your OS, the
-`GuestConfiguration` module, and if needed, the module
-`PSDesiredStateConfiguration`.
-
-## Author a configuration
-
-Before creating a configuration package, author and compile a DSC configuration.
-If needed, example configurations are available for Windows and Linux.
-
-> [!IMPORTANT]
-> When compiling configurations for Windows, use `PSDesiredStateConfiguration`
-> version `2.0.5` (the stable release). When compiling configurations for Linux
-> install the prerelease version `3.0.0`.
-
-An example is provided in the DSC
-[Getting started document](/powershell/dsc/getting-started/wingettingstarted#define-a-configuration-and-generate-the-configuration-document)
-for Windows.
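-
-The following is a minimal sketch of a Windows configuration that audits a service, assuming the `PSDscResources` module is installed. Compiling produces `localhost.mof`, which you can rename to match your configuration name:
-
-```powershell
-# Define and compile a simple DSC configuration that checks the WinRM service
-Configuration MyConfig {
-    Import-DscResource -ModuleName 'PSDscResources'
-
-    Node localhost {
-        Service 'winrm' {
-            Name   = 'winrm'
-            Ensure = 'Present'
-            State  = 'Running'
-        }
-    }
-}
-
-MyConfig -OutputPath ./Config
-Rename-Item -Path ./Config/localhost.mof -NewName MyConfig.mof
-```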
-
-For Linux, you'll need to create a custom DSC resource module using
-[PowerShell classes](/powershell/dsc/resources/authoringResourceClass).
-A full example of a custom resource and configuration is available
-(and has been tested with machine configuration) in the PowerShell docs page
-[Writing a custom DSC resource with PowerShell classes](/powershell/dsc/resources/authoringResourceClass).
-
-## Create a configuration package artifact
-
-Once the MOF is compiled, the supporting files must be packaged together.
-The completed package is used by machine configuration to create the Azure Policy
-definitions.
-
-The `New-GuestConfigurationPackage` cmdlet creates the package. Modules that are
-needed by the configuration must be available in `$Env:PSModulePath` for the
-development environment so the commands in the module can add them to the
-package.
-
-Parameters of the `New-GuestConfigurationPackage` cmdlet when creating Windows
-content:
-
-- **Name**: machine configuration package name.
-- **Configuration**: Compiled DSC configuration document full path.
-- **Path**: Output folder path. This parameter is optional. If not specified,
-  the package is created in the current directory.
-- **Type**: (Audit, AuditandSet) Determines whether the configuration should
-  only audit or if the configuration should be applied and change the state of
-  the machine. The default is "Audit".
-
-This step doesn't require elevation. The **Force** parameter is used to overwrite
-existing packages if you run the command more than once.
-
-The following commands create package artifacts:
-
-```powershell
-# Create a package that will only audit compliance
-New-GuestConfigurationPackage `
- -Name 'MyConfig' `
- -Configuration './Config/MyConfig.mof' `
- -Type Audit `
- -Force
-```
-
-```powershell
-# Create a package that will audit and apply the configuration (Set)
-New-GuestConfigurationPackage `
- -Name 'MyConfig' `
- -Configuration './Config/MyConfig.mof' `
- -Type AuditAndSet `
- -Force
-```
-
-An object is returned with the Name and Path of the created package.
-
-```
-Name Path
-- -
-MyConfig /Users/.../MyConfig/MyConfig.zip
-```
-
-### Expected contents of a machine configuration artifact
-
-The completed package is used by machine configuration to create the Azure Policy
-definitions. The package consists of:
-
-- The compiled DSC configuration as a MOF
-- Modules folder
-  - GuestConfiguration module
-  - DscNativeResources module
-  - DSC resource modules required by the MOF
-- A metaconfig file that stores the package `type` and `version`
-
-The PowerShell cmdlet creates the package .zip file. No root level folder or
-version folder is required. The package format must be a .zip file and can't
-exceed a total size of 100 MB when uncompressed.
-
-## Extending machine configuration with third-party tools
-
-The artifact packages for machine configuration can be extended to include
-third-party tools. Extending machine configuration requires development of two
-components.
-
-- A Desired State Configuration resource that handles all activity related to
-  managing the third-party tool
-  - Install
-  - Invoke
-  - Convert output
-- Content in the correct format for the tool to natively consume
-
-The DSC resource requires custom development if a community solution doesn't
-already exist. Community solutions can be discovered by searching the PowerShell Gallery for tag
-[GuestConfiguration](https://www.powershellgallery.com/packages?q=Tags%3A%22GuestConfiguration%22).
-
-> [!NOTE]
-> Machine configuration extensibility is a "bring your own
-> license" scenario. Ensure you have met the terms and conditions of any third-party
-> tools before use.
-
-After the DSC resource has been installed in the development environment, use
-the **FilesToInclude** parameter for `New-GuestConfigurationPackage` to include
-content for the third-party platform in the content artifact.
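-
-The following is a minimal sketch of including third-party content, assuming a hypothetical `./ThirdPartyContent` folder. The exact value accepted by **FilesToInclude** depends on the tool you're packaging:
-
-```powershell
-# Package the configuration together with extra files for the third-party tool
-New-GuestConfigurationPackage `
-    -Name 'MyConfig' `
-    -Configuration './Config/MyConfig.mof' `
-    -Type AuditAndSet `
-    -FilesToInclude './ThirdPartyContent' `
-    -Force
-```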
-
-## Next steps
--- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it is accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-
governance Machine Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-custom.md
- Title: Changes to behavior in PowerShell Desired State Configuration for machine configuration
-description: This article describes the platform used to deliver configuration changes to machines through Azure Policy.
Previously updated : 07/15/2022---
-# Changes to behavior in PowerShell Desired State Configuration for machine configuration
--
-Before you begin, it's a good idea to read the overview of
-[machine configuration](./overview.md).
-
-[A video walk-through of this document is available](https://youtu.be/nYd55FiKpgs).
-
-Machine configuration uses
-[Desired State Configuration (DSC)](/powershell/dsc/overview)
-version 3 to audit and configure machines. The DSC configuration defines the
-state that the machine should be in. There are many notable differences in how
-DSC is implemented in machine configuration.
-
-## Machine configuration uses PowerShell 7 cross platform
-
-Machine configuration is designed so the experience of managing Windows and Linux
-can be consistent. Across both operating system environments, someone with
-PowerShell DSC knowledge can create and publish configurations using scripting
-skills.
-
-Machine configuration only uses PowerShell DSC version 3 and doesn't rely on the
-previous implementation of
-[DSC for Linux](https://github.com/Microsoft/PowerShell-DSC-for-Linux)
-or the "nx" providers included in that repository.
-
-As of version 1.29.33, machine configuration operates in PowerShell 7.1.2 for Windows and PowerShell 7.2
-preview 6 for Linux. Starting with version 7.2, the `PSDesiredStateConfiguration`
-module moved from being part of the PowerShell installation and is instead
-installed as a
-[module from the PowerShell Gallery](https://www.powershellgallery.com/packages/PSDesiredStateConfiguration).
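-
-If you author configurations on a workstation, you can install the module from the gallery like any other module. A minimal sketch, pinning the stable version recommended for authoring Windows content elsewhere in this documentation:
-
-```powershell
-# Install the PSDesiredStateConfiguration module used to author and compile configurations
-Install-Module -Name PSDesiredStateConfiguration -RequiredVersion 2.0.5
-```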
-
-## Multiple configurations
-
-Machine configuration supports assigning multiple configurations to
-the same machine. There are no special steps required within the
-operating system or the machine configuration extension. There's no need to configure
-[partial configurations](/powershell/dsc/pull-server/partialConfigs).
-
-## Dependencies are managed per-configuration
-
-When a configuration is
-[packaged using the available tools](./machine-configuration-create.md),
-the required dependencies for the configuration are included in a .zip file.
-Machines extract the contents into a unique folder for each configuration.
-The agent delivered by the machine configuration extension creates a dedicated
-PowerShell session for each configuration, using a `$Env:PSModulePath` that
-limits automatic module loading to only the path where the package was
-extracted.
-
-Multiple benefits result from this change.
--- It's possible to use different module versions for each configuration, on
- the same machine.
-- When a configuration is no longer needed on a machine, the entire folder
- where it was extracted is safely deleted by the agent without the need to
- manage shared dependencies across configurations.
-- It's not required to manage multiple versions of any module in a central
- service.
-
-## Artifacts are managed as packages
-
-The Azure Automation State Configuration feature includes artifact management
-for modules and configuration scripts. Once both are published to the service,
-the script can be compiled to MOF format. Similarly, Windows Pull Server also required
-managing configurations and modules at the web service instance. By contrast, the
-DSC extension has a simplified model where all artifacts are packaged together
-and stored in a location accessible from the target machine using an HTTPS request
-(Azure Blob Storage is the popular option).
-
-Machine configuration only uses the simplified model where all artifacts
-are packaged together and accessed from the target machine over HTTPS.
-There's no need to publish modules, scripts, or compile in the service. One
-change is that the package should always include a compiled MOF. It is
-not possible to include a script file in the package and compile
-on the target machine.
-
-## Maximum size of custom configuration package
-
-In Azure Automation state configuration, DSC configurations were
-[limited in size](../../automation/automation-dsc-compile.md#compile-your-dsc-configuration-in-windows-powershell).
-Machine configuration supports a total package size of 100 MB (before
-compression). There's no specific limit on the size of the MOF file within
-the package.
-
-## Configuration mode is set in the package artifact
-
-When creating the configuration package, the mode is set using the following
-options:
-
-- _Audit_: Verifies the compliance of a machine. No changes are made.
-- _AuditandSet_: Verifies and remediates the compliance state of the machine.
-  Changes are made if the machine isn't compliant.
-
-The mode is set in the package rather than in the
-[Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig#basic-settings)
-service because it can be different per configuration, when multiple
-configurations are assigned.
-
-## Parameter support through Azure Resource Manager
-
-Parameters set by the `configurationParameter` property array in
-[machine configuration assignments](machine-configuration-assignments.md)
-overwrite the static text within a configuration MOF file when the file is
-stored on a machine. Parameters allow for customization and changes to be controlled
-by an operator from the service API without needing to run commands within
-the machine.
-
-Parameters in Azure Policy that pass values to machine configuration
-assignments must be _string_ type. It isn't possible to pass arrays through
-parameters, even if the DSC resource supports arrays.
-
-## Trigger Set from outside machine
-
-A challenge in previous versions of DSC has been correcting drift at scale
-without much custom code and reliance on WinRM remote connections. Guest
-configuration solves this problem. Users of machine configuration have control
-over drift correction through
-[Remediation On Demand](./machine-configuration-policy-effects.md#remediation-on-demand-applyandmonitor).
-
-## Sequence includes Get method
-
-When machine configuration audits or configures a machine, the same
-sequence of events is used for both Windows and Linux. The notable change in
-behavior is the `Get` method is called by the service to return details about
-the state of the machine.
-
-1. The agent first runs `Test` to determine whether the configuration is in the
- correct state.
-1. If the package is set to `Audit`, the Boolean value returned by the function
- determines
- if the Azure Resource Manager status for the Guest Assignment should be
- Compliant/Not-Compliant.
-1. If the package is set to `AuditandSet`, the Boolean value determines whether
- to remediate the machine by applying the configuration using the `Set` method.
- If the `Test` method returns False, `Set` is run. If `Test` returns True, then
- `Set` isn't run.
-1. Last, the provider runs `Get` to return the current state of each setting so
- details are available both about why a machine isn't compliant and to confirm
- that the current state is compliant.
-
-## Special requirements for Get
-
-The function `Get` method has special requirements for Azure Policy guest
-configuration that haven't been needed for Windows PowerShell Desired State
-Configuration.
-
-- The hashtable that is returned should include a property named **Reasons**.
-- The Reasons property must be an array.
-- Each item in the array should be a hashtable with keys named **Code** and
-  **Phrase**.
-- No values other than the hashtable should be returned.
-
-The Reasons property is used by the service to standardize how compliance
-information is presented. You can think of each item in Reasons as a "reason"
-that the resource is or isn't compliant. The property is an array because a
-resource could be out of compliance for more than one reason.
-
-The properties **Code** and **Phrase** are expected by the service. When
-authoring a custom resource, set the text (typically stdout) you would like to
-show as the reason the resource isn't compliant as the value for **Phrase**.
-**Code** has specific formatting requirements so reporting can clearly display
-information about the resource used to do the audit. This solution makes guest
-configuration extensible. Any command could be run as long as the output can be
-returned as a string value for the **Phrase** property.
--- **Code** (string): The name of the resource, repeated, and then a short name
- with no spaces as an identifier for the reason. These three values should be
- colon-delimited with no spaces.
- - An example would be `registry:registry:keynotpresent`
-- **Phrase** (string): Human-readable text to explain why the setting isn't
- compliant.
- - An example would be `The registry key $key isn't present on the machine.`
-
-```powershell
-$reasons = @()
-$reasons += @{
- Code = 'Name:Name:ReasonIdentifier'
- Phrase = 'Explain why the setting is not compliant'
-}
-return @{
- reasons = $reasons
-}
-```
-
-When using command-line tools to get information that's returned by `Get`, you
-might find the tool returns output you didn't expect. Even though you capture
-the output in PowerShell, output might also have been written to
-standard error. To avoid this issue, consider redirecting
-output to null.
-
-### The Reasons property embedded class
-
-In script-based resources (Windows only), the Reasons class is included in the
-schema MOF file as follows.
-
-```mof
-[ClassVersion("1.0.0.0")]
-class Reason
-{
- [Read] String Phrase;
- [Read] String Code;
-};
-
-[ClassVersion("1.0.0.0"), FriendlyName("ResourceName")]
-class ResourceName : OMI_BaseResource
-{
- [Key, Description("Example description")] String Example;
- [Read, EmbeddedInstance("Reason")] String Reasons[];
-};
-```
-
-In class-based resources (Windows and Linux), the `Reason` class is included in
-the PowerShell module as follows. Linux is case-sensitive, so the "C" in Code
-and "P" in Phrase must be capitalized.
-
-```powershell
-enum ensure {
- Absent
- Present
-}
-
-class Reason {
- [DscProperty()]
- [string] $Code
-
- [DscProperty()]
- [string] $Phrase
-}
-
-[DscResource()]
-class Example {
-
- [DscProperty(Key)]
- [ensure] $ensure
-
- [DscProperty()]
- [Reason[]] $Reasons
-
- [Example] Get() {
- # return the current state
- }
-
- [void] Set() {
- # set the state
- }
-
- [bool] Test() {
- # check whether state is correct
- }
-}
-
-```
-
-If the resource has required properties, those properties should also be
-returned by `Get` in parallel with the `Reason` class. If `Reason` isn't
-included, the service includes a "catch-all" behavior that compares the values
-input to `Get` and the values returned by `Get`, and provides a detailed
-comparison as `Reason`.
-
-## Configuration names
-
-The name of the custom configuration must be consistent everywhere. The name of
-the `.zip` file for the content package, the configuration name in the MOF file,
-and the guest assignment name in the Azure Resource Manager template, must be
-the same.
-
-## Running commands in Windows PowerShell
-
-You can run Windows PowerShell modules by using the following pattern in your DSC resources. The pattern temporarily sets `PSModulePath` so that Windows PowerShell, rather than PowerShell Core, discovers the required modules. This sample is a snippet from the DSC resource used in the [Secure Web Server](https://github.com/Azure/azure-policy/blob/master/samples/GuestConfiguration/package-samples/resource-modules/SecureProtocolWebServer/DSCResources/SecureWebServer/SecureWebServer.psm1#L253) built-in DSC resource.
-
-The pattern temporarily sets the PowerShell execution path to run from full Windows PowerShell and discovers the required cmdlet, which in this case is `Get-WindowsFeature`. The output of the command is returned and then standardized for compatibility requirements. After the cmdlet runs, `PSModulePath` is set back to the original path.
-
-```powershell
-
- # This command needs to be run through full PowerShell rather than through PowerShell Core which is what the Policy engine runs
- $null = Invoke-Command -ScriptBlock {
- param ($fileName)
- $fullPowerShellExePath = "$env:SystemRoot\System32\WindowsPowershell\v1.0\powershell.exe"
- $oldPSModulePath = $env:PSModulePath
- try
- {
- # Set env variable to full powershell module path so that powershell can discover Get-WindowsFeature cmdlet.
- $env:PSModulePath = "$env:SystemRoot\System32\WindowsPowershell\v1.0\Modules"
- &$fullPowerShellExePath -command "if (Get-Command 'Get-WindowsFeature' -errorAction SilentlyContinue){Get-WindowsFeature -Name Web-Server | ConvertTo-Json | Out-File $fileName} else { Add-Content -Path $fileName -Value 'NotServer'}"
- }
- finally
- {
- $env:PSModulePath = $oldPSModulePath
- }
- }
-
-```
-
-## Common DSC features not available during machine configuration public preview
-
-During public preview, machine configuration doesn't support
-[specifying cross-machine dependencies](/powershell/dsc/configurations/crossnodedependencies)
-using "WaitFor*" resources. It isn't possible for one
-machine to monitor and wait for another machine to reach a state before
-progressing.
-
-[Reboot handling](/powershell/dsc/configurations/reboot-a-node) isn't
-available in the public preview release of machine configuration. In particular,
-the `$global:DSCMachineStatus` variable isn't available. Configurations aren't able to reboot a node during or at the end of a configuration.
-
-## Known compatibility issues with supported modules
-
-The `PSDscResources` module in the PowerShell Gallery and the `PSDesiredStateConfiguration`
-module that ships with Windows are supported by Microsoft and have been a commonly used
-set of resources for DSC. Until the `PSDscResources` module is updated for DSCv3, be aware of the
-following known compatibility issues.
-
-- Don't use resources from the `PSDesiredStateConfiguration` module that ships with Windows. Instead,
-  switch to `PSDscResources`.
-- Don't use the `WindowsFeature`, `WindowsFeatureSet`, `WindowsOptionalFeature`, and
-  `WindowsOptionalFeatureSet` resources in `PSDscResources`. There's a known
-  issue loading the `DISM` module in PowerShell 7.1.3 on Windows Server
-  that will require an update.
-
-The "nx" resources for Linux that were included in the
-[DSC for Linux](https://github.com/microsoft/PowerShell-DSC-for-Linux/tree/master/Providers)
-repo were written in a combination of the languages C and Python. Because the path
-forward for DSC on Linux is to use PowerShell, the existing "nx" resources
-aren't compatible with DSCv3. Until a new module containing supported resources for Linux
-is available, it's required to author custom resources.
-
-## Coexistence with DSC version 3 and previous versions
-
-DSC version 3 in machine configuration can coexist with older versions installed in
-[Windows](/powershell/dsc/getting-started/wingettingstarted) and
-[Linux](/powershell/dsc/getting-started/lnxgettingstarted).
-The implementations are separate. However, there's no conflict detection
-across DSC versions, so don't attempt to manage the same settings.
-
-## Next steps
-
-- Read the [machine configuration overview](./overview.md).
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
-  for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Machine Configuration Dsc Extension Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-dsc-extension-migration.md
- Title: Planning a change from Desired State Configuration extension for Linux to machine configuration
-description: Guidance for moving from Desired State Configuration extension to the machine configuration feature of Azure Policy.
Previously updated : 07/25/2022--
-# Planning a change from Desired State Configuration extension for Linux to machine configuration
--
-Machine configuration is the latest implementation of functionality that has been provided by the
-PowerShell Desired State Configuration (DSC) extension for Linux virtual machines in Azure. When possible,
-you should plan to move your content and machines to the new service. This article provides guidance
-on developing a migration strategy.
-
-New features in machine configuration:
-
-- Advanced reporting through Azure Resource Graph including resource ID and state
-- Manage multiple configurations for the same machine
-- When machines drift from the desired state, you control when remediation occurs
-- Linux machines consume PowerShell-based DSC resources
-
-Before you begin, it's a good idea to read the conceptual overview information at the page
-[Azure Policy's machine configuration](./overview.md).
-
-## Major differences
-
-Configurations are deployed through DSC extension for Linux in a "push" model, where the operation
-is completed asynchronously. The deployment doesn't return until the configuration has finished
-running inside the virtual machine. After deployment, no further information is returned to ARM.
-The monitoring and drift are managed within the machine.
-
-Machine configuration processes configurations in a "pull" model. The extension is
-deployed to a virtual machine, and then jobs are executed based on guest assignment details. It isn't
-possible to view the status of the configuration in real time as it's being applied inside
-the machine. It's possible to monitor and correct drift from Azure Resource Manager (ARM) after the
-configuration is applied.
-
-The DSC extension included "privateSettings" where secrets could be passed to the configuration such
-as passwords or shared keys. Secrets management hasn't yet been implemented for machine configuration.
-
-### Considerations for whether to migrate existing machines or only new machines
-
-Machine configuration uses DSC version 3 with PowerShell version 7. DSC version 3 can coexist with
-older versions of DSC in
-[Linux](/powershell/dsc/getting-started/lnxgettingstarted).
-The implementations are separate. However, there's no conflict detection.
-
-For machines that will only exist for days or weeks, update the deployment templates and switch from
-DSC extension to machine configuration. After testing, use the updated templates to build future
-machines.
-
-If a machine is planned to exist for months or years, you might choose to change which configuration
-features of Azure manage the machine, to take advantage of new features.
-
-It isn't advised to have both platforms manage the same configuration.
-
-## Understand migration
-
-The best approach to migration is to recreate, test, and redeploy content first, and then use the
-new solution for new machines.
-
-The expected steps for migration are:
-
-- Download and expand the .zip package used for DSC extension
-- Examine the Managed Object Format (MOF) file and resources to understand the scenario
-- Create custom DSC resources in PowerShell classes
-- Update the MOF file to use the new resources
-- Use the machine configuration authoring module to create, test, and publish a new package
-- Use machine configuration for future deployments rather than DSC extension
-
-#### Consider decomposing complex configuration files
-
-Machine configuration can manage multiple configurations per machine. Many configurations written for
-DSC extension for Linux assumed the limitation of managing a single configuration per
-machine. To take advantage of the expanded capabilities offered by machine configuration, large
-configuration files can be divided into many smaller configurations where each handles a specific
-scenario.
-
-There's no orchestration in machine configuration to control the order of how configurations are
-sorted. Keep steps in a configuration together in one package if they're required to happen
-sequentially.
-
-### Test content in Azure machine configuration
-
-Read the page
-[How to create custom machine configuration package artifacts](./machine-configuration-create.md)
-to evaluate whether your content from DSC extension can be used with machine configuration.
-
-When you reach the step
-[Author a configuration](./machine-configuration-create.md#author-a-configuration),
-use the MOF file from the DSC extension package as the basis for creating a new MOF file and
-custom DSC resources. You must have the custom PowerShell modules available in `PSModulePath`
-before you can create a machine configuration package.
-
-#### Update deployment templates
-
-If your deployment templates include the DSC extension
-(see [examples](../../virtual-machines/extensions/dsc-template.md)),
-there are two changes required.
-
-First, replace the DSC extension with the
-[extension for the machine configuration feature](./overview.md).
-
-Then, add a
-[machine configuration assignment](./machine-configuration-assignments.md)
-that associates the new configuration package (and hash value) with the machine.
-
-#### Older "nx" modules for Linux DSC are not compatible with DSCv3
-
-The modules that shipped with DSC for Linux on GitHub were created in the C programming language.
-In the latest version of DSC, which is used by the machine configuration feature, modules
-for Linux are written in PowerShell classes. This means none of the original resources are compatible
-with the new platform.
-
-As a result, new Linux packages will require custom module development.
-
-Linux content authored using Chef InSpec remains supported but should only be used for legacy configurations.
-
-#### Updated "nx" module functionality
-
-A new "nx" module will be released to make managing Linux systems easier for PowerShell users.
-
-The module will help in managing common tasks such as:
-
-- User and group management
-- File system operations (changing mode, owner, listing, set/replace content)
-- Service management (start, stop, restart, remove, add)
-- Archive operations (compress, extract)
-- Package management (list, search, install, uninstall packages)
-
-The module will include class-based DSC resources for Linux, as well as built-in machine configuration packages.
-
-To provide feedback on this functionality, open an issue on the documentation and we'll respond accordingly.
-
-#### Will I have to add "Reasons" property to custom resources?
-
-Implementing the
-["Reasons" property](./machine-configuration-custom.md#special-requirements-for-get)
-provides a better experience when viewing the results of a configuration assignment from the Azure
-Portal. If the `Get` method in a module doesn't include "Reasons", generic output is returned with
-details from the properties returned by the `Get` method. Therefore, it's optional for migration.
-
-### Removing a configuration that was assigned in Linux by DSC extension
-
-In previous versions of DSC, the DSC extension assigned a configuration through the Local
-Configuration Manager. It's recommended to remove the DSC extension and reset
-LCM.
-
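-You can remove the extension with Azure PowerShell. The following is a minimal sketch with placeholder names; the extension instance name varies by deployment:
-
-```powershell
-# Remove the DSC extension for Linux from the virtual machine
-Remove-AzVMExtension -ResourceGroupName '<resource-group>' -VMName '<vm-name>' -Name '<dsc-extension-name>'
-```
-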
-> [!IMPORTANT]
-> Removing a configuration in Local Configuration Manager doesn't "roll back"
-> the settings in Linux that were set by the configuration. The
-> action of removing the configuration only causes the LCM to stop managing
-> the assigned configuration. The settings remain in place.
-
-Use the `Remove.py` script as documented in
-[Performing DSC Operations from the Linux Computer](https://github.com/Microsoft/PowerShell-DSC-for-Linux#performing-dsc-operations-from-the-linux-computer).
-
-## Next steps
--- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- [Publish the package artifact](./machine-configuration-create-publish.md)
- so it's accessible to your machines.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
governance Machine Configuration Policy Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-policy-effects.md
- Title: Remediation options for machine configuration
-description: Azure Policy's machine configuration feature offers options for continuous remediation or control using remediation tasks.
Previously updated : 07/25/2022--
-# Remediation options for machine configuration
--
-Before you begin, it's a good idea to read the overview page for
-[machine configuration](./overview.md).
-
-> [!IMPORTANT]
-> The machine configuration extension is required for Azure virtual machines. To
-> deploy the extension at scale across all machines, assign the following policy
-> initiative: `Deploy prerequisites to enable guest configuration policies on
-> virtual machines`
->
-> To use machine configuration packages that apply configurations, Azure VM guest
-> configuration extension version **1.29.24** or later,
-> or Arc agent **1.10.0** or later, is required.
->
-> Custom machine configuration policy definitions using **AuditIfNotExists**
-> as well as **DeployIfNotExists** are now Generally Available.
-
-## How remediation (Set) is managed by machine configuration
-
-Machine configuration uses the policy effect
-[DeployIfNotExists](../policy/concepts/effects.md#deployifnotexists)
-for definitions that deliver changes inside machines.
-Set the properties of a policy assignment to control how
-[evaluation](../policy/concepts/effects.md#deployifnotexists-evaluation)
-delivers configurations automatically or on-demand.
-
-[A video walk-through of this document is available](https://youtu.be/rjAk1eNmDLk).
-
-### Machine configuration assignment types
-
-There are three available assignment types when guest assignments are created.
-The property is available as a parameter of machine configuration definitions
-that support **DeployIfNotExists**.
-
-| Assignment type | Behavior |
-|-|-|
-| Audit | Report on the state of the machine, but don't make changes. |
-| ApplyAndMonitor | Applied to the machine once and then monitored for changes. If the configuration drifts and becomes NonCompliant, it won't be automatically corrected unless remediation is triggered. |
-| ApplyAndAutoCorrect | Applied to the machine. If it drifts, the local service inside the machine makes a correction at the next evaluation. |
-
-In each of the three assignment types, when a new policy assignment is assigned
-to an existing machine, a guest assignment is automatically created to
-audit the state of the configuration first, providing information to make
-decisions about which machines need remediation.
-
-## Remediation on-demand (ApplyAndMonitor)
-
-By default, machine configuration assignments operate in a "remediation on
-demand" scenario. The configuration is applied and then allowed to drift out of
-compliance. The compliance status of the guest assignment is "Compliant"
-unless an error occurs while applying the configuration or if during the next
-evaluation the machine is no longer in the desired state. The agent reports
-the status as "NonCompliant" and doesn't automatically remediate.
-
-To enable this behavior, set the
-[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype)
-of the machine configuration assignment to "ApplyandMonitor". Each time the
-assignment is processed within the machine, for each resource the
-[Test](/powershell/dsc/resources/get-test-set#test)
-method returns "true" the agent reports "Compliant"
-or if the method returns "false" the agent reports "NonCompliant".
-
-## Continuous remediation (AutoCorrect)
-
-Machine configuration supports the concept of "continuous remediation". If the machine drifts out of compliance for a configuration, the next time it's evaluated the configuration is corrected automatically. Unless an error occurs, the machine always reports status as "Compliant" for the configuration. There's no way to report when a drift was automatically corrected when using continuous remediation.
-
-To enable this behavior, set the
-[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype)
-of the machine configuration assignment to "ApplyandAutoCorrect". Each time the
-assignment is processed within the machine, the
-[Set](/powershell/dsc/resources/get-test-set#set)
-method runs automatically for each resource whose
-[Test](/powershell/dsc/resources/get-test-set#test)
-method returns "false".
-
-## Disable remediation
-
-When the `assignmentType` property is set to "Audit", the agent only
-performs an audit of the machine and doesn't attempt to remediate the configuration
-if it isn't compliant.
-
-### Disable remediation of custom content
-
-You can override the assignment type property for custom content packages by
-adding a tag to the machine with name **CustomGuestConfigurationSetPolicy** and
-value **disable**. Adding the tag disables remediation for custom content
-packages only, not for built-in content provided by Microsoft.
-
-## Azure Policy enforcement
-
-Azure Policy assignments include a required property
-[Enforcement Mode](../policy/concepts/assignment-structure.md#enforcement-mode)
-that determines behavior for new and existing resources.
-Use this property to control whether configurations are automatically applied to
-machines.
-
-**By default, enforcement is "Enabled"**. When a new machine is deployed **or the
-properties of a machine are updated**, if the machine is in the scope of an Azure
-Policy assignment with a policy definition in the category "Guest
-Configuration", Azure Policy automatically applies the configuration. **Update
-operations include actions that occur in Azure Resource Manager** such as adding
-or changing a tag, and for virtual machines, changes such as resizing or
-attaching a disk. Leave enforcement enabled if the configuration should be
-remediated when changes occur to the machine resource in Azure. Changes
-happening inside the machine don't trigger automatic remediation as long as they
-don't change the machine resource in Azure Resource Manager.
-
-If enforcement is set to "Disabled", the configuration assignment
-audits the state of the machine until the behavior is changed by a
-[remediation task](../policy/how-to/remediate-resources.md). By default, machine configuration
-definitions update the
-[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype) from "Audit" to "ApplyandMonitor" so the configuration
-is applied one time and then it won't apply again until a remediation is
-triggered.
-
-## OPTIONAL: Remediate all existing machines
-
-If an Azure Policy assignment is created from the Azure portal, on the
-"Remediation" tab a checkbox is available "Create a remediation task". When the
-box is checked, after the policy assignment is created any resources that
-evaluate to "NonCompliant" is automatically be corrected by remediation tasks.
-
-The effect of this setting for machine configuration is that you can deploy a
-configuration across many machines simply by assigning a policy. You don't
-have to also run the remediation task manually for machines that aren't
-compliant.
-
-## Manually trigger remediation outside of Azure Policy
-
-It's also possible to orchestrate remediation outside of the Azure Policy
-experience by updating a guest assignment resource, even if the update
-doesn't make changes to the resource properties.
-
-When a machine configuration assignment is created, the
-[complianceStatus property](/rest/api/guestconfiguration/guest-configuration-assignments/get#compliancestatus)
-is set to "Pending".
-The machine configuration service inside the machine (delivered to Azure
-virtual machines by the
-[Guest configuration extension](./overview.md)
-and included with Arc-enabled servers) requests a list of assignments every 5
-minutes.
-If the machine configuration assignment has both requirements, a
-`complianceStatus` of "Pending" and a `configurationMode` of either
-"ApplyandMonitor" or "ApplyandAutoCorrect", the service in the machine
-applies the configuration. After the configuration is applied, at the
-[next interval](./overview.md)
-the configuration mode dictates whether the behavior is to only report on
-compliance status and allow drift or to automatically correct.
-
-## Understanding combinations of settings
-
-|~| Audit | ApplyandMonitor | ApplyandAutoCorrect |
-|-|-|-|-|
-| Enforcement Enabled | Only reports status | Configuration applied on VM Create **and re-applied on Update** but otherwise allowed to drift | Configuration applied on VM Create and reapplied on Update and corrected on next interval if drift occurs |
-| Enforcement Disabled | Only reports status | Configuration applied but allowed to drift | Configuration applied on VM Create or Update and corrected on next interval if drift occurs |
-
-## Next steps
-
-- Read the [machine configuration overview](./overview.md).
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
governance Migrate From Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/migrate-from-azure-automation.md
+
+ Title: Azure Automation State Configuration to machine configuration migration planning
+description: This article provides process and technical guidance for customers interested in moving from DSC version 2 in Azure Automation to version 3 in Azure Policy.
Last updated : 04/18/2023+++
+# Azure Automation state configuration to machine configuration migration planning
++
+Machine configuration is the latest implementation of functionality that has been provided by Azure
+Automation State Configuration (also known as Azure Automation Desired State Configuration, or
+AADSC). When possible, you should plan to move your content and machines to the new service. This
+article provides guidance on developing a migration strategy from Azure Automation to machine
+configuration.
+
+New features in machine configuration address customer requests:
+
+- Increased size limit for configurations to 100 MB
+- Advanced reporting through Azure Resource Graph including resource ID and state
+- Manage multiple configurations for the same machine
+- When machines drift from the desired state, you control when remediation occurs
+- Linux and Windows both consume PowerShell-based DSC resources
+
+Before you begin, it's a good idea to read the conceptual overview information at the page
+[Azure Policy's machine configuration][01].
+
+## Understand migration
+
+The best approach to migration is to redeploy content first, and then migrate machines. This
+section outlines the expected steps for migration.
+
+1. Export configurations from Azure Automation
+1. Discover module requirements and load them in your environment
+1. Compile configurations
+1. Create and publish machine configuration packages
+1. Test machine configuration packages
+1. Onboard hybrid machines to Azure Arc
+1. Unregister servers from Azure Automation State Configuration
+1. Assign configurations to servers using machine configuration
+
+Machine configuration uses DSC version 3 with PowerShell version 7. DSC version 3 can coexist with
+older versions of DSC in [Windows][02] and [Linux][03]. The implementations are separate. However,
+there's no conflict detection.
+
+Machine configuration doesn't require publishing modules or configurations into a service, or
+compiling in a service. Instead, you develop and test content using purpose-built tooling and
+publish the content anywhere the machine can reach over HTTPS (typically Azure Blob Storage).
+
+If you decide to have machines in both services for some period of time, there are no technical
+barriers. The two services are independent.
+
+## Export content from Azure Automation
+
+Start by discovering and exporting content from Azure Automation State Configuration into a
+development environment where you create, test, and publish content packages for machine
+configuration.
+
+### Configurations
+
+You can only export configuration scripts from Azure Automation. It isn't possible to export node
+configurations, or compiled MOF files. If you published MOF files directly into the Automation
+Account and no longer have access to the original file, you need to recompile from your private
+configuration scripts. If you can't find the original configuration, you must reauthor it.
+
+To export configuration scripts from Azure Automation, first identify the Azure Automation account
+that has the configurations and the name of the Resource Group the Automation Account is deployed
+in.
+
+Install the PowerShell module **Az.Automation**.
+
+```powershell
+Install-Module -Name Az.Automation
+```
+
+Next, use the `Get-AzAutomationAccount` command to identify your Automation Accounts and the
+Resource Group where they're deployed. The properties **ResourceGroupName** and
+**AutomationAccountName** are important for next steps.
+
+```azurepowershell-interactive
+Get-AzAutomationAccount
+```
+
+```Output
+SubscriptionId : <your-subscription-id>
+ResourceGroupName : <your-resource-group-name>
+AutomationAccountName : <your-automation-account-name>
+Location : centralus
+State :
+Plan :
+CreationTime : 6/30/2021 11:56:17 AM -05:00
+LastModifiedTime : 6/30/2021 11:56:17 AM -05:00
+LastModifiedBy :
+Tags : {}
+```
+
+Discover the configurations in your Automation Account. The output has one entry per configuration.
+If you have many, store the information as a variable so it's easier to work with.
+
+```azurepowershell-interactive
+$getParams = @{
+ ResourceGroupName = '<your-resource-group-name>'
+ AutomationAccountName = '<your-automation-account-name>'
+}
+
+Get-AzAutomationDscConfiguration @getParams
+```
+
+```Output
+ResourceGroupName : <your-resource-group-name>
+AutomationAccountName : <your-automation-account-name>
+Location : centralus
+State : Published
+Name : <your-configuration-name>
+Tags : {}
+CreationTime : 6/30/2021 12:18:26 PM -05:00
+LastModifiedTime : 6/30/2021 12:18:26 PM -05:00
+Description :
+Parameters : {}
+LogVerbose : False
+```
+
+Finally, export each configuration to a local script file using the command
+`Export-AzAutomationDscConfiguration`. The resulting file name uses the pattern
+`<ConfigurationName>.ps1`.
+
+```azurepowershell-interactive
+$exportParams = @{
+ OutputFolder = '<location-on-your-machine>'
+ ResourceGroupName = '<your-resource-group-name>'
+ AutomationAccountName = '<your-automation-account-name>'
+ Name = '<your-configuration-name>'
+}
+Export-AzAutomationDscConfiguration @exportParams
+```
+
+```Output
+UnixMode User Group LastWriteTime Size Name
+-- - -- - - -
+ 12/31/1600 18:09
+```
+
+#### Export configurations using the PowerShell pipeline
+
+After you've discovered your accounts and the number of configurations, you might wish to export
+all configurations to a local folder on your machine. To automate this process, pipe the output of
+each command in the earlier examples to the next command.
+
+The example exports five configurations. The output pattern is the only indicator of success.
+
+```azurepowershell-interactive
+Get-AzAutomationAccount |
+ Get-AzAutomationDscConfiguration |
+ Export-AzAutomationDSCConfiguration -OutputFolder <location on your machine>
+```
+
+```Output
+UnixMode User Group LastWriteTime Size Name
+-- - -- - - -
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+```
+
+#### Consider decomposing complex configuration files
+
+Machine configuration can manage more than one configuration per machine. Many configurations
+written for Azure Automation State Configuration assumed the limitation of managing a single
+configuration per machine. To take advantage of the expanded capabilities offered by machine
+configuration, you can divide large configuration files into many smaller configurations where each
+handles a specific scenario.
+
+There's no orchestration in machine configuration to control the order of how configurations are
+sorted. Keep steps in a configuration together in one package if they're required to happen
+sequentially.
+
+### Modules
+
+It isn't possible to export modules from Azure Automation or automatically correlate which
+configurations require which modules and versions. You must have the modules in your local
+environment to create a new machine configuration package. To create a list of modules you need for
+migration, use PowerShell to query Azure Automation for the name and version of modules.
+
+If you're using modules that are custom authored and only exist in your private development
+environment, it isn't possible to export them from Azure Automation.
+
+If you can't find a custom module in your environment that's required for a configuration and in
+the account, you can't compile the configuration. Therefore, you can't migrate the configuration.
+
+#### List modules imported in Azure Automation
+
+To retrieve a list of all modules installed in your automation account, use the
+`Get-AzAutomationModule` command. The property **IsGlobal** tells you if the module is built into
+Azure Automation always, or if it was published to the account.
+
+For example, the following command creates a list of all modules published to any of your accounts.
+
+```azurepowershell-interactive
+Get-AzAutomationAccount |
+ Get-AzAutomationModule |
+ Where-Object IsGlobal -eq $false
+```
+
+You can also use the PowerShell Gallery as an aid in finding details about modules that are
+publicly available. The following example lists the modules that are built into new Automation
+Accounts and contain DSC resources.
+
+```azurepowershell-interactive
+Get-AzAutomationAccount |
+ Get-AzAutomationModule |
+ Where-Object IsGlobal -eq $true |
+ Find-Module -ErrorAction SilentlyContinue |
+ Where-Object {'' -ne $_.Includes.DscResource} |
+ Select-Object -Property Name, Version -Unique |
+ Format-Table -AutoSize
+```
+
+```Output
+Name Version
+- -
+AuditPolicyDsc 1.4.0
+ComputerManagementDsc 8.4.0
+PSDscResources 2.12.0
+SecurityPolicyDsc 2.10.0
+xDSCDomainjoin 1.2.23
+xPowerShellExecutionPolicy 3.1.0.0
+xRemoteDesktopAdmin 1.1.0.0
+```
+
+#### Download modules from PowerShell Gallery or a PowerShellGet repository
+
+If the modules were imported from the PowerShell Gallery, you can pipe the output from
+`Find-Module` directly to `Install-Module`. Piping the output across commands provides a solution
+to load a developer environment with all modules currently in an Automation Account if they're
+available in the PowerShell Gallery.
+
+You can use the same approach to pull modules from a custom NuGet feed if you have registered the
+feed in your local environment as a [PowerShellGet repository][04].
+
+The `Find-Module` command in this example doesn't suppress errors, meaning any modules not found in
+the gallery return an error message.
+
+```azurepowershell-interactive
+Get-AzAutomationAccount |
+ Get-AzAutomationModule |
+ Where-Object IsGlobal -eq $false |
+ Find-Module |
+ Where-Object { '' -ne $_.Includes.DscResource } |
+ Install-Module
+```
+
+#### Inspecting configuration scripts for module requirements
+
+If you've exported configuration scripts from Azure Automation, you can also review the contents
+for details about which modules are required to compile each configuration to a MOF file. This
+approach is only needed if you find configurations in your Automation Accounts where the modules
+have been removed. The configurations would no longer be useful for machines, but they might still
+be in the account.
+
+Towards the top of each file, look for a line that includes `Import-DscResource`. This command is
+only applicable inside a configuration, and it's used to load modules at the time of compilation.
+
+For example, the `WindowsIISServerConfig` configuration in the PowerShell Gallery has the lines in
+this example.
+
+```powershell
+configuration WindowsIISServerConfig
+{
+
+Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration';ModuleVersion = '1.19.0.0'}
+Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
+```
+
+The configuration requires you to have the **xWebAdministration** module version 1.19.0.0 and the
+module **PSDesiredStateConfiguration**.
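+
+As a sketch, assuming the resource module is available in the PowerShell Gallery, you could install
+the exact version the configuration imports before compiling it:
+
+```powershell
+# Install the module version referenced by Import-DscResource in the example.
+# PSDesiredStateConfiguration is built into Windows PowerShell 5.1; in
+# PowerShell 7, install it from the gallery if it isn't already present.
+Install-Module -Name xWebAdministration -RequiredVersion 1.19.0.0
+```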
+
+### Test content in Azure machine configuration
+
+To evaluate whether you can use your content from Azure Automation State Configuration with machine
+configuration, follow the step-by-step tutorial in the page
+[How to create custom machine configuration package artifacts][05].
+
+When you reach the step [Author a configuration][06], the configuration script that generates a MOF
+file should be one of the scripts you exported from Azure Automation State Configuration. You must
+have the required PowerShell modules installed in your environment before you can compile the
+configuration to a MOF file and create a machine configuration package.
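+
+A minimal sketch of that step follows, assuming an exported configuration script named
+`MyExportedConfig.ps1` that defines a configuration of the same name (both names are hypothetical):
+
+```powershell
+# Dot-source the exported script so the configuration is available, then
+# compile it to a MOF file.
+. .\MyExportedConfig.ps1
+MyExportedConfig -OutputPath .\output    # produces .\output\localhost.mof
+
+# Wrap the compiled MOF as a machine configuration package by using the
+# GuestConfiguration module from the PowerShell Gallery.
+New-GuestConfigurationPackage -Name 'MyExportedConfig' `
+    -Configuration .\output\localhost.mof `
+    -Type AuditAndSet `
+    -Path .\package
+```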
+
+#### What if a module doesn't work with machine configuration?
+
+Some modules might have compatibility issues with machine configuration. The most common
+problems are related to .NET framework vs .NET core. Detailed technical information is available on
+the page, [Differences between Windows PowerShell 5.1 and PowerShell 7.x][07].
+
+One option to resolve compatibility issues is to run commands in Windows PowerShell from within a
+module that's imported in PowerShell 7, by running `powershell.exe`. You can review a sample module
+that uses this technique in the Azure-Policy repository where it's used to audit the state of
+[Windows DSC Configuration][08].
+
+The example also illustrates a small proof of concept.
+
+```powershell
+# example function that could be loaded from module
+function New-TaskResolvedInPWSH7 {
+ # runs the fictitious command 'Get-myNotCompatibleCommand' in Windows PowerShell
+ $compatObject = & powershell.exe -NoProfile -NonInteractive -Command {
+ Get-myNotCompatibleCommand
+ }
+ # resulting object can be used in PowerShell 7
+ return $compatObject
+}
+```
+
+#### Do I need to add the Reasons property to Get-TargetResource in all modules I migrate?
+
+Implementing the [Reasons property][09] provides a better experience when viewing the results of a
+configuration assignment from the Azure portal. If the `Get` method in a module doesn't include
+**Reasons**, generic output is returned with details from the properties returned by the `Get`
+method. Therefore, it's optional for migration.
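+
+If you do choose to add it, the following sketch shows the general shape of the property in a
+class-based resource. The `MyTimeZone` resource and its values are hypothetical examples, not part
+of any shipped module:
+
+```powershell
+class Reason {
+    [DscProperty()] [string] $Code
+    [DscProperty()] [string] $Phrase
+}
+
+[DscResource()]
+class MyTimeZone {
+    [DscProperty(Key)] [string] $TimeZone
+
+    # Read-only property the machine configuration service uses for reporting.
+    [DscProperty(NotConfigurable)] [Reason[]] $Reasons
+
+    [MyTimeZone] Get() {
+        $current = (Get-TimeZone).Id
+        $this.Reasons = @(
+            [Reason]@{
+                Code   = 'MyTimeZone:MyTimeZone:TimeZone'
+                Phrase = "Current time zone is '$current'; expected '$($this.TimeZone)'."
+            }
+        )
+        return $this
+    }
+
+    [bool] Test() { return (Get-TimeZone).Id -eq $this.TimeZone }
+
+    [void] Set() { Set-TimeZone -Id $this.TimeZone }
+}
+```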
+
+## Machines
+
+After you've finished testing content from Azure Automation State Configuration in machine
+configuration, develop a plan for migrating machines.
+
+Azure Automation State Configuration is available for both virtual machines in Azure and hybrid
+machines located outside of Azure. You must plan for each of these scenarios using different steps.
+
+### Azure VMs
+
+Azure virtual machines already have a [resource][10] in Azure, which means they're ready for
+machine configuration assignments that associate them with a configuration. The high-level tasks
+for migrating Azure virtual machines are to remove them from Azure Automation State Configuration
+and then assign configurations using machine configuration.
+
+To remove a machine from Azure Automation State Configuration, follow the steps in the page
+[How to remove a configuration and node from Automation State Configuration][11].
+
+To assign configurations using machine configuration, follow the steps in the Azure Policy
+Quickstarts, such as
+[Quickstart: Create a policy assignment to identify non-compliant resources][12]. In step 6 when
+selecting a policy definition, pick the definition that applies a configuration you migrated from
+Azure Automation State Configuration.
+
+### Hybrid machines
+
+Machines outside of Azure [can be registered to Azure Automation State Configuration][13], but they
+don't have a machine resource in Azure. The Local Configuration Manager (LCM) service inside the
+machine handles the connection to Azure Automation. The record of the node is managed as a resource
+in the Azure Automation provider type.
+
+Before removing a machine from Azure Automation State Configuration, onboard each node as an
+[Azure Arc-enabled server][14]. Onboarding to Azure Arc creates a machine resource in Azure so
+Azure Policy can manage the machine. The machine can be onboarded to Azure Arc at any time, but you
+can use Azure Automation State Configuration to automate the process.
+
+You can register a machine to Azure Arc-enabled servers by using PowerShell DSC. For details, view
+the page [How to install the Connected Machine agent using Windows PowerShell DSC][15]. Remember
+however, that Azure Automation State Configuration can manage only one configuration per machine,
+per Automation Account. You can export, test, and prepare your content for machine configuration,
+and then switch the node configuration in Azure Automation to onboard to Azure Arc. As the last
+step, remove the node registration from Azure Automation State Configuration and move forward only
+managing the machine state through machine configuration.
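+
+When you're ready for that last step, one way to remove the node registration is with the
+**Az.Automation** cmdlets, sketched here with placeholder names:
+
+```azurepowershell-interactive
+$nodeParams = @{
+    ResourceGroupName     = '<your-resource-group-name>'
+    AutomationAccountName = '<your-automation-account-name>'
+}
+
+# Find the registered node and remove its registration from
+# Azure Automation State Configuration.
+$node = Get-AzAutomationDscNode @nodeParams -Name '<your-node-name>'
+Unregister-AzAutomationDscNode @nodeParams -Id $node.Id -Force
+```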
+
+## Troubleshooting issues when exporting content
+
+Details about known issues are provided in this section.
+
+### Exporting configurations results in "\\" character in file name
+
+When using PowerShell on macOS and Linux, you may have issues dealing with the file names output by
+`Export-AzAutomationDSCConfiguration`.
+
+As a workaround, a module has been published to the PowerShell Gallery named
+[AADSCConfigContent][16]. The module has only one command, which exports the content of a
+configuration stored in Azure Automation by making a REST request to the service.
+
+## Next steps
+
+- [Create a package artifact][05] for machine configuration.
+- [Test the package artifact][17] from your development environment.
+- [Publish the package artifact][18] so it's accessible to your machines.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][19] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][20] using Azure portal.
+- Learn how to view [compliance details for machine configuration][21] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: /powershell/dsc/getting-started/wingettingstarted
+[03]: /powershell/dsc/getting-started/lnxgettingstarted
+[04]: /powershell/scripting/gallery/how-to/working-with-local-psrepositories
+[05]: ./how-to-create-package.md
+[06]: ./how-to-create-package.md#author-a-configuration
+[07]: /powershell/scripting/whats-new/differences-from-windows-powershell
+[08]: https://github.com/Azure/azure-policy/blob/bbfc60104c2c5b7fa6dd5b784b5d4713ddd55218/samples/GuestConfiguration/package-samples/resource-modules/WindowsDscConfiguration/DscResources/WindowsDscConfiguration/WindowsDscConfiguration.psm1#L97
+[09]: ./dsc-in-machine-configuration.md#special-requirements-for-get
+[10]: ../../azure-resource-manager/management/overview.md#terminology
+[11]: ../../automation/state-configuration/remove-node-and-configuration-package.md
+[12]: ../policy/assign-policy-portal.md
+[13]: ../../automation/automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines
+[14]: ../../azure-arc/servers/overview.md
+[15]: ../../azure-arc/servers/onboard-dsc.md
+[16]: https://www.powershellgallery.com/packages/AADSCConfigContent/
+[17]: ./how-to-test-package.md
+[18]: ./how-to-publish-package.md
+[19]: ./how-to-create-policy-definition.md
+[20]: ../policy/assign-policy-portal.md
+[21]: ../policy/how-to/determine-non-compliance.md
governance Migrate From Dsc Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/migrate-from-dsc-extension.md
+
+ Title: Planning a change from Desired State Configuration extension for Linux to machine configuration
+description: Guidance for moving from Desired State Configuration extension to the machine configuration feature of Azure Policy.
Last updated : 04/18/2023++
+# Planning a change from Desired State Configuration extension for Linux to machine configuration
++
+Machine configuration is the latest implementation of functionality that has been provided by the
+PowerShell Desired State Configuration (DSC) extension for Linux virtual machines in Azure. When
+possible, you should plan to move your content and machines to the new service. This article
+provides guidance on developing a migration strategy.
+
+New features in machine configuration:
+
+- Advanced reporting through Azure Resource Graph including resource ID and state
+- Manage multiple configurations for the same machine
+- When machines drift from the desired state, you control when remediation occurs
+- Linux machines consume PowerShell-based DSC resources
+
+Before you begin, it's a good idea to read the conceptual overview information at the page
+[Azure Policy's machine configuration][01].
+
+## Major differences
+
+Configurations are deployed through the DSC extension for Linux in a "push" model, where the
+operation is completed asynchronously. The deployment doesn't return until the configuration has
+finished running inside the virtual machine. After deployment, no further information is returned
+to Resource Manager. The monitoring and drift are managed within the machine.
+
+Machine configuration processes configurations in a "pull" model. The extension is deployed to a
+virtual machine and then jobs are executed based on machine configuration assignment details. It
+isn't possible to view the status of the configuration in real time as it's being applied inside
+the machine. It's possible to watch and correct drift from Azure Resource Manager after the
+configuration is applied.
+
+The DSC extension included **privateSettings** where secrets could be passed to the configuration,
+such as passwords or shared keys. Secrets management hasn't yet been implemented for machine
+configuration.
+
+### Considerations for whether to migrate existing machines or only new machines
+
+Machine configuration uses DSC version 3 with PowerShell version 7. DSC version 3 can coexist with
+older versions of DSC in [Linux][02]. The implementations are separate. However, there's no
+conflict detection.
+
+For machines only intended to exist for days or weeks, update the deployment templates and switch
+from the DSC extension to machine configuration. After testing, use the updated templates to build
+future machines.
+
+If a machine is planned to exist for months or years, you might choose to change which
+configuration features of Azure manage the machine to take advantage of new features.
+
+Using both platforms to manage the same configuration isn't advised.
+
+## Understand migration
+
+The best approach to migration is to recreate, test, and redeploy content first, and then use the
+new solution for new machines.
+
+The expected steps for migration are:
+
+1. Download and expand the `.zip` package used for the DSC extension.
+1. Examine the Managed Object Format (MOF) file and resources to understand the scenario.
+1. Create custom DSC resources in PowerShell classes.
+1. Update the MOF file to use the new resources.
+1. Use the machine configuration authoring module to create, test, and publish a new package.
+1. Use machine configuration for future deployments rather than DSC extension.
+
+#### Consider decomposing complex configuration files
+
+Machine configuration can manage multiple configurations per machine. Many configurations written
+for the DSC extension for Linux assumed the limitation of managing a single configuration per
+machine. To take advantage of the expanded capabilities offered by machine configuration, large
+configuration files can be divided into many smaller configurations where each handles a specific
+scenario.
+
+There's no orchestration in machine configuration to control the order of how configurations are
+sorted. Keep steps in a configuration together in one package if they must happen sequentially.
+
+### Test content in Azure machine configuration
+
+Read the page [How to create custom machine configuration package artifacts][03] to evaluate
+whether your content from the DSC extension can be used with machine configuration.
+
+When you reach the step [Author a configuration][04], use the MOF file from the DSC extension
+package as the basis for creating a new MOF file and custom DSC resources. You must have the custom
+PowerShell modules available in `$env:PSModulePath` before you can create a machine configuration
+package.
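+
+As a quick sanity check before packaging, you can confirm PowerShell can discover your custom
+resource modules. The module name below is a hypothetical example:
+
+```powershell
+# Folders PowerShell searches for modules
+$env:PSModulePath -split [System.IO.Path]::PathSeparator
+
+# Confirm the custom resource module is discoverable before creating a package
+Get-Module -ListAvailable -Name 'MyNxCustomResources'
+```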
+
+#### Update deployment templates
+
+If your deployment templates include the DSC extension (see [examples][05]), there are two changes
+required.
+
+First, replace the DSC extension with the [extension for the machine configuration feature][01].
+
+Then, add a [machine configuration assignment][06] that associates the new configuration package
+(and hash value) with the machine.
+
+#### Older nx\* modules for Linux DSC aren't compatible with DSCv3
+
+The modules that shipped with DSC for Linux on GitHub were created in the C programming language.
+In the latest version of DSC, which is used by the machine configuration feature, modules for Linux
+are written in PowerShell classes. None of the original resources are compatible with the new
+platform.
+
+As a result, new Linux packages require custom module development.
+
+Linux content authored using Chef Inspec is still supported but should only be used for legacy
+configurations.
+
+#### Updated nx\* module functionality
+
+A new open-source [nxtools module][07] has been released to help make managing Linux systems easier
+for PowerShell users.
+
+The module helps with managing common tasks such as:
+
+- Managing users and groups
+- Performing file system operations
+- Managing services
+- Performing archive operations
+- Managing packages
+
+The module includes class-based DSC resources for Linux and built-in machine configuration
+packages.
+
+To give feedback about this functionality, open an issue on the documentation. We currently _don't_
+accept PRs for this project, and support is best effort.
+
+#### Do I need to add the Reasons property to custom resources?
+
+Implementing the [Reasons property][08] provides a better experience when viewing the results of
+a configuration assignment from the Azure portal. If the `Get` method in a module doesn't include
+**Reasons**, generic output is returned with details from the properties returned by the `Get`
+method. Therefore, it's optional for migration.
+
+### Removing a configuration the DSC extension assigned in Linux
+
+In previous versions of DSC, the DSC extension assigned a configuration through the Local
+Configuration Manager (LCM). It's recommended to remove the DSC extension and reset the LCM.
+
+> [!IMPORTANT]
+> Removing a configuration in Local Configuration Manager doesn't "roll back" the settings in Linux
+> that were set by the configuration. The action of removing the configuration only causes the LCM
+> to stop managing the assigned configuration. The settings remain in place.
+
+Use the `Remove.py` script as documented in
+[Performing DSC Operations from the Linux Computer][09].
+
+## Next steps
+
+- [Create a package artifact][03] for machine configuration.
+- [Test the package artifact][10] from your development environment.
+- [Publish the package artifact][11] so it's accessible to your machines.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][12] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][13] using Azure portal.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: /powershell/dsc/getting-started/lnxgettingstarted
+[03]: ./how-to-create-package.md
+[04]: ./how-to-create-package.md#author-a-configuration
+[05]: ../../virtual-machines/extensions/dsc-template.md
+[06]: ./assignments.md
+[07]: https://github.com/azure/nxtools#getting-started
+[08]: ./dsc-in-machine-configuration.md#special-requirements-for-get
+[09]: https://github.com/Microsoft/PowerShell-DSC-for-Linux#performing-dsc-operations-from-the-linux-computer
+[10]: ./how-to-test-package.md
+[11]: ./how-to-publish-package.md
+[12]: ./how-to-create-policy-definition.md
+[13]: ../policy/assign-policy-portal.md
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand Azure Automanage Machine Configuration description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Previously updated : 03/02/2023 Last updated : 04/18/2023 # Understand the machine configuration feature of Azure Automanage
-Azure Policy's machine configuration feature provides native capability
-to audit or configure operating system settings as code,
-both for machines running in Azure and hybrid
-[Arc-enabled machines](../../azure-arc/servers/overview.md).
-The feature can be used directly per-machine,
-or at-scale orchestrated by Azure Policy.
+Azure Policy's machine configuration feature provides native capability to audit or configure
+operating system settings as code for machines running in Azure and hybrid
+[Arc-enabled machines][01]. You can use the feature directly per-machine, or orchestrate it at
+scale by using Azure Policy.
-Configuration resources in Azure are designed as an
-[extension resource](../../azure-resource-manager/management/extension-resource-types.md).
-You can imagine each configuration as an additional set of properties
-for the machine. Configurations can include settings such as:
+Configuration resources in Azure are designed as an [extension resource][02]. You can imagine each
+configuration as an extra set of properties for the machine. Configurations can include settings
+such as:
- Operating system settings - Application configuration or presence - Environment settings
-Configurations are distinct from policy definitions. Machine configuration
-utilizes Azure Policy to dynamically assign configurations
-to machines. You can also assign configurations to machines
-[manually](machine-configuration-assignments.md#manually-creating-machine-configuration-assignments),
-or by using other Azure services such as
-[Automanage](../../automanage/index.yml).
+Configurations are distinct from policy definitions. Machine configuration uses Azure Policy to
+dynamically assign configurations to machines. You can also assign configurations to machines
+[manually][03], or by using other Azure services such as [Automanage][04].
Examples of each scenario are provided in the following table.
-| Type | Description | Example story |
-| - | -- | |
-| [Configuration management](machine-configuration-assignments.md) | You want a complete representation of a server, as code in source control. The deployment should include properties of the server (size, network, storage) and configuration of operating system and application settings. | "This machine should be a web server configured to host my website." |
-| [Compliance](../policy/assign-policy-portal.md) | You want to audit or deploy settings to all machines in scope either reactively to existing machines or proactively to new machines as they are deployed. | "All machines should use TLS 1.2. Audit existing machines so I can release change where it is needed, in a controlled way, at scale. For new machines, enforce the setting when they are deployed." |
+| Type | Description | Example story |
+| | -- | |
+| [Configuration management][05] | You want a complete representation of a server, as code in source control. The deployment should include properties of the server (size, network, storage) and configuration of operating system and application settings. | "This machine should be a web server configured to host my website." |
+| [Compliance][06] | You want to audit or deploy settings to all machines in scope either reactively to existing machines or proactively to new machines as they're deployed. | "All machines should use TLS 1.2. Audit existing machines so I can release change where it's needed, in a controlled way, at scale. For new machines, enforce the setting when they're deployed." |
-The per-setting results from configurations can be viewed either in the
-[Guest assignments page](../policy/how-to/determine-non-compliance.md)
-or if the configuration is orchestrated by an Azure Policy assignment,
-by clicking on the "Last evaluated resource" link on the
-["Compliance details" page](../policy/how-to/determine-non-compliance.md).
+You can view the per-setting results from configurations in the [Guest assignments page][07]. If an
+Azure Policy assignment orchestrated the configuration, you can select the "Last
+evaluated resource" link on the ["Compliance details" page][07].
-[A video walk-through of this document is available](https://youtu.be/t9L8COY-BkM). (update coming soon)
+[A video walk-through of this document is available][08]. (Update coming soon)
## Enable machine configuration
and Arc-enabled servers, review the following details.

## Resource provider
## Resource provider
-Before you can use the machine configuration feature of Azure Policy, you must
-register the `Microsoft.GuestConfiguration` resource provider. If assignment of
-a machine configuration policy is done through the portal, or if the subscription
-is enrolled in Microsoft Defender for Cloud, the resource provider is registered
-automatically. You can manually register through the
-[portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal),
-[Azure PowerShell](../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell),
-or
-[Azure CLI](../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+Before you can use the machine configuration feature of Azure Policy, you must register the
+`Microsoft.GuestConfiguration` resource provider. If assignment of a machine configuration policy
+is done through the portal, or if the subscription is enrolled in Microsoft Defender for Cloud, the
+resource provider is registered automatically. You can manually register through the [portal][09],
+[Azure PowerShell][10], or [Azure CLI][11].
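+
+For example, registering the provider with Azure PowerShell is a single command:
+
+```azurepowershell-interactive
+# Register the resource provider in the current subscription context
+Register-AzResourceProvider -ProviderNamespace 'Microsoft.GuestConfiguration'
+```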
## Deploy requirements for Azure virtual machines
-To manage settings inside a machine, a
-[virtual machine extension](../../virtual-machines/extensions/overview.md) is
-enabled and the machine must have a system-managed identity. The extension
-downloads applicable machine configuration assignment and the corresponding
-dependencies. The identity is used to authenticate the machine as it reads and
-writes to the machine configuration service. The extension isn't required for Arc-enabled
-servers because it's included in the Arc Connected Machine agent.
+To manage settings inside a machine, a [virtual machine extension][12] is enabled and the machine
+must have a system-managed identity. The extension downloads applicable machine configuration
+assignments and the corresponding dependencies. The identity is used to authenticate the machine as
+it reads and writes to the machine configuration service. The extension isn't required for
+Arc-enabled servers because it's included in the Arc Connected Machine agent.
> [!IMPORTANT]
-> The machine configuration extension and a managed identity are required to
-> manage Azure virtual machines.
+> The machine configuration extension and a managed identity are required to manage Azure virtual
+> machines.
To deploy the extension at scale across many machines, assign the policy initiative
-`Deploy prerequisites to enable guest configuration policies on virtual machines`
-to a management group, subscription, or resource group containing the machines
-that you plan to manage.
+`Deploy prerequisites to enable guest configuration policies on virtual machines` to a management
+group, subscription, or resource group containing the machines that you plan to manage.
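+
+One way to assign the initiative with Azure PowerShell is sketched below. The assignment name,
+scope, and location are placeholders, and the property path used to match the display name can vary
+with your Az.Resources version:
+
+```azurepowershell-interactive
+$displayName = 'Deploy prerequisites to enable guest configuration policies on virtual machines'
+$initiative  = Get-AzPolicySetDefinition |
+    Where-Object { $_.Properties.DisplayName -eq $displayName }
+
+# Assign the initiative with a system-assigned identity so the
+# DeployIfNotExists policies can remediate machines in scope.
+New-AzPolicyAssignment -Name 'deploy-machine-config-prereqs' `
+    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' `
+    -PolicySetDefinition $initiative `
+    -IdentityType SystemAssigned `
+    -Location '<region>'
+```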
-If you prefer to deploy the extension and managed identity to a single machine,
-follow the guidance for each:
+If you prefer to deploy the extension and managed identity to a single machine, follow the guidance
+for each:
-- [Overview of the Azure Policy Guest Configuration extension](./overview.md)
-- [Configure managed identities for Azure resources on a VM using the Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+- [Overview of the Azure Policy Guest Configuration extension][13]
+- [Configure managed identities for Azure resources on a VM using the Azure portal][14]
-To use machine configuration packages that apply configurations, Azure VM guest
-configuration extension version **1.29.24** or later is required.
+To use machine configuration packages that apply configurations, Azure VM guest configuration
+extension version 1.29.24 or later is required.
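+
+If you're not sure which version is installed on a particular VM, one way to check is to list the
+VM's extensions. The resource names here are placeholders:
+
+```azurepowershell-interactive
+# List extensions on the VM and show the guest configuration extension version
+Get-AzVMExtension -ResourceGroupName '<your-resource-group-name>' -VMName '<your-vm-name>' |
+    Where-Object Publisher -eq 'Microsoft.GuestConfiguration' |
+    Select-Object Name, Publisher, TypeHandlerVersion
+```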
### Limits set on the extension
-To limit the extension from impacting applications running inside the machine,
-the machine configuration agent isn't allowed to exceed more than 5% of CPU. This
-limitation exists for both built-in and custom definitions. The same is true for
-the machine configuration service in Arc Connected Machine agent.
+To limit the extension from impacting applications running inside the machine, the machine
+configuration agent isn't allowed to exceed more than 5% of CPU. This limitation exists for both
+built-in and custom definitions. The same is true for the machine configuration service in Arc
+Connected Machine agent.
### Validation tools
-Inside the machine, the machine configuration agent uses local tools to perform
-tasks.
+Inside the machine, the machine configuration agent uses local tools to perform tasks.
-The following table shows a list of the local tools used on each supported
-operating system. For built-in content, machine configuration handles loading
-these tools automatically.
+The following table shows a list of the local tools used on each supported operating system. For
+built-in content, machine configuration handles loading these tools automatically.
-| Operating system | Validation tool | Notes |
-| - | | - |
-| Windows | [PowerShell Desired State Configuration](/powershell/dsc/overview) v3 | Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path. |
-| Linux | [PowerShell Desired State Configuration](/powershell/dsc/overview) v3 | Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path. |
-| Linux | [Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. |
+| Operating system | Validation tool | Notes |
+| - | -- | |
+| Windows | [PowerShell Desired State Configuration][15] v3 | Side-loaded to a folder only used by Azure Policy. Doesn't conflict with Windows PowerShell DSC. PowerShell isn't added to system path. |
+| Linux | [PowerShell Desired State Configuration][15] v3 | Side-loaded to a folder only used by Azure Policy. PowerShell isn't added to system path. |
+| Linux | [Chef InSpec][16] | Installs Chef InSpec version 2.2.61 in default location and adds it to system path. It installs InSpec's dependencies, including Ruby and Python, too. |
### Validation frequency
-The machine configuration agent checks for new or changed guest assignments every
-5 minutes. Once a guest assignment is received, the settings for that
-configuration are rechecked on a 15-minute interval. If multiple configurations
-are assigned, each is evaluated sequentially. Long-running configurations impact
-the interval for all configurations, because the next will not run until the
+The machine configuration agent checks for new or changed guest assignments every 5 minutes. Once a
+guest assignment is received, the settings for that configuration are rechecked on a 15-minute
+interval. If multiple configurations are assigned, each is evaluated sequentially. Long-running
+configurations affect the interval for all configurations, because the next can't run until the
prior configuration has finished.
-Results are sent to the machine configuration service when the audit completes.
-When a policy
-[evaluation trigger](../policy/how-to/get-compliance-data.md#evaluation-triggers)
-occurs, the state of the machine is written to the machine configuration resource
-provider. This update causes Azure Policy to evaluate the Azure Resource Manager
-properties. An on-demand Azure Policy evaluation retrieves the latest value from
-the machine configuration resource provider. However, it doesn't trigger a new
-activity within the machine. The status is then written to Azure
-Resource Graph.
+Results are sent to the machine configuration service when the audit completes. When a policy
+[evaluation trigger][17] occurs, the state of the machine is written to the machine configuration
+resource provider. This update causes Azure Policy to evaluate the Azure Resource Manager
+properties. An on-demand Azure Policy evaluation retrieves the latest value from the machine
+configuration resource provider. However, it doesn't trigger a new activity within the machine. The
+status is then written to Azure Resource Graph.
## Supported client types

Machine configuration policy definitions are inclusive of new versions. Older versions of operating
systems available in Azure Marketplace are excluded if the Guest Configuration client isn't
-compatible. The following table shows a list of supported operating systems on Azure images.
-The ".x" text is symbolic to represent new minor versions of Linux distributions.
+compatible. The following table shows a list of supported operating systems on Azure images. The
+`.x` text is symbolic to represent new minor versions of Linux distributions.
| Publisher | Name | Versions |
| --------- | ---- | -------- |
The ".x" text is symbolic to represent new minor versions of Linux distributions
\* Red Hat CoreOS isn't supported.
-Custom virtual machine images are supported by machine configuration policy
-definitions as long as they're one of the operating systems in the table above.
+Machine configuration policy definitions support custom virtual machine images as long as they're
+one of the operating systems in the previous table.
## Network requirements
-Azure virtual machines can use either their local virtual network adapter (vNIC)
-or Azure Private Link to communicate with the machine configuration service.
-
-Azure Arc-enabled machines connect using the on-premises network infrastructure to reach
-Azure services and report compliance status.
-
-Following is a list of the Azure Storage endpoints required for Azure and Azure Arc-enabled
-virtual machines to communicate with the machine configuration resource provider in Azure:
--- oaasguestconfigac2s1.blob.core.windows.net-- oaasguestconfigacs1.blob.core.windows.net-- oaasguestconfigaes1.blob.core.windows.net-- oaasguestconfigases1.blob.core.windows.net-- oaasguestconfigbrses1.blob.core.windows.net-- oaasguestconfigbrss1.blob.core.windows.net-- oaasguestconfigccs1.blob.core.windows.net-- oaasguestconfigces1.blob.core.windows.net-- oaasguestconfigcids1.blob.core.windows.net-- oaasguestconfigcuss1.blob.core.windows.net-- oaasguestconfigeaps1.blob.core.windows.net-- oaasguestconfigeas1.blob.core.windows.net-- oaasguestconfigeus2s1.blob.core.windows.net-- oaasguestconfigeuss1.blob.core.windows.net-- oaasguestconfigfcs1.blob.core.windows.net-- oaasguestconfigfss1.blob.core.windows.net-- oaasguestconfiggewcs1.blob.core.windows.net-- oaasguestconfiggns1.blob.core.windows.net-- oaasguestconfiggwcs1.blob.core.windows.net-- oaasguestconfigjiws1.blob.core.windows.net-- oaasguestconfigjpes1.blob.core.windows.net-- oaasguestconfigjpws1.blob.core.windows.net-- oaasguestconfigkcs1.blob.core.windows.net-- oaasguestconfigkss1.blob.core.windows.net-- oaasguestconfigncuss1.blob.core.windows.net-- oaasguestconfignes1.blob.core.windows.net-- oaasguestconfignres1.blob.core.windows.net-- oaasguestconfignrws1.blob.core.windows.net-- oaasguestconfigqacs1.blob.core.windows.net-- oaasguestconfigsans1.blob.core.windows.net-- oaasguestconfigscuss1.blob.core.windows.net-- oaasguestconfigseas1.blob.core.windows.net-- oaasguestconfigsecs1.blob.core.windows.net-- oaasguestconfigsfns1.blob.core.windows.net-- oaasguestconfigsfws1.blob.core.windows.net-- oaasguestconfigsids1.blob.core.windows.net-- oaasguestconfigstzns1.blob.core.windows.net-- oaasguestconfigswcs1.blob.core.windows.net-- oaasguestconfigswns1.blob.core.windows.net-- oaasguestconfigswss1.blob.core.windows.net-- oaasguestconfigswws1.blob.core.windows.net-- oaasguestconfiguaecs1.blob.core.windows.net-- oaasguestconfiguaens1.blob.core.windows.net-- oaasguestconfigukss1.blob.core.windows.net-- oaasguestconfigukws1.blob.core.windows.net-- oaasguestconfigwcuss1.blob.core.windows.net-- oaasguestconfigwes1.blob.core.windows.net-- oaasguestconfigwids1.blob.core.windows.net-- oaasguestconfigwus2s1.blob.core.windows.net-- oaasguestconfigwus3s1.blob.core.windows.net-- oaasguestconfigwuss1.blob.core.windows.net
+Azure virtual machines can use either their local virtual network adapter (vNIC) or Azure Private
+Link to communicate with the machine configuration service.
+
+Azure Arc-enabled machines connect using the on-premises network infrastructure to reach Azure
+services and report compliance status.
+
+Following is a list of the Azure Storage endpoints required for Azure and Azure Arc-enabled virtual
+machines to communicate with the machine configuration resource provider in Azure:
+
+- `oaasguestconfigac2s1.blob.core.windows.net`
+- `oaasguestconfigacs1.blob.core.windows.net`
+- `oaasguestconfigaes1.blob.core.windows.net`
+- `oaasguestconfigases1.blob.core.windows.net`
+- `oaasguestconfigbrses1.blob.core.windows.net`
+- `oaasguestconfigbrss1.blob.core.windows.net`
+- `oaasguestconfigccs1.blob.core.windows.net`
+- `oaasguestconfigces1.blob.core.windows.net`
+- `oaasguestconfigcids1.blob.core.windows.net`
+- `oaasguestconfigcuss1.blob.core.windows.net`
+- `oaasguestconfigeaps1.blob.core.windows.net`
+- `oaasguestconfigeas1.blob.core.windows.net`
+- `oaasguestconfigeus2s1.blob.core.windows.net`
+- `oaasguestconfigeuss1.blob.core.windows.net`
+- `oaasguestconfigfcs1.blob.core.windows.net`
+- `oaasguestconfigfss1.blob.core.windows.net`
+- `oaasguestconfiggewcs1.blob.core.windows.net`
+- `oaasguestconfiggns1.blob.core.windows.net`
+- `oaasguestconfiggwcs1.blob.core.windows.net`
+- `oaasguestconfigjiws1.blob.core.windows.net`
+- `oaasguestconfigjpes1.blob.core.windows.net`
+- `oaasguestconfigjpws1.blob.core.windows.net`
+- `oaasguestconfigkcs1.blob.core.windows.net`
+- `oaasguestconfigkss1.blob.core.windows.net`
+- `oaasguestconfigncuss1.blob.core.windows.net`
+- `oaasguestconfignes1.blob.core.windows.net`
+- `oaasguestconfignres1.blob.core.windows.net`
+- `oaasguestconfignrws1.blob.core.windows.net`
+- `oaasguestconfigqacs1.blob.core.windows.net`
+- `oaasguestconfigsans1.blob.core.windows.net`
+- `oaasguestconfigscuss1.blob.core.windows.net`
+- `oaasguestconfigseas1.blob.core.windows.net`
+- `oaasguestconfigsecs1.blob.core.windows.net`
+- `oaasguestconfigsfns1.blob.core.windows.net`
+- `oaasguestconfigsfws1.blob.core.windows.net`
+- `oaasguestconfigsids1.blob.core.windows.net`
+- `oaasguestconfigstzns1.blob.core.windows.net`
+- `oaasguestconfigswcs1.blob.core.windows.net`
+- `oaasguestconfigswns1.blob.core.windows.net`
+- `oaasguestconfigswss1.blob.core.windows.net`
+- `oaasguestconfigswws1.blob.core.windows.net`
+- `oaasguestconfiguaecs1.blob.core.windows.net`
+- `oaasguestconfiguaens1.blob.core.windows.net`
+- `oaasguestconfigukss1.blob.core.windows.net`
+- `oaasguestconfigukws1.blob.core.windows.net`
+- `oaasguestconfigwcuss1.blob.core.windows.net`
+- `oaasguestconfigwes1.blob.core.windows.net`
+- `oaasguestconfigwids1.blob.core.windows.net`
+- `oaasguestconfigwus2s1.blob.core.windows.net`
+- `oaasguestconfigwus3s1.blob.core.windows.net`
+- `oaasguestconfigwuss1.blob.core.windows.net`
### Communicate over virtual networks in Azure
-To communicate with the machine configuration resource provider in Azure, machines
-require outbound access to Azure datacenters on port **443**. If a network in
-Azure doesn't allow outbound traffic, configure exceptions with
-[Network Security Group](../../virtual-network/manage-network-security-group.md#create-a-security-rule)
-rules. The
-[service tags](../../virtual-network/service-tags-overview.md)
-"AzureArcInfrastructure" and "Storage" can be used to reference the guest
-configuration and Storage services rather than manually maintaining the
-[list of IP ranges](https://www.microsoft.com/download/details.aspx?id=56519)
-for Azure datacenters. Both tags are required because machine configuration
-content packages are hosted by Azure Storage.
+To communicate with the machine configuration resource provider in Azure, machines require outbound
+access to Azure datacenters on port `443`. If a network in Azure doesn't allow outbound traffic,
+configure exceptions with [Network Security Group][18] rules. The [service tags][19]
+`AzureArcInfrastructure` and `Storage` can be used to reference the guest configuration and Storage
+services rather than manually maintaining the [list of IP ranges][20] for Azure datacenters. Both
+tags are required because Azure Storage hosts the machine configuration content packages.
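For example, a minimal sketch of adding such an outbound rule with Azure PowerShell; the network security group name, resource group, and priority are placeholder values, and a second rule for the `AzureArcInfrastructure` tag is needed as well:

```powershell
# Placeholder names; adjust to your environment. A matching outbound rule for the
# AzureArcInfrastructure service tag is also required.
Get-AzNetworkSecurityGroup -Name 'myNsg' -ResourceGroupName 'myResourceGroup' |
    Add-AzNetworkSecurityRuleConfig `
        -Name 'AllowStorageOutbound443' `
        -Access Allow `
        -Direction Outbound `
        -Priority 200 `
        -Protocol Tcp `
        -SourceAddressPrefix '*' `
        -SourcePortRange '*' `
        -DestinationAddressPrefix 'Storage' `
        -DestinationPortRange '443' |
    Set-AzNetworkSecurityGroup
```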
### Communicate over Private Link in Azure
-Virtual machines can use
-[private link](../../private-link/private-link-overview.md)
-for communication to the machine configuration service. Apply tag with the name
-`EnablePrivateNetworkGC` and value `TRUE` to enable this feature. The tag can be
-applied before or after machine configuration policy definitions are applied to
-the machine.
+Virtual machines can use [private link][21] for communication to the machine configuration service.
+Apply a tag with the name `EnablePrivateNetworkGC` and value `TRUE` to enable this feature. The tag
+can be applied before or after machine configuration policy definitions are applied to the machine.
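For example, a hedged sketch of applying that tag to an existing virtual machine with Azure PowerShell (the resource ID is a placeholder):

```powershell
# Placeholder resource ID; replace it with the ID of your virtual machine.
$vmId = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM'

# Merge the tag so that any existing tags on the VM are preserved.
Update-AzTag -ResourceId $vmId -Tag @{ EnablePrivateNetworkGC = 'TRUE' } -Operation Merge
```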
> [!IMPORTANT]
-> In order to communicate over private link for custom packages, the link to the location of the package must be added to the list of allowed URLS.
+> To communicate over private link for custom packages, the link to the location of the
+> package must be added to the list of allowed URLs.
-Traffic is routed using the Azure
-[virtual public IP address](../../virtual-network/what-is-ip-address-168-63-129-16.md)
-to establish a secure, authenticated channel with Azure platform resources.
+Traffic is routed using the Azure [virtual public IP address][22] to establish a secure,
+authenticated channel with Azure platform resources.
### Communicate over public endpoints outside of Azure
Servers located on-premises or in other clouds can be managed with machine configuration
-by connecting them to [Azure Arc](../../azure-arc/servers/overview.md).
+by connecting them to [Azure Arc][01].
For Azure Arc-enabled servers, allow traffic using the following patterns:
- Port: Only TCP 443 required for outbound internet access
- Global URL: `*.guestconfiguration.azure.com`
-See the [Azure Arc-enabled servers network requirements](../../azure-arc/servers/network-requirements.md) for a full list
-of all network endpoints required by the Azure Connected Machine Agent for core Azure Arc and machine configuration scenarios.
+See the [Azure Arc-enabled servers network requirements][23] for a full list of all network
+endpoints required by the Azure Connected Machine Agent for core Azure Arc and machine
+configuration scenarios.
### Communicate over Private Link outside of Azure
-When using [private link with Arc-enabled servers](../../azure-arc/servers/private-link-security.md), built-in policy packages will automatically be downloaded over the private link.
-You do not need to set any tags on the Arc-enabled server to enable this feature.
+When you use [private link with Arc-enabled servers][24], built-in policy packages are
+automatically downloaded over the private link. You don't need to set any tags on the Arc-enabled
+server to enable this feature.
## Assigning policies to machines outside of Azure
The Audit policy definitions available for machine configuration include the **Microsoft.HybridCompute/machines** resource type. Any machines onboarded to
-[Azure Arc-enabled servers](../../azure-arc/servers/overview.md) that are in the
-scope of the policy assignment are automatically included.
+[Azure Arc-enabled servers][01] that are in the scope of the policy assignment are automatically
+included.
## Managed identity requirements
-Policy definitions in the initiative `Deploy prerequisites to enable guest configuration policies on virtual machines` enable a system-assigned managed
-identity, if one doesn't exist. There are two policy definitions in the
-initiative that manage identity creation. The IF conditions in the policy
-definitions ensure the correct behavior based on the current state of the
-machine resource in Azure.
+Policy definitions in the initiative
+`Deploy prerequisites to enable guest configuration policies on virtual machines` enable a
+system-assigned managed identity, if one doesn't exist. There are two policy definitions in the
+initiative that manage identity creation. The `if` conditions in the policy definitions ensure the
+correct behavior based on the current state of the machine resource in Azure.
> [!IMPORTANT]
-> These definitions create a System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications unless they specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead. [Learn More](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
+> These definitions create a System-Assigned managed identity on the target resources, in addition
+> to existing User-Assigned Identities (if any). For existing applications, unless they specify the
+> User-Assigned identity in the request, the machine defaults to using the System-Assigned Identity
+> instead. [Learn More][25]
-If the machine doesn't currently have any managed identities, the effective
-policy is:
-[Add system-assigned managed identity to enable machine configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e)
+If the machine doesn't currently have any managed identities, the effective policy is:
+[Add system-assigned managed identity to enable machine configuration assignments on virtual machines with no identities][26]
-If the machine currently has a user-assigned system identity, the effective
-policy is:
-[Add system-assigned managed identity to enable machine configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6)
+If the machine currently has a user-assigned system identity, the effective policy is:
+[Add system-assigned managed identity to enable machine configuration assignments on VMs with a user-assigned identity][27]
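These two definitions automate what you could otherwise configure per machine. For a single Azure VM, a hedged sketch of the equivalent manual step with Azure PowerShell looks like the following (names are placeholders; a machine that already has user-assigned identities would use the `SystemAssignedUserAssigned` identity type instead):

```powershell
# Placeholder names; adjust to your environment.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'

# Enable a system-assigned managed identity on the virtual machine.
Update-AzVM -ResourceGroupName 'myResourceGroup' -VM $vm -IdentityType SystemAssigned
```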
## Availability
-Customers designing a highly available solution should consider the redundancy planning requirements for
-[virtual machines](../../virtual-machines/availability.md) because guest assignments are extensions of
-machine resources in Azure. When guest assignment resources are provisioned in to an Azure region that is
-[paired](../../availability-zones/cross-region-replication-azure.md), as long as at least one region in the pair
-is available, then guest assignment reports are available. If the Azure region isn't paired and
-it becomes unavailable, then it isn't possible to access reports for a guest assignment until
-the region is restored.
-
-When you considering an architecture for highly available applications,
-especially where virtual machines are provisioned in
-[Availability Sets](../../virtual-machines/availability.md#availability-sets)
-behind a load balancer solution to provide high availability,
-it's best practice to assign the same policy definitions with the same parameters to all machines
-in the solution. If possible, a single policy assignment spanning all
-machines would offer the least administrative overhead.
-
-For machines protected by
-[Azure Site Recovery](../../site-recovery/site-recovery-overview.md),
-ensure that machines in a secondary site are within scope of Azure Policy assignments
-for the same definitions using the same parameter values as machines in the primary site.
+Customers designing a highly available solution should consider the redundancy planning
+requirements for [virtual machines][28] because guest assignments are extensions of machine
+resources in Azure. When guest assignment resources are provisioned into an Azure region that's
+[paired][29], you can view guest assignment reports if at least one region in the pair is
+available. When the Azure region isn't paired and it becomes unavailable, you can't access reports
+for a guest assignment. When the region is restored, you can access the reports again.
+
+It's best practice to assign the same policy definitions with the same parameters to all machines
+in the solution for highly available applications. This is especially true for scenarios where
+virtual machines are provisioned in [Availability Sets][30] behind a load balancer solution. A
+single policy assignment spanning all machines has the least administrative overhead.
+
+For machines protected by [Azure Site Recovery][31], ensure that the machines in the primary and
+secondary site are within scope of Azure Policy assignments for the same definitions. Use the same
+parameter values for both sites.
## Data residency
-Machine configuration stores/processes customer data. By default, customer data is replicated to the
-[paired region.](../../availability-zones/cross-region-replication-azure.md)
-For the regions: Singapore, Brazil South, and East Asia all customer data is stored and processed in the region.
+Machine configuration stores and processes customer data. By default, customer data is replicated
+to the [paired region][29]. For the regions Singapore, Brazil South, and East Asia, all customer
+data is stored and processed in the region.
## Troubleshooting machine configuration
For more information about troubleshooting machine configuration, see
-[Azure Policy troubleshooting](../policy/troubleshoot/general.md).
+[Azure Policy troubleshooting][32].
### Multiple assignments
-At this time, only some built-in Guest Configuration policy definitions support multiple assignments. However, all custom policies support multiple assignments by default if you used the latest version of [the `GuestConfiguration` PowerShell module](./machine-configuration-create-setup.md) to create Guest Configuration packages and policies.
+At this time, only some built-in machine configuration policy definitions support multiple
+assignments. However, all custom policies support multiple assignments by default if you used the
+latest version of [the GuestConfiguration PowerShell module][33] to create machine configuration
+packages and policies.
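As background, a hedged sketch of creating a custom package and policy with that module; the file paths, content URI, and display text are placeholders, and the compiled `.mof` configuration must already exist:

```powershell
# Placeholder values; compile your DSC configuration to a .mof file first.
New-GuestConfigurationPackage `
    -Name 'MyCustomConfig' `
    -Configuration './MyCustomConfig/localhost.mof' `
    -Type AuditAndSet

# Publish the package (for example, to Azure Storage), then reference its URI here.
New-GuestConfigurationPolicy `
    -PolicyId (New-Guid).Guid `
    -ContentUri 'https://<storage-account>.blob.core.windows.net/packages/MyCustomConfig.zip' `
    -DisplayName 'My custom machine configuration' `
    -Description 'Applies MyCustomConfig to Windows machines.' `
    -Platform 'Windows' `
    -PolicyVersion '1.0.0' `
    -Mode 'ApplyAndAutoCorrect' `
    -Path './policies'
```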
-Following is the list of built-in Guest Configuration policy definitions that support multiple assignments:
+Following is the list of built-in machine configuration policy definitions that support multiple
+assignments:
| ID | DisplayName |
|--|--|
Following is the list of built-in Guest Configuration policy definitions that su
| /providers/Microsoft.Authorization/policyDefinitions/c633f6a2-7f8b-4d9e-9456-02f0f04f5505 | Audit Windows machines that are not set to the specified time zone |
> [!NOTE]
-> Please check this page periodically for updates to the list of built-in Guest Configuration policy definitions that support multiple assignments.
+> Please check this page periodically for updates to the list of built-in machine configuration
+> policy definitions that support multiple assignments.
### Assignments to Azure management groups
-Azure Policy definitions in the category `Guest Configuration` can be assigned
-to management groups when the effect is `AuditIfNotExists` or `DeployIfNotExists`.
+Azure Policy definitions in the category `Guest Configuration` can be assigned to management groups
+when the effect is `AuditIfNotExists` or `DeployIfNotExists`.
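For example, a hedged sketch of assigning one of the built-in definitions listed earlier in this article at management group scope with Azure PowerShell (the management group ID and assignment name are placeholders):

```powershell
# Placeholder management group ID; adjust to your environment.
$mgScope = '/providers/Microsoft.Management/managementGroups/myManagementGroup'

# A built-in Guest Configuration audit definition listed earlier in this article.
$definition = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/c633f6a2-7f8b-4d9e-9456-02f0f04f5505'

New-AzPolicyAssignment -Name 'audit-windows-timezone' -PolicyDefinition $definition -Scope $mgScope
```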
### Client log files
Linux
### Collecting logs remotely
-The first step in troubleshooting machine configurations or modules
-should be to use the cmdlets following the steps in
-[How to test machine configuration package artifacts](./machine-configuration-create-test.md).
-If that isn't successful, collecting client logs can help diagnose issues.
+The first step in troubleshooting machine configurations or modules should be to use the cmdlets
+following the steps in [How to test machine configuration package artifacts][34]. If that isn't
+successful, collecting client logs can help diagnose issues.
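If you can't sign in to the machine interactively, one hedged option is to push the log-collection snippets shown below to the machine with Azure VM Run Command, for example (resource and file names are placeholders):

```powershell
# Placeholder names; save one of the following log-collection snippets to a local
# script file first, then run it inside the VM through Run Command.
# For a Linux VM, use the 'RunShellScript' command ID instead.
Invoke-AzVMRunCommand `
    -ResourceGroupName 'myResourceGroup' `
    -VMName 'myVM' `
    -CommandId 'RunPowerShellScript' `
    -ScriptPath '.\collect-machine-config-logs.ps1'
```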
#### Windows
-Capture information from log files using
-[Azure VM Run Command](../../virtual-machines/windows/run-command.md), the
-following example PowerShell script can be helpful.
+Capture information from log files using [Azure VM Run Command][35]. The following example
+PowerShell script can be helpful.
```powershell
$linesToIncludeBeforeMatch = 0
-$linesToIncludeAfterMatch = 10
-$logPath = 'C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log'
-Select-String -Path $logPath -pattern 'DSCEngine','DSCManagedEngine' -CaseSensitive -Context $linesToIncludeBeforeMatch,$linesToIncludeAfterMatch | Select-Object -Last 10
+$linesToIncludeAfterMatch = 10
+$params = @{
+ Path = 'C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log'
+ Pattern = @(
+ 'DSCEngine'
+ 'DSCManagedEngine'
+ )
+ CaseSensitive = $true
+ Context = @(
+ $linesToIncludeBeforeMatch
+ $linesToIncludeAfterMatch
+ )
+}
+Select-String @params | Select-Object -Last 10
```
#### Linux
-Capture information from log files using
-[Azure VM Run Command](../../virtual-machines/linux/run-command.md), the
-following example Bash script can be helpful.
+Capture information from log files using [Azure VM Run Command][36]. The following example Bash
+script can be helpful.
```bash
LINES_TO_INCLUDE_BEFORE_MATCH=0
egrep -B $LINES_TO_INCLUDE_BEFORE_MATCH -A $LINES_TO_INCLUDE_AFTER_MATCH 'DSCEng
### Agent files
-The machine configuration agent downloads content packages to a machine and
-extracts the contents. To verify what content has been downloaded and stored,
-view the folder locations given below.
+The machine configuration agent downloads content packages to a machine and extracts the contents.
+To verify what content has been downloaded and stored, view the folder locations in the following
+list.
-Windows: `c:\programdata\guestconfig\configuration`
-
-Linux: `/var/lib/GuestConfig/Configuration`
+- Windows: `C:\ProgramData\guestconfig\configuration`
+- Linux: `/var/lib/GuestConfig/Configuration`
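For example, a quick hedged check of the extracted content on a Windows machine:

```powershell
# Lists the content packages that the agent has extracted on a Windows machine.
Get-ChildItem -Path 'C:\ProgramData\GuestConfig\Configuration' -Recurse -Directory |
    Select-Object FullName
```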
### Open-source nxtools module functionality
-A new open-source [nxtools module](https://github.com/azure/nxtools#getting-started) has been released to help make managing Linux systems easier for PowerShell users.
-
-The module will help in managing common tasks such as these:
+A new open-source [nxtools module][37] has been released to help make managing Linux systems easier
+for PowerShell users.
-- User and group management
-- File system operations (changing mode, owner, listing, set/replace content)
-- Service management (start, stop, restart, remove, add)
-- Archive operations (compress, extract)
-- Package management (list, search, install, uninstall packages)
+The module helps in managing common tasks such as:
-The module includes class-based DSC resources for Linux, as well as built-in machine-configuration packages.
+- Managing users and groups
+- Performing file system operations
+- Managing services
+- Performing archive operations
+- Managing packages
-To provide feedback about this functionality, open an issue on the documentation. We currently _don't_ accept PRs for this project, and support is best effort.
+The module includes class-based DSC resources for Linux and built-in machine configuration
+packages.
+To provide feedback about this functionality, open an issue on the documentation. We currently
+_don't_ accept PRs for this project, and support is best effort.
## Machine configuration samples
-Machine configuration built-in policy samples are available in the following
-locations:
+Machine configuration built-in policy samples are available in the following locations:
-- [Built-in policy definitions - Guest Configuration](../policy/samples/built-in-policies.md)
-- [Built-in initiatives - Guest Configuration](../policy/samples/built-in-initiatives.md)
-- [Azure Policy samples GitHub repo](https://github.com/Azure/azure-policy/tree/master/built-in-policies/policySetDefinitions/Guest%20Configuration)
-- [Sample DSC resource modules](https://github.com/Azure/azure-policy/tree/master/samples/GuestConfiguration/package-samples/resource-modules)
+- [Built-in policy definitions - Guest Configuration][38]
+- [Built-in initiatives - Guest Configuration][39]
+- [Azure Policy samples GitHub repository][40]
+- [Sample DSC resource modules][41]
## Next steps
-- Set up a custom machine configuration package [development environment](./machine-configuration-create-setup.md).
-- [Create a package artifact](./machine-configuration-create.md)
- for machine configuration.
-- [Test the package artifact](./machine-configuration-create-test.md)
- from your development environment.
-- Use the `GuestConfiguration` module to
- [create an Azure Policy definition](./machine-configuration-create-definition.md)
- for at-scale management of your environment.
-- [Assign your custom policy definition](../policy/assign-policy-portal.md) using
- Azure portal.
-- Learn how to view
- [compliance details for machine configuration](../policy/how-to/determine-non-compliance.md) policy assignments.
+- Set up a custom machine configuration package [development environment][33].
+- [Create a package artifact][42] for machine configuration.
+- [Test the package artifact][34] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][43] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][06] using Azure portal.
+- Learn how to view [compliance details for machine configuration][07] policy assignments.
+
+<!-- Link reference definitions -->
+[01]: ../../azure-arc/servers/overview.md
+[02]: ../../azure-resource-manager/management/extension-resource-types.md
+[03]: assignments.md#manually-creating-machine-configuration-assignments
+[04]: ../../automanage/index.yml
+[05]: assignments.md
+[06]: ../policy/assign-policy-portal.md
+[07]: ../policy/how-to/determine-non-compliance.md
+[08]: https://youtu.be/t9L8COY-BkM
+[09]: ../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal
+[10]: ../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell
+[11]: ../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli
+[12]: ../../virtual-machines/extensions/overview.md
+[13]: ./overview.md
+[14]: ../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md
+[15]: /powershell/dsc/overview
+[16]: https://www.chef.io/inspec/
+[17]: ../policy/how-to/get-compliance-data.md#evaluation-triggers
+[18]: ../../virtual-network/manage-network-security-group.md#create-a-security-rule
+[19]: ../../virtual-network/service-tags-overview.md
+[20]: https://www.microsoft.com/download/details.aspx?id=56519
+[21]: ../../private-link/private-link-overview.md
+[22]: ../../virtual-network/what-is-ip-address-168-63-129-16.md
+[23]: ../../azure-arc/servers/network-requirements.md
+[24]: ../../azure-arc/servers/private-link-security.md
+[25]: ../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request
+[26]: https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e
+[27]: https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6
+[28]: ../../virtual-machines/availability.md
+[29]: ../../availability-zones/cross-region-replication-azure.md
+[30]: ../../virtual-machines/availability.md#availability-sets
+[31]: ../../site-recovery/site-recovery-overview.md
+[32]: ../policy/troubleshoot/general.md
+[33]: ./how-to-set-up-authoring-environment.md
+[34]: ./how-to-test-package.md
+[35]: ../../virtual-machines/windows/run-command.md
+[36]: ../../virtual-machines/linux/run-command.md
+[37]: https://github.com/azure/nxtools#getting-started
+[38]: ../policy/samples/built-in-policies.md
+[39]: ../policy/samples/built-in-initiatives.md
+[40]: https://github.com/Azure/azure-policy/tree/master/built-in-policies/policySetDefinitions/Guest%20Configuration
+[41]: https://github.com/Azure/azure-policy/tree/master/samples/GuestConfiguration/package-samples/resource-modules
+[42]: ./how-to-create-package.md
+[43]: ./how-to-create-policy-definition.md
governance Remediation Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/remediation-options.md
+
+ Title: Remediation options for machine configuration
+description: Azure Policy's machine configuration feature offers options for continuous remediation or control using remediation tasks.
Last updated : 04/18/2023++
+# Remediation options for machine configuration
++
+Before you begin, it's a good idea to read the overview page for [machine configuration][01].
+
+> [!IMPORTANT]
+> The machine configuration extension is required for Azure virtual machines. To deploy the
+> extension at scale across all machines, assign the following policy initiative:
+> `Deploy prerequisites to enable guest configuration policies on virtual machines`
+>
+> To use machine configuration packages that apply configurations, Azure VM guest configuration
+> extension version 1.29.24 or later, or Arc agent 1.10.0 or later, is required.
+>
+> Custom machine configuration policy definitions using `AuditIfNotExists` as well as
+> `DeployIfNotExists` are in Generally Available (GA) support status.
+
+## How machine configuration manages remediation (Set)
+
+Machine configuration uses the policy effect [DeployIfNotExists][02] for definitions that deliver
+changes inside machines. Set the properties of a policy assignment to control how [evaluation][03]
+delivers configurations automatically or on-demand.
+
+[A video walk-through of this document is available][04].
+
+### Machine configuration assignment types
+
+There are three available assignment types when guest assignments are created. The property is
+available as a parameter of machine configuration definitions that support `DeployIfNotExists`.
+
+| Assignment type | Behavior |
+| | - |
+| `Audit` | Report on the state of the machine, but don't make changes. |
+| `ApplyAndMonitor` | Applied to the machine once and then monitored for changes. If the configuration drifts and becomes `NonCompliant`, it isn't automatically corrected unless remediation is triggered. |
+| `ApplyAndAutoCorrect` | Applied to the machine. If it drifts, the local service inside the machine makes a correction at the next evaluation. |
+
+When a new policy assignment is assigned to an existing machine, a guest assignment is
+automatically created to audit the state of the configuration first. The audit gives you
+information you can use to decide which machines need remediation.
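For example, one hedged way to list the machines that an assignment reports as noncompliant is with the Azure Policy Insights cmdlets (the assignment name is a placeholder):

```powershell
# Placeholder assignment name; adjust to your environment.
Get-AzPolicyState `
    -Filter "PolicyAssignmentName eq 'my-machine-config-assignment' and ComplianceState eq 'NonCompliant'" |
    Select-Object ResourceId, ComplianceState
```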
+
+## Remediation on-demand (ApplyAndMonitor)
+
+By default, machine configuration assignments operate in a remediation on demand scenario. The
+configuration is applied and then allowed to drift out of compliance.
+
+The compliance status of the guest assignment is `Compliant` unless either:
+
+- An error occurs while applying the configuration
+- The machine is no longer in the desired state during the next evaluation
+
+When either of those conditions is met, the agent reports the status as `NonCompliant` and doesn't
+automatically remediate.
+
+To enable this behavior, set the [assignmentType property][05] of the machine configuration
+assignment to `ApplyAndMonitor`. Each time the assignment is processed within the machine, the
+agent reports `Compliant` for each resource when the [Test][06] method returns `$true` or
+`NonCompliant` if the method returns `$false`.
+
+## Continuous remediation (autocorrect)
+
+Machine configuration supports the concept of _continuous remediation_. If the machine drifts out
+of compliance for a configuration, the next time it's evaluated the configuration is corrected
+automatically. Unless an error occurs, the machine always reports status as `Compliant` for the
+configuration. There's no way to report when a drift was automatically corrected when using
+continuous remediation.
+
+To enable this behavior, set the [assignmentType property][05] of the machine configuration
+assignment to `ApplyAndAutoCorrect`. Each time the assignment is processed within the machine, the
+[Set][07] method runs automatically for each resource for which the [Test][06] method returns `$false`.
+
+## Disable remediation
+
+When the **assignmentType** property is set to `Audit`, the agent only performs an audit of the
+machine and doesn't try to remediate the configuration if it isn't compliant.
+
+### Disable remediation of custom content
+
+You can override the assignment type property for custom content packages by adding a tag to the
+machine with name **CustomGuestConfigurationSetPolicy** and value `disable`. Adding the tag
+disables remediation for custom content packages only, not for built-in content provided by
+Microsoft.
+
+## Azure Policy enforcement
+
+Azure Policy assignments include a required property [Enforcement Mode][08] that determines
+behavior for new and existing resources. Use this property to control whether configurations are
+automatically applied to machines.
+
+By default, enforcement is set to `Enabled`. Azure Policy automatically applies the configuration
+when a new machine is deployed. It also applies the configuration when the properties of a machine
+in the scope of an Azure Policy assignment with a policy in the category `Guest Configuration` are
+updated. Update operations include actions that occur in Azure Resource Manager, like adding or
+changing a tag. Update operations also include changes for virtual machines like resizing or
+attaching a disk.
+
+Leave enforcement enabled if the configuration should be remediated when changes occur to the
+machine resource in Azure. Changes happening inside the machine don't trigger automatic remediation
+as long as they don't change the machine resource in Azure Resource Manager.
+
+If enforcement is set to `Disabled`, the configuration assignment audits the state of the machine
+until a [remediation task][09] changes the behavior. By default, machine configuration definitions
+update the [assignmentType property][05] from `Audit` to `ApplyAndMonitor` so the configuration is
+applied one time and then it isn't applied again until a remediation is triggered.
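As an illustration, a hedged sketch of switching an existing assignment to disabled enforcement with Azure PowerShell; this assumes the `-EnforcementMode` parameter of `Set-AzPolicyAssignment` in recent Az.Resources versions, where `DoNotEnforce` corresponds to the portal's **Disabled** setting (the assignment ID is a placeholder):

```powershell
# Placeholder assignment ID; adjust to your environment.
$assignmentId = '/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/my-machine-config-assignment'

# DoNotEnforce corresponds to the Disabled enforcement setting in the portal.
Set-AzPolicyAssignment -Id $assignmentId -EnforcementMode DoNotEnforce
```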
+
+## Optional: Remediate all existing machines
+
+If an Azure Policy assignment is created from the Azure portal, the **Remediation** tab has a
+checkbox labeled **Create a remediation task**. When the box is checked, remediation tasks
+automatically correct any resources that evaluate to `NonCompliant` after the policy assignment is
+created.
+
+The effect of this setting for machine configuration is that you can deploy a configuration across
+many machines by assigning a policy. You don't also have to run the remediation task manually for
+machines that aren't compliant.
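You can also create the same kind of remediation task later from Azure PowerShell; a hedged sketch (the assignment ID is a placeholder):

```powershell
# Placeholder assignment ID; adjust to your environment.
Start-AzPolicyRemediation `
    -Name 'remediate-machine-config' `
    -PolicyAssignmentId '/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/my-machine-config-assignment'
```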
+
+## Manually trigger remediation outside of Azure Policy
+
+You can orchestrate remediation outside of the Azure Policy experience by updating a
+guest assignment resource, even if the update doesn't make changes to the resource properties.
+
+When a machine configuration assignment is created, the [complianceStatus property][10] is set to
+`Pending`. The machine configuration service requests a list of assignments every 5 minutes. If the
+machine configuration assignment's **complianceStatus** is `Pending` and its **configurationMode**
+is `ApplyAndMonitor` or `ApplyAndAutoCorrect`, the service in the machine applies the
+configuration.
+
+After the configuration is applied, the configuration mode dictates whether the behavior is to only
+report on compliance status and allow drift or to automatically correct.
+
+## Understanding combinations of settings
+
+| Enforcement mode | `Audit` | `ApplyAndMonitor` | `ApplyAndAutoCorrect` |
+| -- | - | -- | - |
+| Enforcement Enabled | Only reports status | Configuration applied on VM Create and reapplied on Update but otherwise allowed to drift | Configuration applied on VM Create, reapplied on Update, and corrected on next interval if drift occurs |
+| Enforcement Disabled | Only reports status | Configuration applied but allowed to drift | Configuration applied on VM Create or Update and corrected on next interval if drift occurs |
+
+## Next steps
+
+- Read the [machine configuration overview][01].
+- Set up a custom machine configuration package [development environment][11].
+- [Create a package artifact][12] for machine configuration.
+- [Test the package artifact][13] from your development environment.
+- Use the **GuestConfiguration** module to [create an Azure Policy definition][14] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][15] using Azure portal.
+- Learn how to view [compliance details for machine configuration][16] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: ./overview.md
+[02]: ../policy/concepts/effects.md#deployifnotexists
+[03]: ../policy/concepts/effects.md#deployifnotexists-evaluation
+[04]: https://youtu.be/rjAk1eNmDLk
+[05]: /rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype
+[06]: /powershell/dsc/resources/get-test-set#test
+[07]: /powershell/dsc/resources/get-test-set#set
+[08]: ../policy/concepts/assignment-structure.md#enforcement-mode
+[09]: ../policy/how-to/remediate-resources.md
+[10]: /rest/api/guestconfiguration/guest-configuration-assignments/get#compliancestatus
+[11]: ./how-to-set-up-authoring-environment.md
+[12]: ./how-to-create-package.md
+[13]: ./how-to-test-package.md
+[14]: ./how-to-create-policy-definition.md
+[15]: ../policy/assign-policy-portal.md
+[16]: ../policy/how-to/determine-non-compliance.md
governance Create Management Group Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-go.md
can be used, including [bash on Windows 10](/windows/wsl/install-win10) or local
```bash
# Add the management group package for Go
- go get -u github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-05-01/managementgroups
+ go install github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-05-01/managementgroups@latest
# Add the Azure auth package for Go
- go get -u github.com/Azure/go-autorest/autorest/azure/auth
+ go install github.com/Azure/go-autorest/autorest/azure/auth@latest
```
## Application setup
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
description: Learn how to view, maintain, update, and delete your management gro
Last updated 12/01/2022 --++ # Manage your Azure subscriptions at scale with management groups
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance
description: Learn about the management groups, how their permissions work, and how to use them. Last updated 01/24/2023 --++ # What are Azure management groups?
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
The guest configuration feature can be used outside of Azure Policy assignments.
For example, [Azure AutoManage](../../../automanage/index.yml) creates guest configuration assignments, or you might
-[assign configurations when you deploy machines](../../machine-configuration/machine-configuration-create-assignment.md).
+[assign configurations when you deploy machines](../../machine-configuration/how-to-create-assignment.md).
To view all guest configuration assignments across your tenant, from the Azure portal open the **Guest Assignments** page. To view detailed compliance
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## March 2023
+#### Azure Health Data Services
+
+**Azure Health Data Services Generally Available (GA) in new regions**
+
+General availability (GA) of Azure Health Data Services in the Japan East region.
+++
## February 2023
#### FHIR service
Two new sample apps have been released in the open source samples repo: [Azure-S
## January 2023
-### Azure Health Data Services
+#### Azure Health Data Services
**Azure Health Data services General Available (GA) in new regions**
Customers can now determine if their mappings are working as intended, as they c
**Fixed issue where Querying with :not operator was returning more results than expected**
-The issue is now fixed and querying with :not operator should provide correct results. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2785). |
+The issue is now fixed and querying with :not operator should provide correct results. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2785).
internet-peering How To Exchange Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/how-to-exchange-route-server-portal.md
As an Internet Exchange Provider, you can create an exchange peering request by
* For Peering type, select **Direct** * For Microsoft network, select **AS8075 with exchange route server**.
- * Select SKU as **Basic Free**. Don't select premium free as it's reserved for special applications.
+ * For SKU, select **Premium Free**.
* Select the **Metro** location where you want to set up peering. 1. Under **Peering Connections**, select **Create new**
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll use your Windows command prompt.
:::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
-1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
```cmd
cd ./azure-iot-sdk-python/samples/async-hub-scenarios
In this section, you'll use your Windows command prompt.
set PASS_PHRASE=1234
```
-1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/v2/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes.
1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
In the following steps, use your Windows command prompt.
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
-1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
```cmd
cd .\azure-iot-sdk-python\samples\async-hub-scenarios
In the following steps, use your Windows command prompt.
set X509_KEY_FILE=<your-certificate-folder>\private\device-01.key.pem
```
-1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/v2/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
description: How to install and manage certificates on an Azure IoT Edge device
Previously updated : 1/17/2023 Last updated : 4/18/2023
drwxr-xr-x 4 root root 4096 Dec 14 00:16 ..
Using a self-signed certificate authority (CA) certificate as a root of trust with IoT Edge and modules is known as *trust bundle*. The trust bundle is available for IoT Edge and modules to communicate with servers. To configure the trust bundle, specify its file path in the IoT Edge configuration file.
-1. Get a publicly trusted root CA certificate from a PKI provider.
+1. Get the root CA certificate from a PKI provider.
1. Check that the certificate meets the [format requirements](#format-requirements).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Release Date | End of Support Date | Highlights |
| | - | | - | - |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2024 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2024 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](https://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | August 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 | | [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | June 2022 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). | | [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
Microsoft Azure IoT Hub currently supports distributed tracing as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you'll be able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing.md).
+IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you'll be able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing-telemetry-correlation.md).
When you enable distributed tracing for IoT Hub, you can:
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
In order to ensure a client/IoT Hub connection stays alive, both the service and
|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/iothub/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) |
|C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
-|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
+|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/v2/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
*The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds. In reality, the SDK sends a ping request four times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| | Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
+| Azure NetApp Files | [Configure customer-managed keys for Azure NetApp Files volume encryption](../../azure-netapp-files/configure-customer-managed-keys.md) |
| Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).| | Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)| | Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
For more Information about how to create custom roles, see:
## Known limits and performance
- Key Vault data plane RBAC is not supported in multi tenant scenarios like with Azure Lighthouse
-- 2000 Azure role assignments per subscription
+- 4000 Azure role assignments per subscription
- Role assignments latency: at current expected performance, it will take up to 10 minutes (600 seconds) after role assignments is changed for role to be applied
## Frequently Asked Questions:
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
resource vault 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
location: location properties: { accessPolicies:[]
- enableRbacAuthorization: false
- enableSoftDelete: false
+ enableRbacAuthorization: true
+ enableSoftDelete: true
+ softDeleteRetentionInDays: 90
enabledForDeployment: false enabledForDiskEncryption: false enabledForTemplateDeployment: false
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-template.md
Title: Azure Quickstart - Create an Azure key vault and a key by using Azure Resource Manager template | Microsoft Docs description: Quickstart showing how to create Azure key vaults, and add key to the vaults by using Azure Resource Manager template (ARM template). -+ tags: azure-resource-manager Last updated 06/28/2022-+ #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
To complete this article:
"location": "[parameters('location')]", "properties": { "accessPolicies": [],
- "enableRbacAuthorization": false,
- "enableSoftDelete": false,
+ "enableRbacAuthorization": true,
+ "enableSoftDelete": true,
+ "softDeleteRetentionInDays": "90",
"enabledForDeployment": false, "enabledForDiskEncryption": false, "enabledForTemplateDeployment": false,
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
Title: Upgrade from Basic to Standard for Virtual Machine Scale Sets
description: This article shows you how to upgrade a load balancer from basic to standard SKU for Virtual Machine Scale Sets. -+ Previously updated : 09/22/2022-- Last updated : 04/17/2023++ # Upgrade a basic load balancer used with Virtual Machine Scale Sets
The PowerShell module performs the following functions:
- Install the latest version of [PowerShell](/powershell/scripting/install/installing-powershell)
- Determine whether you have the latest Az PowerShell module installed (8.2.0)
- - Install the latest Az PowerShell module](/powershell/azure/install-az-ps)
+ - Install the latest [Az PowerShell module](/powershell/azure/install-az-ps)
## Install the 'AzureBasicLoadBalancerUpgrade' module
PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\Reco
### Will the module migrate my frontend IP address to the new Standard Load Balancer?
-Yes, for both public and internal load balancers, the module ensures that front end IP addresses are maintained. For public IPs, the IP is converted to a static IP prior to migration (if necessary). For internal front ends, the module will attempt to reassign the same IP address freed up when the Basic Load Balancer was deleted; if the private IP isn't available the script will fail (see [What happens if my upgrade fails mid-migration?](#what-happens-if-my-upgrade-fails-mid-migration)).
+Yes, for both public and internal load balancers, the module ensures that front end IP addresses are maintained. For public IPs, the IP is converted to a static IP prior to migration (if necessary). For internal front ends, the module attempts to reassign the same IP address freed up when the Basic Load Balancer was deleted; if the private IP isn't available the script fails (see [What happens if my upgrade fails mid-migration?](#what-happens-if-my-upgrade-fails-mid-migration)).
### How long does the Upgrade take?
The script migrates the following from the Basic Load Balancer to the Standard L
- Updates the public IP SKU to Standard, if Basic - Upgrade all associated public IPs to the new Standard Load Balancer - Health Probes:
- - All probes will be migrated to the new Standard Load Balancer
+ - All probes are migrated to the new Standard Load Balancer
- Load balancing rules:
- - All load balancing rules will be migrated to the new Standard Load Balancer
+ - All load balancing rules are migrated to the new Standard Load Balancer
- Inbound NAT Rules:
- - All user-created NAT rules will be migrated to the new Standard Load Balancer
+ - All user-created NAT rules are migrated to the new Standard Load Balancer
- Inbound NAT Pools: - All inbound NAT Pools will be migrated to the new Standard Load Balancer - Outbound Rules:
- - Basic load balancers don't support configured outbound rules. The script will create an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](./outbound-rules.md).
+ - Basic load balancers don't support configured outbound rules. The script creates an outbound rule in the Standard load balancer to preserve the outbound behavior of the Basic load balancer. For more information about outbound rules, see [Outbound rules](./outbound-rules.md).
- Network security group
- - Basic Load Balancer doesn't require a network security group to allow outbound connectivity. In case there's no network security group associated with the Virtual Machine Scale Set, a new network security group will be created to preserve the same functionality. This new network security group will be associated to the Virtual Machine Scale Set backend pool member network interfaces. It will allow the same load balancing rules ports and protocols and preserve the outbound connectivity.
+ - Basic Load Balancer doesn't require a network security group to allow outbound connectivity. In case there's no network security group associated with the Virtual Machine Scale Set, a new network security group is created to preserve the same functionality. This new network security group is associated to the Virtual Machine Scale Set backend pool member network interfaces. It allows the same load balancing rules ports and protocols and preserve the outbound connectivity.
- Backend pools:
- - All backend pools will be migrated to the new Standard Load Balancer
- - All Virtual Machine Scale Set network interfaces and IP configurations will be migrated to the new Standard Load Balancer
+ - All backend pools are migrated to the new Standard Load Balancer
+ - All Virtual Machine Scale Set network interfaces and IP configurations are migrated to the new Standard Load Balancer
- If a Virtual Machine Scale Set is using Rolling Upgrade policy, the script will update the Virtual Machine Scale Set upgrade policy to "Manual" during the migration process and revert it back to "Rolling" after the migration is completed. **Internal Load Balancer:** - Private frontend IP configuration - Health Probes:
- - All probes will be migrated to the new Standard Load Balancer
+ - All probes are migrated to the new Standard Load Balancer
- Load balancing rules:
- - All load balancing rules will be migrated to the new Standard Load Balancer
+ - All load balancing rules are migrated to the new Standard Load Balancer
- Inbound NAT Pools: - All inbound NAT Pools will be migrated to the new Standard Load Balancer - Inbound NAT Rules:
- - All user-created NAT rules will be migrated to the new Standard Load Balancer
+ - All user-created NAT rules are migrated to the new Standard Load Balancer
- Backend pools:
- - All backend pools will be migrated to the new Standard Load Balancer
- - All Virtual Machine Scale Set network interfaces and IP configurations will be migrated to the new Standard Load Balancer
+ - All backend pools are migrated to the new Standard Load Balancer
+ - All Virtual Machine Scale Set network interfaces and IP configurations are migrated to the new Standard Load Balancer
- If there's a Virtual Machine Scale Set using Rolling Upgrade policy, the script will update the Virtual Machine Scale Set upgrade policy to "Manual" during the migration process and revert it back to "Rolling" after the migration is completed. >[!NOTE]
The module is designed to accommodate failures, either due to unhandled errors o
1. Address the cause of the migration failure. Check the log file `Start-AzBasicLoadBalancerUpgrade.log` for details 1. [Remove the new Standard Load Balancer](./update-load-balancer-with-vm-scale-set.md) (if created). Depending on which stage of the migration failed, you may have to remove the Standard Load Balancer reference from the Virtual Machine Scale Set network interfaces (IP configurations) and Health Probes in order to remove the Standard Load Balancer.
- 1. Locate the Basic Load Balancer state backup file. This file will either be in the directory where the script was executed, or at the path specified with the `-RecoveryBackupPath` parameter during the failed execution. The file will be named: `State_<basicLBName>_<basicLBRGName>_<timestamp>.json`
+ 1. Locate the Basic Load Balancer state backup file. This file will either be in the directory where the script was executed, or at the path specified with the `-RecoveryBackupPath` parameter during the failed execution. The file is named: `State_<basicLBName>_<basicLBRGName>_<timestamp>.json`
1. Rerun the migration script, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath>` and `-FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters instead of `-BasicLoadBalancerName`, or passing the Basic Load Balancer over the pipeline.
## Next steps
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
Previously updated : 03/17/2022 Last updated : 04/17/2023 -+ # Upgrade from a basic public to standard public load balancer
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure that the load balancer has a frontend IP configuration and a backend pool (a quick check is sketched after this list).
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
+* The script can't migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
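To avoid that error, you can confirm both prerequisites before running the script. The following is a minimal sketch with hypothetical resource names:

```azurepowershell
# Confirm the Basic Load Balancer has a frontend IP configuration and a backend pool.
# The resource group and load balancer names are hypothetical examples.
$lb = Get-AzLoadBalancer -ResourceGroupName 'myResourceGroup' -Name 'myBasicLoadBalancer'
$lb.FrontendIpConfigurations | Format-Table Name
$lb.BackendAddressPools | Format-Table Name
```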
### Change allocation method of the public IP address to static
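One way to change the allocation method is with Azure PowerShell, as in this sketch (resource names are hypothetical; you can also make the change in the Azure portal):

```azurepowershell
# Change the public IP address from dynamic to static allocation.
# The resource group and public IP names are hypothetical examples.
$publicIp = Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myBasicPublicIP'
$publicIp.PublicIpAllocationMethod = 'Static'
Set-AzPublicIpAddress -PublicIpAddress $publicIp
```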
Download the migration script from the [PowerShell Gallery](https://www.powershe
There are two options depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Azure Az modules installed, or donΓÇÖt mind uninstalling the Azure Az modules, use the `Install-Script` option to run the script.
+* If you don't have the Az PowerShell module installed, or don't mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
-* If you need to keep the Azure Az modules, download the script and run it directly.
+* If you need to keep the Az PowerShell module, download the script and run it directly.
-To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
+To determine if you have the Az PowerShell module installed, run `Get-InstalledModule -Name az`. If the Az PowerShell module isn't installed, you can use the `Install-Script` method.
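As an illustration of that check, the following sketch branches on whether the Az module is found:

```azurepowershell
# Check whether the Az PowerShell module is installed before choosing an install method.
$az = Get-InstalledModule -Name Az -ErrorAction SilentlyContinue
if ($az) {
    Write-Output "Az $($az.Version) is installed; download the script and run it directly."
}
else {
    Write-Output "Az module not found; you can use the Install-Script method."
}
```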
### Install with Install-Script
-To use this option, don't have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. Uninstall the Azure Az modules, or use the other option to download the script manually and run it.
+To use this option, make sure that you don't have the Az PowerShell module installed on your computer. If it's installed, the following command displays an error. Uninstall the Az PowerShell module, or use the other option to download the script manually and run it.
Run the script with the following command: ```azurepowershell Install-Script -Name AzurePublicLBUpgrade ```
-This command also installs the required Az modules.
+This command also installs the required Az PowerShell module.
### Install with the script directly
-If you do have Azure Az modules installed and can't uninstall them, or don't want to uninstall them,you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+If you do have the Az PowerShell module installed and can't uninstall it, or don't want to uninstall it, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
To run the script: 1. Use `Connect-AzAccount` to connect to Azure.
-2. Use `Import-Module Az` to import the Az modules.
+2. Use `Import-Module Az` to import the Az PowerShell module.
3. Examine the required parameters:
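The run sequence above can look like the following sketch. The script path is illustrative, and `Get-Help` is shown only as a way to review the script's documented parameters before running it.

```azurepowershell
# Steps 1 and 2: sign in and import the Az PowerShell module.
Connect-AzAccount
Import-Module Az

# Step 3: review the script's required parameters before running it.
# The script path is illustrative; use the location where you saved the downloaded .ps1 file.
Get-Help .\AzurePublicLBUpgrade.ps1 -Detailed
```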
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
Previously updated : 12/15/2022 Last updated : 04/17/2023
This command also installs the required Az PowerShell module.
### Install using the Manual Download method
-If you do have some Azure Az PowerShell module installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+If you do have the Az PowerShell module installed and can't uninstall it (or don't want to uninstall it), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
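As a rough sketch of the manual approach (the package and folder names are hypothetical; see the Manual Package Download article for the full procedure):

```azurepowershell
# Extract a script downloaded from the PowerShell Gallery as a raw .nupkg file.
# The file and folder names are hypothetical examples.
Copy-Item '.\UpgradeScript.nupkg' '.\UpgradeScript.zip'
Expand-Archive '.\UpgradeScript.zip' -DestinationPath '.\UpgradeScript'
Get-ChildItem '.\UpgradeScript' -Filter *.ps1    # locate the .ps1 file to run
```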
### Run the script
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
description: Learn how to upgrade a basic internal load balancer to a standard p
Previously updated : 03/17/2022 Last updated : 04/17/2023 -+ # Upgrade an internal basic load balancer - Outbound connections required
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure that the load balancer has a frontend IP configuration and a backend pool.
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
+* The script can't migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
## Download the script
Download the migration script from the [PowerShell Gallery](https://www.powershe
There are two options depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Azure Az modules installed, or donΓÇÖt mind uninstalling the Azure Az modules, use the `Install-Script` option to run the script.
+* If you don't have the Az PowerShell module installed, or don't mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
-* If you need to keep the Azure Az modules, download the script and run it directly.
+* If you need to keep the Az PowerShell module, download the script and run it directly.
-To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
+To determine if you have the Az PowerShell module installed, run `Get-InstalledModule -Name az`. If the Az PowerShell module isn't installed, you can use the `Install-Script` method.
### Install with Install-Script
-To use this option, don't have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. Uninstall the Azure Az modules, or use the other option to download the script manually and run it.
+To use this option, make sure that you don't have the Az PowerShell module installed on your computer. If it's installed, the following command displays an error. Uninstall the Az PowerShell module, or use the other option to download the script manually and run it.
Run the script with the following command: ```azurepowershell Install-Script -Name AzureLBUpgrade ```
-This command also installs the required Az modules.
+This command also installs the required Az PowerShell module.
### Install with the script directly
-If you do have Azure Az modules installed and can't uninstall them, or don't want to uninstall them,you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+If you do have the Az PowerShell module installed and can't uninstall it, or don't want to uninstall it, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
To run the script: 1. Use `Connect-AzAccount` to connect to Azure.
-2. Use `Import-Module Az` to import the Az modules.
+2. Use `Import-Module Az` to import the Az PowerShell module.
3. Examine the required parameters:
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md
Title: Deploy IPv6 dual stack application - Standard Load Balancer - CLI
-description: This article shows how deploy an IPv6 dual stack application in Azure virtual network using Azure CLI.
+description: This article shows how to deploy an IPv6 dual stack application in Azure virtual network using Azure CLI.
-+ Previously updated : 03/31/2020-- Last updated : 04/17/2023++ # Deploy an IPv6 dual stack application in Azure virtual network using Azure CLI
az vm availability-set create \
### Create network security group
-Create a network security group for the rules that will govern inbound and outbound communication in your VNet.
+Create a network security group for the rules that govern inbound and outbound communication in your VNet.
#### Create a network security group
az network nic ip-config create \
### Create virtual machines
-Create the VMs with [az vm create](/cli/azure/vm#az-vm-create). The following example creates two VMs and the required virtual network components if they do not already exist.
+Create the VMs with [az vm create](/cli/azure/vm#az-vm-create). The following example creates two VMs and the required virtual network components if they don't already exist.
Create virtual machine *dsVM0* as follows:
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md
Title: Deploy IPv6 dual stack application - Standard Load Balancer - PowerShell
-description: This article shows how deploy an IPv6 dual stack application with Standard Load Balancer in Azure virtual network using Azure PowerShell.
+description: This article shows how to deploy an IPv6 dual stack application with Standard Load Balancer in Azure virtual network using Azure PowerShell.
-+ Previously updated : 04/01/2020-- Last updated : 04/17/2023++ # Deploy an IPv6 dual stack application in Azure virtual network using PowerShell
This article shows you how to deploy a dual stack (IPv4 + IPv6) application usin
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
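For example, the version check and sign-in described above can be run together:

```azurepowershell
# Check the locally installed Az module version, then sign in to Azure.
Get-Module -ListAvailable Az | Select-Object Name, Version
Connect-AzAccount
```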
## Create a resource group
$PublicIP_v6 = New-AzPublicIpAddress `
-IpAddressVersion IPv6 ` -Sku Standard ```
-To access your virtual machines using a RDP connection, create a IPV4 public IP addresses for the virtual machines with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress).
+To access your virtual machines using an RDP connection, create an IPv4 public IP address for each virtual machine with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress).
```azurepowershell-interactive $RdpPublicIP_1 = New-AzPublicIpAddress `
$avset = New-AzAvailabilitySet `
### Create network security group
-Create a network security group for the rules that will govern inbound and outbound communication in your VNET.
+Create a network security group for the rules that govern inbound and outbound communication in your VNet.
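The following sections build up the group and its rules step by step. As a condensed sketch, a network security group with an inbound RDP rule can be created like this (names, location, and priority values are illustrative):

```azurepowershell
# Create an inbound rule for RDP (port 3389) and a network security group that uses it.
# The names, location, and priority values are illustrative examples.
$rdpRule = New-AzNetworkSecurityRuleConfig `
  -Name 'allow-rdp' `
  -Protocol Tcp `
  -Direction Inbound `
  -Priority 100 `
  -SourceAddressPrefix '*' `
  -SourcePortRange '*' `
  -DestinationAddressPrefix '*' `
  -DestinationPortRange 3389 `
  -Access Allow

$nsg = New-AzNetworkSecurityGroup `
  -ResourceGroupName 'dsRG1' `
  -Location 'eastus' `
  -Name 'dsNSG1' `
  -SecurityRules $rdpRule
```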
#### Create a network security group rule for port 3389
Set an administrator username and password for the VMs with [Get-Credential](/po
$cred = get-credential -Message "DUAL STACK VNET SAMPLE: Please enter the Administrator credential to log into the VMs." ```
-Now you can create the VMs with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates two VMs and the required virtual network components if they do not already exist.
+Now you can create the VMs with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates two VMs and the required virtual network components if they don't already exist.
```azurepowershell-interactive $vmsize = "Standard_A2"
$VM2 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location
``` ## Determine IP addresses of the IPv4 and IPv6 endpoints
-Get all Network Interface Objects in the resource group to summarize the IP's used in this deployment with `get-AzNetworkInterface`. Also, get the Load Balancer's frontend addresses of the IPv4 and IPv6 endpoints with `get-AzpublicIpAddress`.
+Get all network interface objects in the resource group to summarize the IPs used in this deployment with `Get-AzNetworkInterface`. Also, get the load balancer's IPv4 and IPv6 frontend addresses with `Get-AzPublicIpAddress`.
```azurepowershell-interactive $rgName= "dsRG1"
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
Previously updated : 11/17/2021 Last updated : 04/17/2023 -+ # What's new in Azure Load Balancer?
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Type |Name |Description |Date added | | ||||
-| SKU | [Basic Load Balancer is retiring on 30 September 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on 30th September 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 |
+| SKU | [Basic Load Balancer is retiring on September 30, 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on September 30, 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 |
| SKU | [Gateway Load Balancer now generally available](https://azure.microsoft.com/updates/generally-available-azure-gateway-load-balancer/) | Gateway Load Balancer is a new SKU of Azure Load Balancer targeted for scenarios requiring transparent NVA (network virtual appliance) insertion. Learn more about [Gateway Load Balancer](gateway-overview.md) or our supported [third party partners](gateway-partners.md). | July 2022 |
-| SKU | [Gateway Load Balancer public preview](https://azure.microsoft.com/updates/gateway-load-balancer-preview/) | Gateway Load Balancer is a fully managed service enabling you to deploy, scale, and enhance the availability of third party network virtual appliances (NVAs) in Azure. You can add your favorite third party appliance whether it is a firewall, inline DDoS appliance, deep packet inspection system, or even your own custom appliance into the network path transparently ΓÇô all with a single click.| November 2021 |
-| Feature | [Support for IP-based backend pools (General Availability)](https://azure.microsoft.com/updates/iplbg)|March 2021 |
-| Feature | [Instance Metadata support for Standard SKU Load Balancers and Public IPs](https://azure.microsoft.com/updates/standard-load-balancer-and-ip-addresses-metadata-now-available-through-azure-instance-metadata-service-imds/)|Metadata of Standard Public IP addresses and Standard Load Balancer can now be retrieved through Azure Instance Metadata Service (IMDS). The metadata is available from within the running instances of virtual machines (VMs) and virtual machine scale sets instances. You can leverage the metadata to manage your virtual machines. Learn more [here](instance-metadata-service-load-balancer.md)| February 2021 |
+| SKU | [Gateway Load Balancer public preview](https://azure.microsoft.com/updates/gateway-load-balancer-preview/) | Gateway Load Balancer is a fully managed service enabling you to deploy, scale, and enhance the availability of third party network virtual appliances (NVAs) in Azure. You can add your favorite third party appliance, whether it's a firewall, inline DDoS appliance, deep packet inspection system, or even your own custom appliance, into the network path transparently, all with a single action.| November 2021 |
+| Feature | [Support for IP-based backend pools (General Availability)](https://azure.microsoft.com/updates/iplbg)| Azure Load Balancer now supports IP-based backend pools, which let you add backend pool members by IP address. |March 2021 |
+| Feature | [Instance Metadata support for Standard SKU Load Balancers and Public IPs](https://azure.microsoft.com/updates/standard-load-balancer-and-ip-addresses-metadata-now-available-through-azure-instance-metadata-service-imds/)| Metadata of Standard Public IP addresses and Standard Load Balancer can now be retrieved through Azure Instance Metadata Service (IMDS). The metadata is available from within the running instances of virtual machines (VMs) and Virtual Machine Scale Sets instances. You can use the metadata to manage your virtual machines. Learn more [here](instance-metadata-service-load-balancer.md).| February 2021 |
| Feature | [Public IP SKU upgrade from Basic to Standard without losing IP address](https://azure.microsoft.com/updates/public-ip-sku-upgrade-generally-available/) | As you move from Basic to Standard Load Balancers, retain your public IP address. Learn more [here](../virtual-network/ip-services/public-ip-upgrade-portal.md)| January 2021| | Feature | Support for moves across resource groups | Standard Load Balancer and Standard Public IP support for [resource group moves](https://azure.microsoft.com/updates/standard-resource-group-move/). | October 2020 | | Feature | [Cross-region load balancing with Global tier on Standard LB](https://azure.microsoft.com/updates/preview-azure-load-balancer-now-supports-crossregion-load-balancing/) | Azure Load Balancer supports Cross Region Load Balancing. Previously, Standard Load Balancer had a regional scope. With this release, you can load balance across multiple Azure regions via a single, static, global anycast Public IP address. | September 2020 |
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation | | - |||
-| IP based LB outbound IP | IP based LB leverages Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, please leverage NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
-| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, is not respected. Load Balancer health probes will probe up/down immediately after 1 probe regardless of the property's configured value | To reflect the current behavior, please set the value of numberOfProbes ("Unhealthy threshold" in Portal) as 1 |
-|Cross region balancer in West Europe| Currently, there are a limited amount of IP addresses available in West Europe for Azure's cross region Load Balancer. This may impact customers' ability to deploy cross region load balancers in the West Europe region.| We recommend that customers use another home region as part of their cross region deployment.|
+| IP based LB outbound IP | IP based LB uses Azure's default outbound access IP for outbound connectivity | To prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
+| numberOfProbes, "Unhealthy threshold" | The health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To reflect the current behavior, set the value of numberOfProbes ("Unhealthy threshold" in the portal) to 1 |
+|Cross region balancer in West Europe| Currently, there's a limited number of IP addresses available in West Europe for Azure's cross region Load Balancer. This may affect customers' ability to deploy cross region load balancers in the West Europe region.| We recommend that customers use another home region as part of their cross region deployment.|
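To illustrate the numberOfProbes mitigation in the table above, here's a hedged sketch using Azure PowerShell (resource and probe names are hypothetical):

```azurepowershell
# Set an existing health probe's numberOfProbes ("Unhealthy threshold") to 1 and save the change.
# The resource group, load balancer, and probe names are hypothetical examples.
$lb = Get-AzLoadBalancer -ResourceGroupName 'myResourceGroup' -Name 'myLoadBalancer'
Set-AzLoadBalancerProbeConfig -LoadBalancer $lb `
  -Name 'myHealthProbe' `
  -Protocol Tcp `
  -Port 80 `
  -IntervalInSeconds 15 `
  -ProbeCount 1
Set-AzLoadBalancer -LoadBalancer $lb
```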
logic-apps Create Maps Data Transformation Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-maps-data-transformation-visual-studio-code.md
To exchange messages that have different XML or JSON formats in an Azure Logic A
To visually create and edit a map, you can use Visual Studio Code with the Data Mapper extension within the context of a Standard logic app project. The Data Mapper tool provides a unified experience for XSLT mapping and transformation using drag and drop gestures, a prebuilt functions library for creating expressions, and a way to manually test the maps that you create and use in your workflows.
-After you create your map, you can directly call that map from a workflow in your logic app project or from a workflow in the Azure portal. For this task, add the **Data Mapper Operations** action named **Transform using Data Mapper XSLT** to your workflow. To use this action in the Azure portal, add the map to either of the following resources:
+After you create your map, you can directly call that map from a workflow in your logic app project or from a workflow in the Azure portal. For this task, you can use the **Data Mapper Operations** action named **Transform using Data Mapper XSLT** in your workflow.
-- An integration account for a Consumption or Standard logic app resource-- The Standard logic app resource itself-
-This how-to guide shows how to complete the following tasks:
+This how-to guide shows how to create a blank data map, choose your source and target schemas, select schema elements to start mapping, create various mappings, save and test your map, and then call the map from a workflow in your logic app project.
-- Create a blank data map.-- Specify the source and target schemas to use.-- Navigate the map.-- Select the target and source elements to map.-- Create a direct mapping between elements.-- Create a complex mapping between elements.-- Create a loop between arrays.-- Crete an if condition between elements.-- Save the map.-- Test the map.-- Call the map from a workflow in your logic app project.-
-## Limitations
+## Limitations and known issues
- The Data Mapper extension currently works only in Visual Studio Code running on Windows operating systems.
This how-to guide shows how to complete the following tasks:
- The map layout and item position are currently automatic and read only.
-## Known issues
-
-The Data Mapper extension currently works only with schemas in flat folder-structured projects.
+- The Data Mapper extension currently works only with schemas in flat folder-structured projects.
## Prerequisites
To use the same **Transform using Data Mapper XSLT** action in the Azure portal,
## Next steps -- For data transformations using B2B operations in Azure Logic Apps, see [Add maps for transformations in workflows with Azure Logic Apps](logic-apps-enterprise-integration-maps.md)
+- For data transformations using B2B operations in Azure Logic Apps, see [Add maps for transformations in workflows with Azure Logic Apps](logic-apps-enterprise-integration-maps.md)
logic-apps Logic Apps Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-functions.md
Before you can set up your function app to use Azure AD authentication, you need
#### Find the tenant ID for your Azure AD
-To find your Azure AD tenant ID, either run the PowerShell command named [**Get-AzureAccount**](/powershell/module/servicemanagement/azure.service/get-azureaccount), or in the Azure portal, follow these steps:
+To find your Azure AD tenant ID, either run the PowerShell command named [**Get-AzureAccount**](/powershell/module/servicemanagement/azure/get-azureaccount), or in the Azure portal, follow these steps:
1. In the [Azure portal](https://portal.azure.com), open your Azure AD tenant. These steps use **Fabrikam** as the example tenant.
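If you prefer the Az PowerShell module over the older **Get-AzureAccount** cmdlet, the tenant ID is also available as shown in this sketch:

```azurepowershell
# Retrieve the Azure AD tenant ID with the Az PowerShell module.
Connect-AzAccount
(Get-AzContext).Tenant.Id    # tenant ID of the current context
Get-AzTenant                 # or list all tenants available to the signed-in account
```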
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
Title: Add maps to use with workflows
description: Add maps for transform operations in workflows with Azure Logic Apps. ms.suite: integration-- Previously updated : 08/22/2022 Last updated : 04/18/2023 # Add maps for transformations in workflows with Azure Logic Apps
This article shows how to add a map to your integration account. If you're worki
* An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The map that you want to add. To create maps, you can use the following tools with the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas):
+* The map that you want to add. To create maps, you can use the following tools:
- * Visual Studio 2019 and the [Microsoft Azure Logic Apps Enterprise Integration Tools Extension](https://aka.ms/vsenterpriseintegrationtools).
+ * Visual Studio Code and the Data Mapper extension. To call the maps created with Data Mapper from your workflow, you must use the **Data Mapper Operations** action named **Transform using Data Mapper XSLT**, not the **XML Operations** action named **Transform XML**. For more information, see [Create maps for data transformation with Visual Studio Code](create-maps-data-transformation-visual-studio-code.md).
- * Visual Studio 2015 and the [Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 2.0](https://aka.ms/vsmapsandschemas) extension.
+ * Visual Studio 2019 and the [Microsoft Azure Logic Apps Enterprise Integration Tools extension](https://aka.ms/vsenterpriseintegrationtools).
- > [!NOTE]
- > Don't install the extension alongside the BizTalk Server extension. Having both extensions might
- > produce unexpected behavior. Make sure that you only have one of these extensions installed.
- >
- > On high resolution monitors, you might experience a [display problem with the map designer](/visualstudio/designers/disable-dpi-awareness)
- > in Visual Studio. To resolve this display problem, either [restart Visual Studio in DPI-unaware mode](/visualstudio/designers/disable-dpi-awareness#restart-visual-studio-as-a-dpi-unaware-process),
- > or add the [DPIUNAWARE registry value](/visualstudio/designers/disable-dpi-awareness#add-a-registry-entry).
+ * Visual Studio 2015 and the [Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 2.0 extension](https://aka.ms/vsmapsandschemas).
- For more information, review the [Create maps](#create-maps) section in this article.
+ > [!NOTE]
+ > Don't install the Microsoft Azure Logic Apps Enterprise Integration Tools extension alongside the BizTalk Server extension.
+ > Having both extensions might produce unexpected behavior. Make sure that you only have one of these extensions installed.
+ >
+ > On high resolution monitors, you might experience a [display problem with the map designer](/visualstudio/designers/disable-dpi-awareness)
+ > in Visual Studio. To resolve this display problem, either [restart Visual Studio in DPI-unaware mode](/visualstudio/designers/disable-dpi-awareness#restart-visual-studio-as-a-dpi-unaware-process),
+ > or add the [DPIUNAWARE registry value](/visualstudio/designers/disable-dpi-awareness#add-a-registry-entry).
+
+ For more information, review the [Create maps](#create-maps) section in this article.
* Based on whether you're working on a Consumption or Standard logic app workflow, you'll need an [integration account resource](logic-apps-enterprise-integration-create-integration-account.md). Usually, you need this resource when you want to define and store artifacts for use in enterprise integration and B2B workflows.
This article shows how to add a map to your integration account. If you're worki
## Create maps
-To create an XSLT document to use as a map, create an integration project in Visual Studio 2019 or 2015 using the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas). In the integration project, you can build an integration map file, which lets you visually map items between two XML schema files. These tools offer the following map capabilities:
+You can create maps using either Visual Studio Code with the Data Mapper extension or Visual Studio with the Microsoft Azure Logic Apps Enterprise Integration Tools extension.
+
+### Visual Studio Code
+
+When you create maps using Visual Studio Code and the Data Mapper extension, you can call these maps from your workflow, but only with the **Data Mapper Operations** action named **Transform using Data Mapper XSLT**, not the **XML Operations** action named **Transform XML**. For more information, see [Create maps for data transformation with Visual Studio Code](create-maps-data-transformation-visual-studio-code.md).
+
+### Visual Studio
+
+When you create maps using Visual Studio, you'll need to create an integration project with either of the following tools:
+
+* Visual Studio 2019 and the [Microsoft Azure Logic Apps Enterprise Integration Tools extension](https://aka.ms/vsenterpriseintegrationtools)
+
+* Visual Studio 2015 and the [Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 2.0 extension](https://aka.ms/vsmapsandschemas).
+
+In the integration project, you can build an integration map file, which lets you visually map items between two XML schema files. These tools offer the following map capabilities:
* You work with a graphical representation of the map, which shows all the relationships and links you create.
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 03/27/2023 Last updated : 04/19/2023 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| Name | Limit | Notes | | - | -- | -- |
-| Workflows per region per subscription | - Consumption: 1,000 workflows where each logic app is limited to 1 workflow <br><br>- Standard: Unlimited, based on the selected hosting plan, app activity, size of machine instances, and resource usage, where each logic app can have multiple workflows ||
+| Workflows per region per Azure subscription | - Consumption: 1,000 workflows where each logic app is limited to 1 workflow <br><br>- Standard: Unlimited, based on the selected hosting plan, app activity, size of machine instances, and resource usage, where each logic app can have multiple workflows ||
| Workflow - Maximum name length | - Consumption: 80 characters <br><br>- Standard: 43 characters || | Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. | | Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. |
For more information, review the following documentation:
| Name | Limit | ||-| | Managed identities per logic app resource | - Consumption: Either the system-assigned identity *or* only one user-assigned identity <p>- Standard: The system-assigned identity *and* any number of user-assigned identities <p>**Note**: By default, a **Logic App (Standard)** resource has the system-assigned managed identity automatically enabled to authenticate connections at runtime. This identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this identity, connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**. |
-| Number of logic apps that have a managed identity in an Azure subscription per region | 1,000 |
+| Number of logic apps that have a managed identity in an Azure subscription per region | - Consumption: 1,000 logic apps <br>- Standard: Per [Azure App Service limits, if any](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) |
||| <a name="integration-account-limits"></a>
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
ms.suite: integration Previously updated : 11/02/2022 Last updated : 04/18/2023 # As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
The following table summarizes how the Consumption model handles metering and bi
| Trigger and action operations | The Consumption model includes an *initial number* of free built-in operations, per Azure subscription, that a workflow can run. Above this number, metering applies to *each execution*, and billing follows the [*Actions* pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For other operation types, such as managed connectors, billing follows the [*Standard* or *Enterprise* connector pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Consumption model](#consumption-operations). | | Storage operations | Metering applies *only to data retention-related storage consumption* such as saving inputs and outputs from your workflow's run history. Billing follows the [data retention pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Storage operations](#storage-operations). | | Integration accounts | Metering applies based on the integration account type that you create and use with your logic app. Billing follows [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/) unless your logic app is deployed and hosted in an [integration service environment (ISE)](#ise-pricing). For more information, review [Integration accounts](#integration-accounts). |
-|||
<a name="consumption-operations"></a>
The following table summarizes how the Consumption model handles metering and bi
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Consumption model includes an *initial number of free built-in operations*, per Azure subscription, that a workflow can run. Above this number, built-in operation executions follow the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Some managed connector operations are *also* available as built-in operations, which are included in the initial free operations. Above the initially free operations, billing follows the [*Actions* pricing](https://azure.microsoft.com/pricing/details/logic-apps/), not the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). | | [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the **Standard** or **Enterprise** label. | These operation executions follow the [*Standard* or *Enterprise* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operation executions follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). | | [*Custom connector*](../connectors/introduction.md#custom-connectors-and-apis) | These operations run separately in Azure. In the designer, you can find these operations under the **Custom** label. For limits number of connectors, throughput, and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). | These operation executions follow the [*Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). |
-||||
For more information about how the Consumption model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
To help you estimate more accurate consumption costs, review these tips:
In single-tenant Azure Logic Apps, a logic app and its workflows follow the [**Standard** plan](https://azure.microsoft.com/pricing/details/logic-apps/) for pricing and billing. You create such logic apps in various ways, for example, when you choose the **Logic App (Standard)** resource type or use the **Azure Logic Apps (Standard)** extension in Visual Studio Code. This pricing model requires that logic apps use a hosting plan and a pricing tier, which differs from the Consumption plan in that you're billed for reserved capacity and dedicated resources whether or not you use them.
-When you create or deploy logic apps with the **Logic App (Standard)** resource type, you can use the Workflow Standard hosting plan in all Azure regions. You also have the option to select an existing **App Service Environment v3** resource as your deployment location, but you can only use the [App Service plan](../app-service/overview-hosting-plans.md) with this option. If you choose this option, you're charged for the instances used by the App Service plan and for running your logic app workflows. No other charges apply.
+When you create or deploy logic apps with the **Logic App (Standard)** resource type, and you select any Azure region for deployment, you'll also select a Workflow Standard hosting plan. However, if you select an existing **App Service Environment v3** resource for your deployment location, you must then select an [App Service Plan](../app-service/overview-hosting-plans.md).
> [!IMPORTANT] > The following plans and resources are no longer available or supported with the public release of the **Logic App (Standard)** resource type in Azure regions:
-> Functions Premium plan, App Service Environment v1, and App Service Environment v2. Except with ASEv3, the App Service plan is unavailable and unsupported.
+> Functions Premium plan, App Service Environment v1, and App Service Environment v2. The App Service Plan is unavailable and unsupported, except with App Service Environment v3 (ASEv3).
The following table summarizes how the Standard model handles metering and billing for the following components when used with a logic app and a workflow in single-tenant Azure Logic Apps:
The following table summarizes how the Standard model handles metering and billi
| Trigger and action operations | The Standard model includes an *unlimited number* of free built-in operations that your workflow can run. <p>If your workflow uses any managed connector operations, metering applies to *each call*, while billing follows the [same *Standard* or *Enterprise* connector pricing as the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). For more information, review [Trigger and action operations in the Standard model](#standard-operations). | | Storage operations | Metering applies to any storage operations run by Azure Logic Apps. For example, storage operations run when the service saves inputs and outputs from your workflow's run history. Billing follows your chosen [pricing tier](#standard-pricing-tiers). For more information, review [Storage operations](#storage-operations). | | Integration accounts | If you create an integration account for your logic app to use, metering is based on the integration account type that you create. Billing follows the [*Integration Account* pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For more information, review [Integration accounts](#integration-accounts). |
-|||
<a name="standard-pricing-tiers"></a> ### Pricing tiers in the Standard model
-The pricing tier that you choose for metering and billing your logic app includes specific amounts of compute in virtual CPU (vCPU) and memory resources. Currently, only the **Workflow Standard** hosting plan is available for the **Logic App (Standard)** resource type and offers the following pricing tiers:
+The pricing tier that you choose for metering and billing for your **Logic App (Standard)** resource includes specific amounts of compute in virtual CPU (vCPU) and memory resources. If you select an App Service Environment v3 as the deployment location and an App Service Plan, specifically an Isolated V2 Service Plan pricing tier, you're charged for the instances used by the App Service Plan and for running your logic app workflows. No other charges apply. For more information, see [App Service Plan - Isolated V2 Service Plan pricing tiers](https://azure.microsoft.com/pricing/details/app-service/windows/#pricing).
+
+If you select a **Workflow Standard** hosting plan, you can choose from the following tiers:
| Pricing tier | Virtual CPU (vCPU) | Memory (GB) | |--|--|-| | **WS1** | 1 | 3.5 | | **WS2** | 2 | 7 | | **WS3** | 4 | 14 |
-||||
> [!IMPORTANT] >
The pricing tier that you choose for metering and billing your logic app include
> |-|--| > | **vCPU** | $0.192 per vCPU | > | **Memory** | $0.0137 per GB |
-> |||
> > The following calculation provides an estimated monthly rate: >
The pricing tier that you choose for metering and billing your logic app include
> | **WS1** | 1 | 3.5 | $175.16 | > | **WS2** | 2 | 7 | $350.33 | > | **WS3** | 4 | 14 | $700.65 |
-> |||||
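To make the arithmetic behind these example rates concrete, the following sketch computes the WS1 estimate, assuming roughly 730 hours per month and the sample hourly prices shown above:

```azurepowershell
# Estimate the monthly rate for a WS1 instance (1 vCPU, 3.5 GB memory)
# from the sample hourly prices above, assuming about 730 hours per month.
$vCpuPerHour   = 0.192     # sample price per vCPU per hour
$memoryPerHour = 0.0137    # sample price per GB of memory per hour
$hoursPerMonth = 730

$ws1 = (1 * $vCpuPerHour + 3.5 * $memoryPerHour) * $hoursPerMonth
'WS1 estimated monthly rate: ${0:N2}' -f $ws1    # roughly $175.16
```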
<a name="standard-operations"></a>
The following table summarizes how the Standard model handles metering and billi
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime. In the designer, you can find these operations under the **Built-in** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The Standard model includes unlimited free built-in operations. <p><p>**Note**: Some managed connector operations are *also* available as built-in operations. While built-in operations are free, the Standard model still meters and bills managed connector operations using the [same *Standard* or *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). | | [*Managed connector*](../connectors/managed.md) | These operations run separately in Azure. In the designer, you can find these operations under the combined **Azure** label. | The Standard model meters and bills managed connector operations based on the [same *Standard* and *Enterprise* connector pricing as the Consumption model](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Note**: Preview Enterprise connector operations follow the [Consumption *Standard* connector pricing](https://azure.microsoft.com/pricing/details/logic-apps/). | | [*Custom connector*](../connectors/introduction.md#custom-connectors-and-apis) | Currently, you can create and use only [custom built-in connector operations](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272) in single-tenant based logic app workflows. | The Standard model includes unlimited free built-in operations. For limits on throughput and timeout, review [Custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
-||||
For more information about how the Standard model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the ISE model handles metering and billing fo
||-| | **Premium** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Premium SKU](https://azure.microsoft.com/pricing/details/logic-apps). If you need more throughput, you can [add more scale units](../logic-apps/ise-manage-integration-service-environment.md#add-capacity) when you create your ISE or afterwards. Each scale unit is billed at an [hourly rate that's roughly half the base unit rate](https://azure.microsoft.com/pricing/details/logic-apps). <p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | | **Developer** | The base unit has [fixed capacity](logic-apps-limits-and-config.md#integration-service-environment-ise) and is [billed at an hourly rate for the Developer SKU](https://azure.microsoft.com/pricing/details/logic-apps). However, this SKU has no service-level agreement (SLA), scale up capability, or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service. <p><p>**Important**: Make sure that you use this SKU only for exploration, experiments, development, and testing - not for production or performance testing. <p><p>For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). |
-|||
The following table summarizes how the ISE model handles the following components when used with a logic app and a workflow in an ISE:
The following table summarizes how the ISE model handles the following component
| Trigger and action operations | The ISE model includes free built-in, managed connector, and custom connector operations that your workflow can run, but subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise) and [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). For more information, review [Trigger and action operations in the ISE model](#integration-service-environment-operations). | | Storage operations | The ISE model includes free storage consumption, such as data retention. For more information, review [Storage operations](#storage-operations). | | Integration accounts | The ISE model includes a single free integration account tier, based on your selected ISE SKU. For an [extra cost](https://azure.microsoft.com/pricing/details/logic-apps/), you can create more integration accounts for your ISE to use up to the [total ISE limit](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). For more information, review [Integration accounts](#integration-accounts). |
-|||
<a name="integration-service-environment-operations"></a>
The following table summarizes how the ISE model handles the following operation
| [*Built-in*](../connectors/built-in.md) | These operations run directly and natively with the Azure Logic Apps runtime and in the same ISE as your logic app workflow. In the designer, you can find these operations under the **Built-in** label, but each operation also displays the **CORE** label. <p>For example, the HTTP trigger and Request trigger are built-in triggers. The HTTP action and Response action are built-in actions. Other built-in operations include workflow control actions such as loops and conditions, data operations, batch operations, and others. | The ISE model includes these operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | | [*Managed connector*](../connectors/managed.md) | Whether *Standard* or *Enterprise*, managed connector operations run in either your ISE or multi-tenant Azure, based on whether the connector or operation displays the **ISE** label. <p><p>- **ISE** label: These operations run in the same ISE as your logic app and work without requiring the [on-premises data gateway](#data-gateway). <p><p>- No **ISE** label: These operations run in multi-tenant Azure. | The ISE model includes both **ISE** and no **ISE** labeled operations *for free*, but are subject to the [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). | | [*Custom connector*](../connectors/introduction.md#custom-connectors-and-apis) | In the designer, you can find these operations under the **Custom** label. | The ISE model includes these operations *for free*, but are subject to [custom connector limits in Azure Logic Apps](logic-apps-limits-and-config.md#custom-connector-limits). |
-||||
For more information about how the ISE model works with operations that run inside other operations such as loops, process multiple items such as arrays, and retry policies, review [Other operation behavior](#other-operation-behavior).
The following table summarizes how the Consumption, Standard, and ISE models han
|--|-|-|-|--| | [Loop actions](logic-apps-control-flow-loops.md) | A loop action, such as the **For each** or **Until** loop, can include other actions that run during each loop cycle. | Except for the initial number of included built-in operations, the loop action and each action in the loop are metered each time the loop cycle runs. If an action processes any items in a collection, such as a list or array, the number of items is also used in the metering calculation. <p><p>For example, suppose you have a **For each** loop with actions that process a list. The service multiplies the number of list items against the number of actions in the loop, and adds the action that starts the loop. So, the calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the included built-in operations, same as the Consumption model. | Not metered or billed. | | [Retry policies](logic-apps-exception-handling.md#retry-policies) | On supported operations, you can implement basic exception and error handling by setting up a [retry policy](logic-apps-exception-handling.md#retry-policies). | Except for the initial number of built-in operations, the original execution plus each retried execution are metered. For example, an action that executes with 5 retries is metered and billed as 6 executions. <p><p>Pricing is based on whether the operation types are built-in, Standard, or Enterprise. | Except for the built-in included operations, same as the Consumption model. | Not metered or billed. |
-||||||
<a name="storage-operations"></a>
The following table summarizes how the Consumption, Standard, and ISE models han
| Consumption (multi-tenant) | Storage resources and usage are attached to the logic app resource. | Metering and billing *apply only to data retention-related storage consumption* and follow the [data retention pricing for the Consumption plan](https://azure.microsoft.com/pricing/details/logic-apps). | | Standard (single-tenant) | You can use your own Azure [storage account](../azure-functions/storage-considerations.md#storage-account-requirements), which gives you more control and flexibility over your workflow's data. | Metering and billing follow the [Azure Storage pricing model](https://azure.microsoft.com/pricing/details/storage/). Storage costs appear separately on your Azure billing invoice. <p><p>**Tip**: To help you better understand the number of storage operations that a workflow might run and their cost, try using the [Logic Apps Storage calculator](https://logicapps.azure.com/calculator). Select either a sample workflow or use an existing workflow definition. The first calculation estimates the number of storage operations in your workflow. You can then use these numbers to estimate possible costs using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). For more information, review [Estimate storage needs and costs for workflows in single-tenant Azure Logic Apps](estimate-storage-costs.md). | | Integration service environment (ISE) | Storage resources and usage are attached to the logic app resource. | Not metered or billed. |
-||||
For more information, review the following documentation:
The following table summarizes how the Consumption, Standard, and ISE models han
| Consumption (multi-tenant) | Metering and billing use the [integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/), based on the account tier that you use. | | Standard (single-tenant) | Metering and billing use the [integration account pricing](https://azure.microsoft.com/pricing/details/logic-apps/), based on the account tier that you use. | | ISE | This model includes a single integration account, based on your ISE SKU. For an [extra cost](https://azure.microsoft.com/pricing/details/logic-apps/), you can create more integration accounts for your ISE to use up to the [total ISE limit](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). |
-|||
For more information, review the following documentation:
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Previously updated : 03/10/2020 Last updated : 04/18/2023 + #Customer intent: As a data scientist, I want to learn how to provision the Linux DSVM so that I can move my existing workflow to the cloud.
Here are the steps to create an instance of the Ubuntu 20.04 Data Science Virtua
## How to access the Ubuntu Data Science Virtual Machine
-You can access the Ubuntu DSVM in one of three ways:
+You can access the Ubuntu DSVM in one of four ways:
* SSH for terminal sessions
+ * xrdp for graphical sessions
* X2Go for graphical sessions * JupyterHub and JupyterLab for Jupyter notebooks ### SSH
-If you configured your VM with SSH authentication, you can log on using the account credentials that you created in the **Basics** section of step 3 for the text shell interface. On Windows, you can download an SSH client tool like [PuTTY](https://www.putty.org). If you prefer a graphical desktop (X Window System), you can use X11 forwarding on PuTTY.
+If you configured your VM with SSH authentication, you can log on using the account credentials that you created in the **Basics** section of step 3 for the text shell interface. [Learn more about connecting to a Linux VM](../../virtual-machines/linux-vm-connect.md).
-> [!NOTE]
-> The X2Go client performed better than X11 forwarding in testing. We recommend using the X2Go client for a graphical desktop interface.
+### xrdp
+xrdp is the standard tool for accessing Linux graphical sessions. While it isn't included in the distribution by default, you can [install it by following these instructions](../../virtual-machines/linux/use-remote-desktop.md).
### X2Go
+> [!NOTE]
+> The X2Go client performed better than X11 forwarding in testing. We recommend using the X2Go client for a graphical desktop interface.
The Linux VM is already provisioned with X2Go Server and ready to accept client connections. To connect to the Linux VM graphical desktop, complete the following procedure on your client:
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
description: Overview of Azure Data Science Virtual Machine - An easy to use vir
keywords: data science tools, data science virtual machine, tools for data science, linux data science - + Last updated 06/23/2022
Last updated 06/23/2022
# What is the Azure Data Science Virtual Machine for Linux and Windows?
-The Data Science Virtual Machine (DSVM) is a customized VM image on the Azure cloud platform built specifically for doing data science. It has many popular data science tools preinstalled and pre-configured to jump-start building intelligent applications for advanced analytics.
+The Data Science Virtual Machine (DSVM) is a customized VM image on the Azure cloud platform built specifically for doing data science. It has many popular data science tools preinstalled and preconfigured to jump-start building intelligent applications for advanced analytics.
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview.
The DSVM is available on:
+ Windows Server 2019 + Ubuntu 20.04 LTS
-Additionally, we are excited to offer Azure DSVM for PyTorch (preview), which is an Ubuntu 20.04 image from Azure Marketplace that is optimized for large, distributed deep learning workloads. It comes preinstalled and validated with the latest PyTorch version to reduce setup costs and accelerate time to value. It comes packaged with various optimization functionalities (ONNX Runtime​, DeepSpeed​, MSCCL​, ORTMoE​, Fairscale​, Nvidia Apex​), as well as an up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA.
+Additionally, we're excited to offer Azure DSVM for PyTorch (preview), which is an Ubuntu 20.04 image from Azure Marketplace that is optimized for large, distributed deep learning workloads. It comes preinstalled and validated with the latest PyTorch version to reduce setup costs and accelerate time to value. It also packages various optimization functionalities (ONNX Runtime, DeepSpeed, MSCCL, ORTMoE, Fairscale, Nvidia Apex) and an up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, and CUDA.
## Comparison with Azure Machine Learning
The DSVM is a customized VM image for Data Science but [Azure Machine Learning](
[Azure Machine Learning Compute Instances](../concept-compute-instance.md) are a fully configured and __managed__ VM image whereas the DSVM is an __unmanaged__ VM.
-The key differences between these two product offerings are detailed below:
-
+Key differences between these offerings:
|Feature |Data Science<br>VM |AzureML<br>Compute Instance | ||||
The key differences between these two product offerings are detailed below:
|Built-in<br>Hosted Notebooks | No<br>(requires additional configuration) | Yes | |Built-in SSO | No <br>(requires additional configuration) | Yes | |Built-in Collaboration | No | Yes |
-|Pre-installed Tools | Jupyter(lab), VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
+|Preinstalled Tools | Jupyter(lab), VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
## Sample use cases
-Below we illustrate some common use cases for DSVM customers.
+Here are some common use cases for DSVM customers.
### Short-term experimentation and evaluation
You can use the DSVM to evaluate or learn new data science [tools](./tools-inclu
### Deep learning with GPUs
-In the DSVM, your training models can use deep learning algorithms on hardware that's based on graphics processing units (GPUs). By taking advantage of the VM scaling capabilities of the Azure platform, the DSVM helps you use GPU-based hardware in the cloud according to your needs. You can switch to a GPU-based VM when you're training large models, or when you need high-speed computations while keeping the same OS disk. You can choose any of the N series GPUs enabled virtual machine SKUs with DSVM. Note GPU enabled virtual machine SKUs are not supported on Azure free accounts.
+In the DSVM, your training models can use deep learning algorithms on hardware that's based on graphics processing units (GPUs). By taking advantage of the VM scaling capabilities of the Azure platform, the DSVM helps you use GPU-based hardware in the cloud according to your needs. You can switch to a GPU-based VM when you're training large models, or when you need high-speed computations while keeping the same OS disk. You can choose any of the N-series GPU-enabled virtual machine SKUs with the DSVM. Note that GPU-enabled virtual machine SKUs aren't supported on Azure free accounts.
-The Windows editions of the DSVM come pre-installed with GPU drivers, frameworks, and GPU versions of deep learning frameworks. On the Linux editions, deep learning on GPUs is enabled on the Ubuntu DSVMs.
+The Windows editions of the DSVM come preinstalled with GPU drivers, frameworks, and GPU versions of deep learning frameworks. On the Linux editions, deep learning on GPUs is enabled on the Ubuntu DSVMs.
-You can also deploy the Ubuntu or Windows editions of the DSVM to an Azure virtual machine that isn't based on GPUs. In this case, all the deep learning frameworks will fall back to the CPU mode.
+You can also deploy the Ubuntu or Windows editions of the DSVM to an Azure virtual machine that isn't based on GPUs. In this case, all the deep learning frameworks fall back to CPU mode.
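As a quick illustration, the following sketch (assuming PyTorch, which ships on the DSVM images) shows how a framework detects whether a GPU is present and otherwise falls back to the CPU:

```python
# Minimal check of whether PyTorch sees a GPU on this VM.
# On a VM without a GPU, the device resolves to "cpu" and training runs in CPU mode.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
if device == "cuda":
    print(torch.cuda.get_device_name(0))
```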
[Learn more about available deep learning and AI frameworks](dsvm-tools-deep-learning-frameworks.md).
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
- Previously updated : 06/23/2022+ Last updated : 04/18/2023
You can exit Rattle and R. Now you can modify the generated R script. Or, use th
## Next steps
-Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
+Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Previously updated : 12/14/2021 Last updated : 04/18/2023
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
AutoML uses machine learning best practices, such as cross-validated model selec
- The training data uses **features that are not known into the future**, up to the forecast horizon. AutoML's regression models currently assume all features are known to the forecast horizon. We advise you to explore your data prior to training and remove any feature columns that are only known historically. - There are **significant structural differences - regime changes - between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021; this is a classic example of a regime change. Over-fitting due to regime change is the most challenging issue to address because it's highly scenario dependent and can require deep knowledge to identify. As a first line of defense, try to reserve 10 - 20% of the total history for validation, or cross-validation, data. It isn't always possible to reserve this amount of validation data if the training history is short, but is a best practice. See our guide on [configuring validation](./how-to-auto-train-forecast.md#training-and-validation-data) for more information.
+## What does it mean if my training job achieves perfect validation scores?
+
+It's possible to see perfect scores when viewing validation metrics from a training job. A perfect score means that the forecast and the actuals on the validation set are the same, or very nearly the same. For example, a root mean squared error equal to 0.0 or an R2 score of 1.0. A perfect validation score is _usually_ an indicator that the model is severely overfit, likely due to [data leakage](#how-can-i-prevent-over-fitting-and-data-leakage). The best course of action is to inspect the data for leaks and drop the column(s) that are causing the leak.
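As an illustration, here's a minimal pandas sketch for spotting such a leak before training; the file and column names (`data.csv`, `demand`) are hypothetical placeholders:

```python
# Illustrative only: flag numeric columns that are suspiciously correlated with the target.
import pandas as pd

df = pd.read_csv("data.csv")          # hypothetical training data
target = "demand"                     # hypothetical target column

corr = df.corr(numeric_only=True)[target].drop(target).abs().sort_values(ascending=False)
print(corr.head())                    # values near 1.0 suggest the column leaks the target

leaky_columns = corr[corr > 0.99].index.tolist()
df = df.drop(columns=leaky_columns)   # drop the offending columns before training
```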
## What if my time series data doesn't have regularly spaced observations?
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Title: Use connections (preview)
+ Title: Create connections to external data sources (preview)
description: Learn how to use connections to connect to External data sources for training with Azure Machine Learning.
Previously updated : 04/11/2023 Last updated : 04/18/2023 # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
In this article, learn how to connect to data sources located outside of Azure,
- An Azure Machine Learning workspace.
-> [!NOTE]
+> [!IMPORTANT]
> An Azure Machine Learning connection securely stores the credentials passed during connection creation in the Workspace Azure Key Vault. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you **avoid** credential storage in a YAML file, because a security breach could lead to a credential leak.
+> [!NOTE]
+> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for the SDK, and the ml extension (version 2.15.1 or later) for the CLI.
+>
+> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI below:
+
+### Code versions
+
+# [SDK](#tab/SDK)
+
+```bash
+pip uninstall azure-ai-ml
+pip install azure-ai-ml
+pip show azure-ai-ml #(the version value needs to be 1.5.0 or later)
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az extension remove -n ml
+az extension add -n ml --yes
+az extension show -n ml #(the version value needs to be 2.15.1 or later)
+```
+++ ## Create a Snowflake DB connection # [CLI: Username/password](#tab/cli-username-password)
from azure.ai.ml.entities import UsernamePasswordConfiguration
target= "jdbc:snowflake://<myaccount>.snowflakecomputing.com/?db=<mydb>&warehouse=<mywarehouse>&role=<myrole>" # add the Snowflake account, database, warehouse name and role name here. If no role name provided it will default to PUBLIC-
-wps_connection = WorkspaceConnection(type="snowflake",
+name = "<my_snowflake_connection>" # name of the connection
+wps_connection = WorkspaceConnection(name= name,
+type="snowflake",
target= target, credentials= UsernamePasswordConfiguration(username="XXXXX", password="XXXXXX") )
from azure.ai.ml.entities import UsernamePasswordConfiguration
target= "Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30" # add the sql servername, port addresss and database
-wps_connection = WorkspaceConnection(type="azure_sql_db",
+name = "<my_sql_connection>" # name of the connection
+wps_connection = WorkspaceConnection(name= name,
+type="azure_sql_db",
target= target, credentials= UsernamePasswordConfiguration(username="XXXXX", password="XXXXXX") )
from azure.ai.ml.entities import WorkspaceConnection
from azure.ai.ml.entities import AccessKeyConfiguration target = "https://<mybucket>.amazonaws.com" # add the s3 bucket details
-wps_connection = WorkspaceConnection(type="s3",
+name = "<my_s3_connection>" # name of the connection
+wps_connection = WorkspaceConnection(name=name,
+type="s3",
target= target, credentials= AccessKeyConfiguration(access_key_id="XXXXXX", secret_access_key="XXXXXXXX") )
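After constructing a `WorkspaceConnection` object in any of the SDK samples above, it still has to be registered in the workspace. Here's a minimal sketch, assuming you authenticate with `DefaultAzureCredential` and fill in your own subscription, resource group, and workspace names:

```python
# Illustrative sketch: register the connection created above in the workspace.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
ml_client.connections.create_or_update(workspace_connection=wps_connection)
```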
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Title: Import Data (preview)
+ Title: Import data (preview)
-description: Learn how to import data from external sources on to Azure Machine Learning platform
+description: Learn how to import data from external sources to the Azure Machine Learning platform.
Previously updated : 04/12/2023 Last updated : 04/18/2023
To create and work with data assets, you need:
* [Workspace connections created](how-to-connection.md)
-## Importing from external database sources / import from external sources to create a mltable data asset
+> [!NOTE]
+> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for the SDK, and the ml extension (version 2.15.1 or later) for the CLI.
+>
+> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI below:
+
+### Code versions
+
+# [SDK](#tab/SDK)
-> [!NOTE]
+```bash
+pip uninstall azure-ai-ml
+pip install azure-ai-ml
+pip show azure-ai-ml #(the version value needs to be 1.5.0 or later)
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az extension remove -n ml
+az extension add -n ml --yes
+az extension show -n ml #(the version value needs to be 2.15.1 or later)
+```
+++
+## Import data from an external database source as a table data asset
+
+> [!NOTE]
> The external databases can have Snowflake, Azure SQL, etc. formats. The following code samples can import data from external databases. The `connection` that handles the import action determines the external database data source metadata. In this sample, the code imports data from a Snowflake resource. The connection points to a Snowflake source. With a little modification, the connection can point to an Azure SQL database source instead. The imported asset `type` from an external database source is `mltable`.
from azure.ai.ml import MLClient
# Supported connections include: # Connection: azureml:<workspace_connection_name> # Supported paths include:
-# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
ml_client = MLClient.from_config()
ml_client.data.import_data(data_import=data_import)
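Because only fragments of the SDK sample appear above, here's a fuller sketch of what the import call can look like. The `Database` source class, connection name, query, and path are illustrative assumptions; adjust them for your environment:

```python
# Illustrative sketch: import from an external database (via a workspace connection)
# as an mltable data asset. All placeholder values below are assumptions.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import DataImport
from azure.ai.ml.data_transfer import Database

ml_client = MLClient.from_config()

data_import = DataImport(
    name="my_snowflake_import",                        # name of the resulting data asset
    source=Database(
        connection="azureml:my_snowflake_connection",  # workspace connection created earlier
        query="SELECT * FROM MY_TABLE",
    ),
    path="azureml://datastores/workspaceblobstore/paths/snowflake/${{name}}",
)
ml_client.data.import_data(data_import=data_import)
```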
-## Import data from external data and file system resources to create a uri_folder data asset
+## Import data from an external file system source as a folder data asset
> [!NOTE] > An Amazon S3 data resource can serve as an external file system resource.
$schema: http://azureml/sdk-2-0/DataImport.json
# Supported connections include: # Connection: azureml:<workspace_connection_name> # Supported paths include:
-# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
type: uri_folder
from azure.ai.ml import MLClient
# Supported connections include: # Connection: azureml:<workspace_connection_name> # Supported paths include:
-# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
ml_client = MLClient.from_config()
ml_client.data.show_materialization_status(name="<name>")
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job) - [Working with tables in Azure Machine Learning](how-to-mltable.md)-- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
+- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Securely Attach Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-securely-attach-databricks.md
+
+ Title: Attach a secured Azure Databricks compute
+
+description: Use a private endpoint to attach an Azure Databricks compute to an Azure Machine Learning workspace configured for network isolation.
++++++ Last updated : 01/19/2023++
+monikerRange: 'azureml-api-2 || azureml-api-1'
++
+# Attach an Azure Databricks compute that is secured in a virtual network (VNet)
+
+Both Azure Machine Learning and Azure Databricks can be secured by using a VNet to restrict incoming and outgoing network communication. When both services are configured to use a VNet, you can use a private endpoint to allow Azure Machine Learning to attach Azure Databricks as a compute resource.
+
+The information in this article assumes that your Azure Machine Learning workspace and Azure Databricks are configured for two separate Azure Virtual Networks. To enable communication between the two services, Azure Private Link is used. A private endpoint for each service is created in the VNet for the other service. A private endpoint for Azure Machine Learning is added to communicate with the VNet used by Azure Databricks. A private endpoint for Azure Databricks is added to communicate with the VNet used by Azure Machine Learning.
++
+## Prerequisites
+
+* An Azure Machine Learning workspace that is configured for network isolation.
+
+* An [Azure Databricks deployment that is configured in a virtual network (VNet injection)](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
+
+ > [!IMPORTANT]
+ > Azure Databricks requires two subnets (sometimes called the private and public subnet). Both of these subnets are delegated, and cannot be used by the Azure Machine Learning workspace when creating a private endpoint. We recommend adding a third subnet to the VNet used by Azure Databricks and using this subnet for the private endpoint.
+
+* The VNets used by Azure Machine Learning and Azure Databricks must use a different set of IP address ranges.
+
+## Limitations
+
+Scenarios where the Azure Machine Learning control plane needs to communicate with the Azure Databricks control plane aren't supported. Currently, the only scenario we have identified where this is a problem is when using the [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep) in a machine learning pipeline. To work around this limitation, allow public access to your workspace, either by using a workspace that isn't configured with a private link or by using a workspace with a private link that is [configured to allow public access](how-to-configure-private-link.md#enable-public-access).
+
+## Create a private endpoint for Azure Machine Learning
+
+To allow the Azure Machine Learning workspace to communicate with the VNet that Azure Databricks is using, use the following steps:
+
+1. From the [Azure portal](https://portal.azure.com), select your __Azure Machine Learning workspace__.
+
+1. From the sidebar, select __Networking__, __Private endpoint connections__, and then __+ Private endpoint__.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/add-private-endpoint.png" alt-text="Screenshot of the private endpoints connection page.":::
+
+1. From the __Create a private endpoint__ form, enter a name for the new private endpoint. Adjust the other values as needed by your scenario.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/private-endpoint-basics.png" alt-text="Screenshot of the basics section of the private endpoint wizard.":::
+
+1. Select __Next__ until you arrive at the __Virtual Network__ tab. Select the __Virtual network__ that is used by __Azure Databricks__, and the __Subnet__ to connect to using the private endpoint.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/private-endpoint-virtual-network.png" alt-text="Screenshot of the virtual network section of the private endpoint wizard.":::
+
+1. Select __Next__ until you can select __Create__ to create the resource.
+
+## Create a private endpoint for Azure Databricks
+
+To allow Azure Databricks to communicate with the VNet that the Azure Machine Learning workspace is using, use the following steps:
+
+1. From the [Azure portal](https://portal.azure.com), select your __Azure Databricks instance__.
+
+1. From the sidebar, select __Networking__, __Private endpoint connections__, and then __+ Private endpoint__.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/databricks-add-private-endpoint.png" alt-text="Screenshot of the private endpoints connection page for Azure Databricks.":::
+
+1. From the __Create a private endpoint__ form, enter a name for the new private endpoint. Adjust the other values as needed by your scenario.
+
+1. Select __Next__ until you arrive at the __Virtual Network__ tab. Select the __Virtual network__ that is used by __Azure Machine Learning__, and the __Subnet__ to connect to using the private endpoint.
+
+## Attach the Azure Databricks compute
+
+1. From [Azure Machine Learning studio](https://ml.azure.com), select your workspace and then select __Compute__ from the sidebar. Select __Attached computes__, __+ New__, and then __Azure Databricks__.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/add-attached-compute.png" alt-text="Screenshot of the add a compute page.":::
+
+1. From the __Attach Databricks compute__ form, provide the following information:
+
+ * __Compute name__: The name of the compute you're adding. This value can be different than the name of your Azure Databricks workspace.
+ * __Subscription__: The subscription that contains the Azure Databricks workspace.
+ * __Databricks workspace__: The Azure Databricks workspace that you're attaching.
+ * __Databricks access token__: For information on generating a token, see [Azure Databricks personal access tokens](/azure/databricks/dev-tools/auth#pat).
+
+ Select __Attach__ to complete the process.
+
+ :::image type="content" source="./media/how-to-securely-attach-databricks/attach-databricks.png" alt-text="Screenshot of the attach Databricks compute page.":::
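If you'd rather script the attach step than use the studio, a rough sketch with the v1 `azureml-core` SDK is shown below; the resource names are placeholders, and you should confirm the method signatures against the SDK reference for your version:

```python
# Illustrative sketch (SDK v1): attach an Azure Databricks workspace as a compute target.
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, DatabricksCompute

ws = Workspace.from_config()

attach_config = DatabricksCompute.attach_configuration(
    resource_group="<databricks-resource-group>",   # resource group of the Databricks workspace
    workspace_name="<databricks-workspace-name>",   # the Databricks workspace to attach
    access_token="<databricks-access-token>",       # personal access token
)
databricks_compute = ComputeTarget.attach(ws, "<compute-name>", attach_config)
databricks_compute.wait_for_completion(show_output=True)
```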
+
+## Next steps
+
+* [Manage compute resources for training and deployment](how-to-create-attach-compute-studio.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
This issue can happen when there's no package found that matches the version you
* [pypi](https://aka.ms/azureml/environment/pypi) * [Installing Python Modules](https://docs.python.org/3/installing/https://docsupdatetracker.net/index.html)
+### Invalid wheel filename
+<!--issueDescription-->
+This issue can happen when you've specified a wheel file incorrectly.
+
+**Potential causes:**
+* You spelled the wheel filename incorrectly or used improper formatting
+* The wheel file you specified can't be found
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Ensure that you've spelled the filename correctly and that it exists
+* Ensure that you're following the [format for wheel filenames](https://peps.python.org/pep-0491/#file-format)
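For instance, here's a rough way to sanity-check a wheel filename against the expected `{distribution}-{version}(-{build})?-{python tag}-{abi tag}-{platform tag}.whl` shape (an illustrative regular expression, not the full PEP 491 grammar):

```python
# Illustrative only: a rough shape check for wheel filenames, not the complete PEP 491 grammar.
import re

WHEEL_PATTERN = re.compile(
    r"^(?P<distribution>[^-]+)-(?P<version>[^-]+)"
    r"(-(?P<build>\d[^-]*))?"
    r"-(?P<python>[^-]+)-(?P<abi>[^-]+)-(?P<platform>[^-]+)\.whl$"
)

print(bool(WHEEL_PATTERN.match("my_package-1.0.0-py3-none-any.whl")))    # True
print(bool(WHEEL_PATTERN.match("my_package-1.0.0-py3-none-any.wheel")))  # False: wrong extension
```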
+ ## *Make issues* ### No targets specified and no makefile found <!--issueDescription-->
machine-learning Reference Automated Ml Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automated-ml-forecasting.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `frequency` | string | The frequency at which the forecast generation is desired, for example daily, weekly, yearly, etc. <br>If it isn't specified or set to None, then its default value is inferred from the dataset time index. The user can set its value greater than dataset's inferred frequency, but not less than it. For example, if dataset's frequency is daily, it can take values like daily, weekly, monthly, but not hourly as hourly is less than daily(24 hours).<br> Refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.| | `None` | | `time_series_id_column_names` | string or list(strings) | The names of columns in the data to be used to group data into multiple time series. If time_series_id_column_names is not defined or set to None, the Automated ML uses auto-detection logic to detect the columns.| | `None` | | `feature_lags` | string | Represents if user wants to generate lags automatically for the provided numeric features. The default is set to `auto`, meaning that Automated ML uses autocorrelation-based heuristics to automatically select lag orders and generate corresponding lag features for all numeric features. "None" means no lags are generated for any numeric features.| `'auto'`, `None` | `None` |
-| `country_or_region_for_holidays` | string | The country or region to be used to generate holiday features. These characters should be represented in ISO 3166 two-letter country/region codes, for example 'US' or 'GB'. The list of the ISO codes can be found here: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes| | `None` |
+| `country_or_region_for_holidays` | string | The country or region to be used to generate holiday features. These characters should be represented in ISO 3166 two-letter country/region codes, for example 'US' or 'GB'. The list of the ISO codes can be found at [https://wikipedia.org/wiki/List_of_ISO_3166_country_codes](https://wikipedia.org/wiki/List_of_ISO_3166_country_codes). | | `None` |
| `cv_step_size` | string or integer | The number of periods between the origin_time of one CV fold and the next fold. For example, if it is set to 3 for daily data, the origin time for each fold is three days apart. If it set to None or not specified, then it's set to `auto` by default. If it is of integer type, minimum value it can take is 1 else it raises an error. | `auto`, [int] | `auto` | | `seasonality` | string or integer | The time series seasonality as an integer multiple of the series frequency. If seasonality is not specified, its value is set to `'auto'`, meaning it is inferred automatically by Automated ML. If this parameter is not set to `None`, the Automated ML assumes time series as non-seasonal, which is equivalent to setting it as integer value 1. | `'auto'`, [int] | `auto` | | `short_series_handling_config` | string | Represents how Automated ML should handle short time series if specified. It takes following values: <br><ul><li>`'auto'` : short series is padded if there are no long series, otherwise short series is dropped.</li><li>`'pad'`: all the short series is padded with zeros.</li><li>`'drop'`: all the short series is dropped.</li><li> `None`: the short series is not modified.</li><ul>| `'auto'`, `'pad'`, `'drop'`, `None` | `auto` |
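To show how a few of these settings fit together in the Python SDK, here's a sketch of a forecasting job configuration; the data asset, column names, and values are placeholders, and the exact helper signatures should be checked against the SDK reference:

```python
# Illustrative sketch: forecasting-specific settings in an AutoML job (azure-ai-ml SDK).
from azure.ai.ml import automl, Input

forecasting_job = automl.forecasting(
    training_data=Input(type="mltable", path="azureml:my-training-data:1"),  # placeholder
    target_column_name="demand",                                             # placeholder
    primary_metric="normalized_root_mean_squared_error",
)

forecasting_job.set_forecast_settings(
    time_column_name="timestamp",          # placeholder time column
    forecast_horizon=14,
    frequency="D",                         # daily data
    country_or_region_for_holidays="US",   # ISO 3166 two-letter code
    cv_step_size="auto",
)
```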
machine-learning Reference Yaml Component Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-pipeline.md
The `az ml component` commands can be used for managing Azure Machine Learning c
## Examples
-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/lochen/pipeline-component-pup/cli/jobs/pipelines-with-components/pipeline_with_pipeline_component).
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/pipeline_with_pipeline_component).
## Next steps
managed-instance-apache-cassandra Network Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/network-rules.md
# Required outbound network rules
-The Azure Managed Instance for Apache Casandra service requires certain network rules to properly manage the service. By ensuring you have the proper rules exposed, you can keep your service secure and prevent operational issues.
+The Azure Managed Instance for Apache Cassandra service requires certain network rules to properly manage the service. By ensuring you have the proper rules exposed, you can keep your service secure and prevent operational issues.
+
+> [!WARNING]
+> We recommend exercising caution when applying changes to firewall rules for an existing cluster. Rule changes might not affect existing connections, so an incorrectly applied rule can appear to cause no problems at first, while automatic updates of the Cassandra Managed Instance nodes may subsequently fail. We recommend monitoring connectivity for some time after any major firewall updates to ensure there are no issues.
## Virtual network service tags
-If you are using Azure Firewall to restrict outbound access, we highly recommend using [virtual network service tags](../virtual-network/service-tags-overview.md). Below are the tags required to make Azure Managed Instance for Apache Cassandra function properly.
+If you're using Azure Firewall to restrict outbound access, we highly recommend using [virtual network service tags](../virtual-network/service-tags-overview.md). The following tags are required to make Azure Managed Instance for Apache Cassandra function properly.
| Destination Service Tag | Protocol | Port | Use | |-|-|||
If you are using Azure Firewall to restrict outbound access, we highly recommend
## User-defined routes
-If you are using a 3rd party Firewall to restrict outbound access, we highly recommend configuring [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md#user-defined) for Microsoft address prefixes, rather than attempting to allow connectivity through your own Firewall. See sample [bash script](https://github.com/Azure-Samples/cassandra-managed-instance-tools/blob/main/configureUDR.sh) to add the required address prefixes in user-defined routes.
+If you're using a third-party Firewall to restrict outbound access, we highly recommend configuring [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md#user-defined) for Microsoft address prefixes, rather than attempting to allow connectivity through your own Firewall. See sample [bash script](https://github.com/Azure-Samples/cassandra-managed-instance-tools/blob/main/configureUDR.sh) to add the required address prefixes in user-defined routes.
## Azure Global required network rules
The system uses DNS names to reach the Azure services described in this article
## Internal port usage
-The following ports are only accessible within the VNET (or peered vnets./express routes). Managed Instance for Apache Cassandra instances do not have a public IP and should not be made accessible on the Internet.
+The following ports are only accessible within the VNet (or peered VNets/ExpressRoute connections). Azure Managed Instance for Apache Cassandra instances do not have a public IP and should not be made accessible on the Internet.
| Port | Use | | - | |
The following ports are only accessible within the VNET (or peered vnets./expres
## Next steps
-In this article, you learned about network rules to properly manage the service. Learn more about Azure Managed Instance for Apache Cassandra with the following articles:
+In this article, you learned about network rules to properly manage the service. Learn more about Azure Managed Instance for Apache Cassandra with the following articles:
-* [Overview of Azure Managed Instance for Apache Cassandra](introduction.md)
-* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
+* [Overview of Azure Managed Instance for Apache Cassandra](introduction.md)
+* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
mariadb Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-data-in-replication.md
Title: Data-in replication - Azure Database for MariaDB description: Learn about using data-in replication to synchronize from an external server into the Azure Database for MariaDB service. --++ Last updated 06/24/2022
mariadb Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-query-store.md
Title: Query Store - Azure Database for MariaDB
-description: Learn about the Query Store feature in Azure Database for MariaDB to help you track performance over time.
+description: Learn about the Query Store feature in Azure Database for MariaDB to help you track performance over time.
+++ Last updated : 04/18/2023 -- Previously updated : 06/24/2022 + # Monitor Azure Database for MariaDB performance with Query Store **Applies to:** Azure Database for MariaDB 10.2
Query store can be used in many scenarios, including the following:
- Determining the number of times a query was executed in a given time window - Comparing the average execution time of a query across time windows to see large deltas
-## Enabling Query Store
+## Enable Query Store
Query Store is an opt-in feature, so it isn't active by default on a server. The query store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database. ### Enable Query Store using the Azure portal 1. Sign in to the Azure portal and select your Azure Database for MariaDB server.
-2. Select **Server Parameters** in the **Settings** section of the menu.
-3. Search for the query_store_capture_mode parameter.
-4. Set the value to ALL and **Save**.
+1. Select **Server Parameters** in the **Settings** section of the menu.
+1. Search for the query_store_capture_mode parameter.
+1. Set the value to ALL and **Save**.
To enable wait statistics in your Query Store: 1. Search for the query_store_wait_sampling_capture_mode parameter.
-2. Set the value to ALL and **Save**.
+1. Set the value to ALL and **Save**.
Allow up to 20 minutes for the first batch of data to persist in the mysql database.
Or this query for wait statistics:
SELECT * FROM mysql.query_store_wait_stats; ```
-## Finding wait queries
+## Find wait queries
-> [!NOTE]
+> [!NOTE]
> Wait statistics should not be enabled during peak workload hours or be turned on indefinitely for sensitive workloads. <br>For workloads running with high CPU utilization or on servers configured with lower vCores, use caution when enabling wait statistics. It should not be turned on indefinitely. Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics.
Wait event types combine different wait events into buckets by similarity. Query
Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store: | **Observation** | **Action** |
-|||
-|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity, which is executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
-|High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
-|High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.|
+| | |
+| High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity, which is executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level. |
+| High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
+| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
## Configuration options
When Query Store is enabled it saves data in 15-minute aggregation windows, up t
The following options are available for configuring Query Store parameters. | **Parameter** | **Description** | **Default** | **Range** |
-|||||
+| | | | |
| query_store_capture_mode | Turn the query store feature ON/OFF based on the value. Note: If performance_schema is OFF, turning on query_store_capture_mode will turn on performance_schema and a subset of performance schema instruments required for this feature. | ALL | NONE, ALL | | query_store_capture_interval | The query store capture interval in minutes. Allows specifying the interval in which the query metrics are aggregated | 15 | 5 - 60 | | query_store_capture_utility_queries | Turning ON or OFF to capture all the utility queries that is executing in the system. | NO | YES, NO |
The following options are available for configuring Query Store parameters.
The following options apply specifically to wait statistics. | **Parameter** | **Description** | **Default** | **Range** |
-|||||
+| | | | |
| query_store_wait_sampling_capture_mode | Allows turning ON / OFF the wait statistics. | NONE | NONE, ALL | | query_store_wait_sampling_frequency | Alters frequency of wait-sampling in seconds. 5 to 300 seconds. | 30 | 5-300 |
-> [!NOTE]
+> [!NOTE]
> Currently **query_store_capture_mode** supersedes this configuration, meaning both **query_store_capture_mode** and **query_store_wait_sampling_capture_mode** have to be enabled to ALL for wait statistics to work. If **query_store_capture_mode** is turned off, then wait statistics is turned off as well since wait statistics utilizes the performance_schema enabled, and the query_text captured by query store. Use the [Azure portal](howto-server-parameters.md) to get or set a different value for a parameter.
Queries are normalized by looking at their structure after removing literals and
This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID. | **Name** | **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `schema_name`| varchar(64) | NO | Name of the schema |
-| `query_id`| bigint(20) | NO| Unique ID generated for the specific query, if the same query executes in different schema, a new ID will be generated |
-| `timestamp_id` | timestamp| NO| Timestamp in which the query is executed. This is based on the query_store_interval configuration|
-| `query_digest_text`| longtext| NO| The normalized query text after removing all the literals|
-| `query_sample_text` | longtext| NO| First appearance of the actual query with literals|
-| `query_digest_truncated` | bit| YES| Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB|
-| `execution_count` | bigint(20)| NO| The number of times the query got executed for this timestamp ID / during the configured interval period|
-| `warning_count` | bigint(20)| NO| Number of warnings this query generated during the internal|
-| `error_count` | bigint(20)| NO| Number of errors this query generated during the interval|
-| `sum_timer_wait` | double| YES| Total execution time of this query during the interval|
-| `avg_timer_wait` | double| YES| Average execution time for this query during the interval|
-| `min_timer_wait` | double| YES| Minimum execution time for this query|
-| `max_timer_wait` | double| YES| Maximum execution time|
-| `sum_lock_time` | bigint(20)| NO| Total amount of time spent for all the locks for this query execution during this time window|
-| `sum_rows_affected` | bigint(20)| NO| Number of rows affected|
-| `sum_rows_sent` | bigint(20)| NO| Number of rows sent to client|
-| `sum_rows_examined` | bigint(20)| NO| Number of rows examined|
-| `sum_select_full_join` | bigint(20)| NO| Number of full joins|
-| `sum_select_scan` | bigint(20)| NO| Number of select scans |
-| `sum_sort_rows` | bigint(20)| NO| Number of rows sorted|
-| `sum_no_index_used` | bigint(20)| NO| Number of times when the query didn't use any indexes|
-| `sum_no_good_index_used` | bigint(20)| NO| Number of times when the query execution engine didn't use any good indexes|
-| `sum_created_tmp_tables` | bigint(20)| NO| Total number of temp tables created|
-| `sum_created_tmp_disk_tables` | bigint(20)| NO| Total number of temp tables created in disk (generates I/O)|
-| `first_seen` | timestamp| NO| The first occurrence (UTC) of the query during the aggregation window|
-| `last_seen` | timestamp| NO| The last occurrence (UTC) of the query during this aggregation window|
+| | | | |
+| `schema_name` | varchar(64) | NO | Name of the schema |
+| `query_id` | bigint(20) | NO | Unique ID generated for the specific query, if the same query executes in different schema, a new ID will be generated |
+| `timestamp_id` | timestamp | NO | Timestamp in which the query is executed. This is based on the query_store_interval configuration |
+| `query_digest_text` | longtext | NO | The normalized query text after removing all the literals |
+| `query_sample_text` | longtext | NO | First appearance of the actual query with literals |
+| `query_digest_truncated` | bit | YES | Whether the query text has been truncated. Value will be Yes if the query is longer than 1 KB |
+| `execution_count` | bigint(20) | NO | The number of times the query got executed for this timestamp ID / during the configured interval period |
+| `warning_count` | bigint(20) | NO | Number of warnings this query generated during the internal |
+| `error_count` | bigint(20) | NO | Number of errors this query generated during the interval |
+| `sum_timer_wait` | double | YES | Total execution time of this query during the interval |
+| `avg_timer_wait` | double | YES | Average execution time for this query during the interval |
+| `min_timer_wait` | double | YES | Minimum execution time for this query |
+| `max_timer_wait` | double | YES | Maximum execution time |
+| `sum_lock_time` | bigint(20) | NO | Total amount of time spent for all the locks for this query execution during this time window |
+| `sum_rows_affected` | bigint(20) | NO | Number of rows affected |
+| `sum_rows_sent` | bigint(20) | NO | Number of rows sent to client |
+| `sum_rows_examined` | bigint(20) | NO | Number of rows examined |
+| `sum_select_full_join` | bigint(20) | NO | Number of full joins |
+| `sum_select_scan` | bigint(20) | NO | Number of select scans |
+| `sum_sort_rows` | bigint(20) | NO | Number of rows sorted |
+| `sum_no_index_used` | bigint(20) | NO | Number of times when the query didn't use any indexes |
+| `sum_no_good_index_used` | bigint(20) | NO | Number of times when the query execution engine didn't use any good indexes |
+| `sum_created_tmp_tables` | bigint(20) | NO | Total number of temp tables created |
+| `sum_created_tmp_disk_tables` | bigint(20) | NO | Total number of temp tables created in disk (generates I/O) |
+| `first_seen` | timestamp | NO | The first occurrence (UTC) of the query during the aggregation window |
+| `last_seen` | timestamp | NO | The last occurrence (UTC) of the query during this aggregation window |
### mysql.query_store_wait_stats This view returns wait events data in Query Store. There's one row for each distinct database ID, user ID, query ID, and event.
-| **Name**| **Data Type** | **IS_NULLABLE** | **Description** |
-|||||
-| `interval_start` | timestamp | NO| Start of the interval (15-minute increment)|
-| `interval_end` | timestamp | NO| End of the interval (15-minute increment)|
-| `query_id` | bigint(20) | NO| Generated unique ID on the normalized query (from query store)|
-| `query_digest_id` | varchar(32) | NO| The normalized query text after removing all the literals (from query store) |
-| `query_digest_text` | longtext | NO| First appearance of the actual query with literals (from query store) |
-| `event_type` | varchar(32) | NO| Category of the wait event |
-| `event_name` | varchar(128) | NO| Name of the wait event |
-| `count_star` | bigint(20) | NO| Number of wait events sampled during the interval for the query |
-| `sum_timer_wait_ms` | double | NO| Total wait time (in milliseconds) of this query during the interval |
+| **Name** | **Data Type** | **IS_NULLABLE** | **Description** |
+| | | | |
+| `interval_start` | timestamp | NO | Start of the interval (15-minute increment) |
+| `interval_end` | timestamp | NO | End of the interval (15-minute increment) |
+| `query_id` | bigint(20) | NO | Generated unique ID on the normalized query (from query store) |
+| `query_digest_id` | varchar(32) | NO | The normalized query text after removing all the literals (from query store) |
+| `query_digest_text` | longtext | NO | First appearance of the actual query with literals (from query store) |
+| `event_type` | varchar(32) | NO | Category of the wait event |
+| `event_name` | varchar(128) | NO | Name of the wait event |
+| `count_star` | bigint(20) | NO | Number of wait events sampled during the interval for the query |
+| `sum_timer_wait_ms` | double | NO | Total wait time (in milliseconds) of this query during the interval |
### Functions
-| **Name**| **Description** |
-|||
+| **Name** | **Description** |
+| | |
| `mysql.az_purge_querystore_data(TIMESTAMP)` | Purges all query store data before the given time stamp | | `mysql.az_procedure_purge_querystore_event(TIMESTAMP)` | Purges all wait event data before the given time stamp | | `mysql.az_procedure_purge_recommendation(TIMESTAMP)` | Purges recommendations whose expiration is before the given time stamp |
mariadb Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-ssl-connection-security.md
Title: SSL/TLS connectivity - Azure Database for MariaDB description: Information for configuring Azure Database for MariaDB and associated applications to properly use SSL connections+++ Last updated : 04/18/2023 -- Previously updated : 06/24/2022 # SSL/TLS connectivity in Azure Database for MariaDB Azure Database for MariaDB supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application.
->[!NOTE]
+> [!NOTE]
> Based on feedback from customers, we have extended the root certificate deprecation for our existing Baltimore Root CA until February 15, 2021 (02/15/2021).
-> [!IMPORTANT]
+> [!IMPORTANT]
+> SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more, see [planned certificate updates](concepts-certificate-rotation.md). ## Default settings
Azure Database for MariaDB supports encryption for clients connecting to your da
Azure Database for MariaDB provides the ability to enforce the TLS version for the client connections. To enforce the TLS version, use the **Minimum TLS version** option setting. The following values are allowed for this option setting:
-| Minimum TLS setting | Client TLS version supported |
-|:|-:|
-| TLSEnforcementDisabled (default) | No TLS required |
-| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
-| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
-| TLS1_2 | TLS version 1.2 and higher |
+| Minimum TLS setting | Client TLS version supported |
+| : | : |
+| TLSEnforcementDisabled (default) | No TLS required |
+| TLS1_0 | TLS 1.0, TLS 1.1, TLS 1.2 and higher |
+| TLS1_1 | TLS 1.1, TLS 1.2 and higher |
+| TLS1_2 | TLS version 1.2 and higher |
For example, setting the value of Minimum TLS setting version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting this to 1.2 means that you only allow connections from clients using TLS 1.2+ and all connections with TLS 1.0 and TLS 1.1 will be rejected.
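To illustrate what this means on the client side, here's a minimal Python sketch that builds an SSL context refusing anything below TLS 1.2; the certificate file name is a placeholder, and how the context is handed to your MariaDB driver depends on the driver you use:

```python
# Illustrative only: a client-side SSL context that refuses TLS 1.0 and TLS 1.1.
# "DigiCertGlobalRootG2.crt.pem" is a placeholder path to the trusted root certificate.
import ssl

ctx = ssl.create_default_context(cafile="DigiCertGlobalRootG2.crt.pem")
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # handshakes below TLS 1.2 will fail
print(ctx.minimum_version)
```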
-> [!Note]
+> [!NOTE]
> By default, Azure Database for MariaDB does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`).
->
+>
> Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement. To learn how to set the TLS setting for your Azure Database for MariaDB, refer to [How to configure TLS setting](howto-tls-configurations.md).
As part of the SSL/TLS communication, the cipher suites are validated and only s
### Cipher suite supported
-* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
## Next steps
mariadb Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md
Previously updated : 06/24/2022 Last updated : 04/19/2023 ms.devlang: csharp, golang, java, php, python, ruby
For existing connections, you can bind SSL by right-clicking on the connection i
Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands.
-```bash
+```terminal
mysql.exe -h mydemoserver.mariadb.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem ```
Using the Azure portal, visit your Azure Database for MariaDB server, and then s
### Using Azure CLI You can enable or disable the **ssl-enforcement** parameter by using Enabled or Disabled values respectively in Azure CLI.+ ```azurecli-interactive az mariadb server update --resource-group myresource --name mydemoserver --ssl-enforcement Enabled ```
az mariadb server update --resource-group myresource --name mydemoserver --ssl-e
## Verify the SSL connection Execute the mysql **status** command to verify that you have connected to your MariaDB server using SSL:+ ```sql status ```+ Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is AES256-SHA** ## Sample code
if (mysqli_connect_errno($conn)) {
die('Failed to connect to MySQL: '.mysqli_connect_error()); } ```+ ### Python (MySQLConnector Python) ```python
try:
except mysql.connector.Error as err: print(err) ```+ ### Python (PyMySQL) ```python
client = Mysql2::Client.new(
:ssl_mode => 'required' ) ```+ #### Ruby on Rails ```ruby
var connectionString string
connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom",'myadmin@mydemoserver' , 'yourpassword', 'mydemoserver.mariadb.database.azure.com', 'quickstartdb') db, _ := sql.Open("mysql", connectionString) ```+ ### Java (JDBC) ```java
properties.setProperty("user", 'myadmin@mydemoserver');
properties.setProperty("password", 'yourpassword'); conn = DriverManager.getConnection(url, properties); ```+ ### Java (MariaDB) ```java
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Set up a new project in an Azure subscription.
:::image type="content" source="./media/create-manage-projects/project-details.png" alt-text="Image of Azure Migrate page to input project settings.":::
-Wait a few minutes for the project to deploy.
+Wait for a few minutes for the project to deploy.
## Create a project in a specific region
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
-This tutorial shows you how to discover the servers that are running in your VMware environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server on Hyper-V host, to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, web apps, and SQL Server instances and databases.
+This tutorial shows you how to discover the servers that are running in your Hyper-V environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server on a Hyper-V host to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, web apps, and SQL Server instances and databases.
In this tutorial, you learn how to:
After discovery finishes, you can verify that the servers appear in the portal.
## Next steps - [Assess servers on Hyper-V environment](tutorial-assess-hyper-v.md) for migration to Azure VMs.-- [Review the data](discovered-metadata.md#collected-metadata-for-hyper-v-servers) that the appliance collects during discovery.
+- [Review the data](discovered-metadata.md#collected-metadata-for-hyper-v-servers) that the appliance collects during discovery.
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL - Flexible Server'
+ Title: "Quickstart: Connect using Node.js - Azure Database for MySQL - Flexible Server"
description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL - Flexible Server.+++ Last updated : 04/18/2023 --+
+ - mvc
+ - seo-javascript-september2019
+ - seo-javascript-october2019
+ - devx-track-js
+ - mode-api
ms.devlang: javascript- Previously updated : 01/27/2022 + # Quickstart: Use Node.js to connect and query data in Azure Database for MySQL - Flexible Server [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-In this quickstart, you connect to an Azure Database for MySQL - Flexible Server by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
+In this quickstart, you connect to an Azure Database for MySQL - Flexible Server by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-This topic assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL - Flexible Server.
+This article assumes that you're familiar with developing using Node.js, and that you're new to working with Azure Database for MySQL - Flexible Server.
## Prerequisites
This quickstart uses the resources created in either of these guides as a starti
- [Create an Azure Database for MySQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md) - [Create an Azure Database for MySQL - Flexible Server using Azure CLI](./quickstart-create-server-cli.md)
-> [!IMPORTANT]
+> [!IMPORTANT]
> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-portal.md) or [Azure CLI](./how-to-manage-firewall-cli.md). ## Install Node.js and the MySQL connector
Depending on your platform, follow the instructions in the appropriate section t
### Windows 1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
-2. Make a local project folder such as `nodejsmysql`.
-3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
-4. Run the NPM tool to install the mysql library into the project folder.
-
- ```cmd
- cd c:\nodejsmysql\
- "C:\Program Files\nodejs\npm" install mysql
- "C:\Program Files\nodejs\npm" list
- ```
+1. Make a local project folder such as `nodejsmysql`.
+1. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
+1. Run the NPM tool to install the mysql library into the project folder.
+
+ ```cmd
+ cd c:\nodejsmysql\
+ "C:\Program Files\nodejs\npm" install mysql
+ "C:\Program Files\nodejs\npm" list
+ ```
-5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+1. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
### Linux (Ubuntu)
Depending on your platform, follow the instructions in the appropriate section t
apt-get install -y nodejs ```
-2. Run the following commands to create a project folder `mysqlnodejs` and install the mysql package into that folder.
+1. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
```bash mkdir nodejsmysql
Depending on your platform, follow the instructions in the appropriate section t
npm install --save mysql npm list ```
-3. Verify the installation by checking npm list output text. The version number may vary as new patches are released.
+
+1. Verify the installation by checking npm list output text. The version number may vary as new patches are released.
### macOS 1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer.
-2. Run the following commands to create a project folder `mysqlnodejs` and install the mysql package into that folder.
+1. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
```bash mkdir nodejsmysql
Depending on your platform, follow the instructions in the appropriate section t
npm list ```
-3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+1. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
## Get connection information Get the connection information needed to connect to the Azure Database for MySQL - Flexible Server. You need the fully qualified server name and sign in credentials. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Select the server name.
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+1. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+1. Select the server name.
+1. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
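
If you prefer the Azure CLI, the following minimal sketch retrieves the same values. It assumes a flexible server named `mydemoserver` in a resource group named `myresourcegroup`; both are placeholders, and the portal steps above remain the documented path.

```azurecli
# Placeholder names; returns the fully qualified server name and the admin login name.
az mysql flexible-server show --resource-group myresourcegroup --name mydemoserver \
    --query "{host:fullyQualifiedDomainName, adminLogin:administratorLogin}" --output table
```
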
-## Running the code samples
+## Run the code samples
1. Paste the JavaScript code into new text files, and then save it into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js). 1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the MySQL flexible server and database.
-1. **Obtain SSL certificate**: To use encrypted connections with your client applications,you will need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) which is also available in Azure portal Networking blade as shown in the screenshot below.
- :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal.":::
-
- Save the certificate file to your preferred location.
-
+1. **Obtain SSL certificate**: To use encrypted connections with your client applications, you'll need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), which is also available in the Azure portal Networking blade, as shown in the following screenshot.
+ :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal." lightbox="./media/how-to-connect-tls-ssl/download-ssl.png":::
+
+Save the certificate file to your preferred location.
+ 1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file. This will allow the application to connect securely to the database over SSL. 1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`. 1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
-1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
+1. On Windows, if the node application isn't in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
## Connect, create table, and insert data Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against MySQL database.
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against MySQL database.
```javascript const mysql = require('mysql');
var config =
const conn = new mysql.createConnection(config); conn.connect(
- function (err) {
- if (err) {
+ function (err) {
+ if (err) {
console.log("!!! Cannot connect !!! Error:"); throw err; }
conn.connect(
function queryDatabase() {
- conn.query('DROP TABLE IF EXISTS inventory;',
- function (err, results, fields) {
- if (err) throw err;
+ conn.query('DROP TABLE IF EXISTS inventory;',
+ function (err, results, fields) {
+ if (err) throw err;
console.log('Dropped inventory table if existed.'); } )
- conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
+ conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
function (err, results, fields) { if (err) throw err; console.log('Created inventory table.'); } )
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
function (err, results, fields) { if (err) throw err; else console.log('Inserted ' + results.affectedRows + ' row(s).'); } )
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 250],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 250],
function (err, results, fields) { if (err) throw err; console.log('Inserted ' + results.affectedRows + ' row(s).'); } )
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
function (err, results, fields) { if (err) throw err; console.log('Inserted ' + results.affectedRows + ' row(s).'); } )
- conn.end(function (err) {
+ conn.end(function (err) {
if (err) throw err;
- else console.log('Done.')
+ else console.log('Done.')
}); }; ``` ## Read data
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query.
var config =
const conn = new mysql.createConnection(config); conn.connect(
- function (err) {
- if (err) {
+ function (err) {
+ if (err) {
console.log("!!! Cannot connect !!! Error:"); throw err; }
conn.connect(
}); function readData(){
- conn.query('SELECT * FROM inventory',
+ conn.query('SELECT * FROM inventory',
function (err, results, fields) { if (err) throw err; else console.log('Selected ' + results.length + ' row(s).');
function readData(){
console.log('Done.'); }) conn.end(
- function (err) {
+ function (err) {
if (err) throw err;
- else console.log('Closing connection.')
+ else console.log('Closing connection.')
}); }; ``` ## Update data
-Use the following code to connect and update the data by using an **UPDATE** SQL statement.
+Use the following code to connect and update the data by using an **UPDATE** SQL statement.
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
```javascript const mysql = require('mysql');
var config =
const conn = new mysql.createConnection(config); conn.connect(
- function (err) {
- if (err) {
+ function (err) {
+ if (err) {
console.log("!!! Cannot connect !!! Error:"); throw err; }
conn.connect(
}); function updateData(){
- conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [75, 'banana'],
+ conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [75, 'banana'],
function (err, results, fields) { if (err) throw err; else console.log('Updated ' + results.affectedRows + ' row(s).'); }) conn.end(
- function (err) {
+ function (err) {
if (err) throw err;
- else console.log('Done.')
+ else console.log('Done.')
}); }; ``` ## Delete data
-Use the following code to connect and delete data by using a **DELETE** SQL statement.
-
-The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
```javascript const mysql = require('mysql');
var config =
const conn = new mysql.createConnection(config); conn.connect(
- function (err) {
- if (err) {
+ function (err) {
+ if (err) {
console.log("!!! Cannot connect !!! Error:"); throw err; }
conn.connect(
}); function deleteData(){
- conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
+ conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
function (err, results, fields) { if (err) throw err; else console.log('Deleted ' + results.affectedRows + ' row(s).'); }) conn.end(
- function (err) {
+ function (err) {
if (err) throw err;
- else console.log('Done.')
+ else console.log('Done.')
}); }; ```
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/select-right-deployment-type.md
Title: Selecting the right deployment type - Azure Database for MySQL description: This article describes what factors to consider before you deploy Azure Database for MySQL as either infrastructure as a service (IaaS) or platform as a service (PaaS).--++ Previously updated : 03/27/2023 Last updated : 04/18/2023
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md
description: The article demonstrates how to migrate to Azure Database for MySQL
--++ Last updated 06/20/2022
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md
description: This article explains two common ways to back up and restore databa
--++ Last updated 06/20/2022
This article explains two common ways to back up and restore databases in your A
- Dump and restore from the command-line (using mysqldump) - Dump and restore using PHPMyAdmin
-You can also refer to [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide provides guidance that will lead the successful planning and execution of a MySQL migration to Azure.
+You can also refer to [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide helps you plan and execute a successful MySQL migration to Azure.
## Before you begin To step through this how-to guide, you need to have:
Add the connection information into your MySQL Workbench.
## Preparing the target Azure Database for MySQL server for fast data loads To prepare the target Azure Database for MySQL server for faster data loads, the following server parameters and configuration need to be changed. - max_allowed_packet – set to 1073741824 (that is, 1 GB) to prevent any overflow issue due to long rows.-- slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.-- query_store_capture_mode – set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
+- slow_query_log – set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
+- query_store_capture_mode – set to NONE to turn off the Query Store. This eliminates the overhead caused by sampling activities by Query Store.
- innodb_buffer_pool_size – Scale up the server to 32 vCore Memory Optimized SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for Azure Database for MySQL server. - innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed. - innodb_read_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration. A CLI sketch for setting the dynamic parameters follows this list.
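
The following Azure CLI sketch shows one way the dynamic parameters above might be set; the portal remains the documented path. The server name `mydemoserver` and resource group `myresourcegroup` are placeholders, and the values mirror the list above.

```azurecli
# Placeholder server and resource group names; adjust to your environment.
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name max_allowed_packet --value 1073741824
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name slow_query_log --value OFF
az mysql server configuration set --resource-group myresourcegroup --server-name mydemoserver \
    --name query_store_capture_mode --value NONE
```
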
For known issues, tips and tricks, we recommend you to look at our [techcommunit
## Next steps - [Connect applications to Azure Database for MySQL](./how-to-connection-string.md). - For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).-- If you're looking to migrate large databases with database sizes more than 1 TBs, you may want to consider using community tools like **mydumper/myloader** which supports parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+- If you're looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-import-export.md
Title: Import and export - Azure Database for MySQL description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.--++
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md
description: This article describes the metrics for monitoring and alerting for
--++ Last updated 06/20/2022
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md
When provisioning a new Azure Database for MySQL server through the Azure portal
Connection strings for various programming languages are shown in the Azure portal. Those connection strings include the required SSL parameters to connect to your database. In the Azure portal, select your server. Under the **Settings** heading, select the **Connection strings**. The SSL parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations.
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently customers can **only use** the predefined certificate to connect to an Azure Database for MySQL server which is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem.
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Currently, customers can **only use** the predefined certificate, located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem, to connect to an Azure Database for MySQL server.
Similarly, the following links point to the certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
Azure Database for MySQL provides the ability to enforce the TLS version for the
| TLS1_2 | TLS version 1.2 and higher |
-For example, setting the value of minimum TLS setting version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting this to 1.2 means that you only allow connections from clients using TLS 1.2+ and all connections with TLS 1.0 and TLS 1.1 will be rejected.
+For example, setting the minimum TLS version to TLS 1.0 means your server allows connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting it to 1.2 means that you only allow connections from clients using TLS 1.2+, and all connections with TLS 1.0 and TLS 1.1 are rejected.
> [!NOTE] > By default, Azure Database for MySQL does not enforce a minimum TLS version (the setting `TLSEnforcementDisabled`). > > Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
-The minimum TLS version setting doesnt require any restart of the server can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](how-to-tls-configurations.md).
+The minimum TLS version setting doesn't require a restart of the server and can be set while the server is online. To learn how to set the TLS setting for your Azure Database for MySQL, refer to [How to configure TLS setting](how-to-tls-configurations.md).
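
As a minimal sketch, assuming a server named `mydemoserver` in resource group `myresourcegroup` (both placeholders), the minimum TLS version can also be set with the Azure CLI; the linked how-to remains the authoritative reference.

```azurecli
# Enforce TLS 1.2 as the minimum version for client connections (placeholder names).
az mysql server update --resource-group myresourcegroup --name mydemoserver --minimal-tls-version TLS1_2
```
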
## Cipher support by Azure Database for MySQL single server
-As part of the SSL/TLS communication, the cipher suites are validated and only support cipher suits are allowed to communicate to the database serer. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If the cipher suites doesn't match one of suites listed below, incoming client connections will be rejected.
+As part of the SSL/TLS communication, the cipher suites are validated and only supported cipher suites are allowed to communicate with the database server. The cipher suite validation is controlled in the [gateway layer](concepts-connectivity-architecture.md#connectivity-architecture) and not explicitly on the node itself. If the cipher suites don't match one of the suites listed below, incoming client connections are rejected.
### Cipher suite supported
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Last updated 06/20/2022
In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-This topic assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL.
+This article assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL.
## Prerequisites
Get the connection information needed to connect to the Azure Database for MySQL
1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file. 1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`. 1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
-1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
+1. On Windows, if the node application isn't in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
## Connect, create table, and insert data
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md
Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL'
+ Title: "Quickstart: Connect - MySQL Workbench - Azure Database for MySQL"
description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL.+++ Last updated : 04/18/2023 --- Previously updated : 06/20/2022+
+ - mvc
+ - mode-other
# Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL
This quickstart demonstrates how to connect to an Azure Database for MySQL using
## Prerequisites This quickstart uses the resources created in either of these guides as a starting point:+ - [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) - [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-> [!IMPORTANT]
+> [!IMPORTANT]
> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) ## Install MySQL Workbench+ Download and install MySQL Workbench on your computer from [the MySQL website](https://dev.mysql.com/downloads/workbench/). ## Get connection information+ Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+1. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+
+1. Select the server name.
-3. Click the server name.
+1. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-php/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name" lightbox="./media/connect-php/1-server-overview-name-login.png":::
-4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-php/1-server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+## Connect to the server by using MySQL Workbench
-## Connect to the server by using MySQL Workbench
To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
-1. Launch the MySQL Workbench application on your computer.
+1. Launch the MySQL Workbench application on your computer.
-2. In **Setup New Connection** dialog box, enter the following information on the **Parameters** tab:
+1. In **Setup New Connection** dialog box, enter the following information on the **Parameters** tab:
+ :::image type="content" source="./media/connect-workbench/2-setup-new-connection.png" alt-text="setup new connection" lightbox="./media/connect-workbench/2-setup-new-connection.png":::
-| **Setting** | **Suggested value** | **Field description** |
-||||
-| Connection Name | Demo Connection | Specify a label for this connection. |
-| Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
-| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. |
-| Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
-| Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you do not remember the username. The format is *username\@servername*.
-| Password | your password | Click **Store in Vault...** button to save the password. |
+ | **Setting** | **Suggested value** | **Field description** |
+ | | | |
+ | Connection Name | Demo Connection | Specify a label for this connection. |
+ | Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
+ | Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you don't remember your server name. |
+ | Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
+ | Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin@mydemoserver. Follow the steps in the previous section to get the connection information if you don't remember the username. The format is *username\@servername*.
+ | Password | your password | Select **Store in Vault...** button to save the password. |
+
+1. Select **Test Connection** to test if all parameters are correctly configured.
-3. Click **Test Connection** to test if all parameters are correctly configured.
+1. Select **OK** to save the connection.
-4. Click **OK** to save the connection.
+1. In the listing of **MySQL Connections**, select the tile corresponding to your server, and then wait for the connection to be established.
-5. In the listing of **MySQL Connections**, click the tile corresponding to your server, and then wait for the connection to be established.
+ A new SQL tab opens with a blank editor where you can type your queries.
- A new SQL tab opens with a blank editor where you can type your queries.
-
- > [!NOTE]
- > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certification with MySQL Workbench. For more information on how to download and bind the certification, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). If you need to disable SSL, visit the Azure portal and click the Connection security page to disable the Enforce SSL connection toggle button.
+ > [!NOTE]
+ > By default, SSL connection security is required and enforced on your Azure Database for MySQL server. Although typically no additional configuration with SSL certificates is required for MySQL Workbench to connect to your server, we recommend binding the SSL CA certification with MySQL Workbench. For more information on how to download and bind the certification, see [Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](./how-to-configure-ssl.md). If you need to disable SSL, visit the Azure portal and select the Connection security page to disable the Enforce SSL connection toggle button.
## Create a table, insert data, read data, update data, delete data+ 1. Copy and paste the sample SQL code into a blank SQL tab to illustrate some sample data. This code creates an empty database named quickstartdb, and then creates a sample table named inventory. It inserts some rows, then reads the rows. It changes the data with an update statement, and reads the rows again. Finally it deletes a row, and then reads the rows again.
-
+ ```sql -- Create a database -- DROP DATABASE IF EXISTS quickstartdb;
To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
``` The screenshot shows an example of the SQL code in SQL Workbench and the output after it has been run.
-
- :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code":::
-2. To run the sample SQL Code, click the lightening bolt icon in the toolbar of the **SQL File** tab.
-3. Notice the three tabbed results in the **Result Grid** section in the middle of the page.
-4. Notice the **Output** list at the bottom of the page. The status of each command is shown.
+ :::image type="content" source="media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code" lightbox="media/connect-workbench/3-workbench-sql-tab.png":::
+
+1. To run the sample SQL code, select the lightning bolt icon in the toolbar of the **SQL File** tab.
+
+1. Notice the three tabbed results in the **Result Grid** section in the middle of the page.
+
+1. Notice the **Output** list at the bottom of the page. The status of each command is shown.
Now, you have connected to Azure Database for MySQL by using MySQL Workbench, and you have queried data using the SQL language.
az group delete \
``` ## Next steps+ > [!div class="nextstepaction"] > [Migrate your database using Export and Import](./concepts-migrate-import-export.md)
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dsv4|Standard_D16s_v4|16|64| |Dsv4|Standard_D32s_v4|32|128| |Dsv4|Standard_D64s_v4|64|256|
-|Dsv4|Standard_D96s_v4|96|384|
|Dsv5|Standard_D4s_v5|4|16| |Dsv5|Standard_D8s_v5|8|32| |Dsv5|Standard_D16s_v5|16|64|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Esv4|Standard_E32s_v4|32|256| |Esv4|Standard_E48s_v4|48|384| |Esv4|Standard_E64s_v4|64|504|
-|Esv4|Standard_E96s_v4|96|672|
|Esv5|Standard_E2s_v5|2|16| |Esv5|Standard_E4s_v5|4|32| |Esv5|Standard_E8s_v5|8|64|
operator-nexus Howto Baremetal Bmc Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmc-ssh.md
+
+ Title: Manage emergency access to a bare metal machine using the `az networkcloud cluster bmckeyset` command for Azure Operator Nexus
+description: Step by step guide on using the `az networkcloud cluster bmckeyset` command to manage emergency access to a bare metal machine.
++++ Last updated : 04/18/2023+++
+# Manage emergency access to a bare metal machine using the `az networkcloud cluster bmckeyset`
+
+> [!CAUTION]
+> Please note this process is used in emergency situations when all other troubleshooting options via Azure have been exhausted. SSH access to these bare metal machines (BMM) is restricted to users managed via this method from the specified jump host list.
+
+There are rare situations where a user needs to investigate & resolve issues with a BMM and all other ways using Azure have been exhausted. Operator Nexus provides the `az networkcloud cluster bmckeyset` command so users can manage SSH access to the baseboard management controller (BMC) on these BMMs.
+
+When the command runs, it executes on each BMM in the Cluster. If a BMM is unavailable or powered off at the time of command execution, the command status reflects which BMMs the command couldn't run on. A reconciliation process runs periodically and retries the command on any BMM that wasn't available at the time of the original command. Multiple commands execute in the order received.
+
+A maximum of 12 users can be defined per Cluster. Attempts to add more than 12 users result in an error. Delete a user before adding another one when 12 already exist.
+
+## Prerequisites
+
+- Install the latest version of the
+ [appropriate CLI extensions](./howto-install-cli-extensions.md)
+- The on-premises Cluster must have connectivity to Azure.
+- Get the resource group name that you created for the `Cluster` resource.
+- The process applies keysets to all running BMMs.
+- The users added must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md).
+- To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bmcKeySets`. The options are `/read`, `/write`, and `/delete` (see the sketch after this list).
+
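+The following is a hedged sketch of one way such a custom role could be created with the Azure CLI. The role name, description, and subscription scope are placeholders and aren't prescribed by this article; only the action strings come from the list above.
+
+```azurecli
+# Illustrative only: role name, description, and scope are placeholders.
+az role definition create --role-definition '{
+  "Name": "BMC Keyset Manager (example)",
+  "Description": "Example role limited to BMC keyset management.",
+  "Actions": [
+    "Microsoft.NetworkCloud/clusters/bmcKeySets/read",
+    "Microsoft.NetworkCloud/clusters/bmcKeySets/write",
+    "Microsoft.NetworkCloud/clusters/bmcKeySets/delete"
+  ],
+  "AssignableScopes": ["/subscriptions/<subscription-id>"]
+}'
+```
+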
+## Creating a BMC keyset
+
+The `bmckeyset create` command creates SSH access to the BMM in a Cluster for a group of users.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster bmckeyset create \
+ --name <BMC Keyset Name> \
+ --extended-location name=<Extended Location ARM ID> \
+ type="CustomLocation" \
+ --location <Azure Region> \
+ --azure-group-id <Azure AAD Group ID> \
+ --expiration <Expiration Timestamp> \
+ --jump-hosts-allowed <List of jump server IP addresses> \
+ --privilege-level <"Administrator" or "ReadOnly"> \
+ --user-list '[{"description":"<User description>","azureUserName":"<User Name>", \
+ "sshPublicKey":{"keyData":"<SSH Public Key>"}}]' \
+ --tags key1=<Key Value> key2=<Key Value> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name>
+```
+
+### Create Arguments
+
+```azurecli
+ --azure-group-id [Required] : The object ID of Azure Active Directory
+ group that all users in the list must
+ be in for access to be granted. Users
+ that are not in the group do not have
+ access.
+ --bmc-key-set-name --name -n [Required] : The name of the BMC key set.
+ --cluster-name [Required] : The name of the cluster.
+ --expiration [Required] : The date and time after which the users
+ in this key set are removed from
+ the BMCs. The limit is up to 1 year from creation.
+ Format is "YYYY-MM-DDTHH:MM:SS.000Z"
+ --extended-location [Required] : The extended location of the cluster
+ associated with the resource.
+ Usage: --extended-location name=XX type=XX
+ name: Required. The resource ID of the extended location on which the resource is created.
+ type: Required. The extended location type: "CustomLocation".
+ --privilege-level [Required] : The access level allowed for the users
+ in this key set. Allowed values:
+ "Standard" or "Superuser".
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+ --user-list [Required] : The unique list of permitted users.
+ Usage: --user-list azure-user-name=XX description=XX key-data=XX
+ azure-user-name: Required. The Azure Active Directory user name (email name).
+ description: The free-form description for this user.
+ key-data: Required. The public ssh key of the user.
+
+ Multiple users can be specified by using more than one --user-list argument.
+ --tags : Space-separated tags: key[=value]
+ [key[=value] ...]. Use '' to clear
+ existing tags.
+ --location -l : Azure Region. Values from: `az account
+ list-locations`. You can configure the
+ default location using `az configure
+ --defaults location=<location>`.
+ --no-wait : Do not wait for the long-running
+ operation to finish.
+```
+
+### Global Azure CLI arguments (applicable to all commands)
+
+```azurecli
+ --debug : Increase logging verbosity to show all
+ debug logs.
+ --help -h : Show this help message and exit.
+ --only-show-errors : Only show errors, suppressing warnings.
+ --output -o : Output format. Allowed values: json,
+ jsonc, none, table, tsv, yaml, yamlc.
+ Default: json.
+ --query : JMESPath query string. See
+ http://jmespath.org/ for more
+ information and examples.
+ --subscription [Required] : Name or ID of subscription. Optional if
+ configuring the default subscription
+ using `az account set -s NAME_OR_ID`.
+ --verbose : Increase logging verbosity. Use --debug
+ for full debug logs.
+```
+
+This example creates a new keyset with two users that have standard access from two jump hosts.
+
+```azurecli
+az networkcloud cluster bmckeyset create \
+ --name "bmcKeySetName" \
+ --extended-location name="/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ExtendedLocation/customLocations/clusterExtendedLocationName" \
+ type="CustomLocation" \
+ --location "location" \
+ --azure-group-id "f110271b-XXXX-4163-9b99-214d91660f0e" \
+ --expiration "2023-12-31T23:59:59.008Z" \
+ --privilege-level "Standard" \
+ --user-list '[{"description":"Needs access for troubleshooting as a part of the support team",\
+ "azureUserName":"userABC","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXISTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}},\
+ {"description":"Needs access for troubleshooting as a part of the support team",\
+ "azureUserName":"userXYZ","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXTSTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}}]' \
+ --tags key1="myvalue1" key2="myvalue2" \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```
+
+For assistance in creating the `--user-list` structure, see [Azure CLI Shorthand](https://github.com/Azure/azure-cli/blob/dev/doc/shorthand_syntax.md).
+
+## Deleting a BMC keyset
+
+The `bmckeyset delete` command removes SSH access to the BMC for a group of users. All members of the group will no longer have SSH access to any of the BMCs in the Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster bmckeyset delete \
+ --name <BMM Keyset Name> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name> \
+```
+
+### Delete Arguments
+
+```azurecli
+ --bmc-key-set-name --name -n [Required] : The name of the BMC key set to be deleted.
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. Optional if configuring the
+ default group using `az configure --defaults
+ group=<name>`.
+ --no-wait : Do not wait for the long-running operation to finish.
+ --yes -y : Do not prompt for confirmation.
+```
+
+This example removes the "bmcKeysetName" keyset group in the "clusterName" Cluster.
+
+```azurecli
+az networkcloud cluster bmckeyset delete \
+ --name "bmcKeySetName" \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName" \
+```
+
+## Updating a BMC Keyset
+
+The `bmckeyset update` command allows users to make changes to an existing keyset group.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster bmckeyset update \
+ --name <BMM Keyset Name> \
+ --jump-hosts-allowed <List of jump server IP addresses> \
+ --privilege-level <"Standard" or "Superuser"> \
+ --user-list '[{"description":"<User description>",\
+ "azureUserName":"<UserName>", \
+ "sshPublicKey":{"keyData":"<SSH Public Key>"}}]' \
+ --tags key1=<Key Value> key2=<Key Value> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name>
+```
+
+### Update Arguments
+
+```azurecli
+ --bmc-key-set-name --name -n [Required] : The name of the BMC key set.
+ --cluster-name [Required] : The name of the cluster.
+ --expiration : The date and time after which the users
+ in this key set are removed from
+ the BMCs. Format is:
+ "YYYY-MM-DDTHH:MM:SS.000Z"
+ --jump-hosts-allowed : The list of IP addresses of jump hosts
+ with management network access from
+ which a login is allowed for the
+ users. Supports IPv4 or IPv6 addresses.
+ --privilege-level : The access level allowed for the users
+ in this key set. Allowed values:
+ "Standard" or "Superuser".
+ --user-list : The unique list of permitted users.
+ Usage: --user-list azure-user-name=XX description=XX key-data=XX
+ azure-user-name: Required. The Azure Active Directory user name (email name).
+ description: The free-form description for this user.
+ key-data: Required. The public SSH key of the user.
+
+ Multiple users can be specified by using more than one --user-list argument.
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+ --tags : Space-separated tags: key[=value]
+ [key[=value] ...]. Use '' to clear
+ existing tags.
+ --no-wait : Do not wait for the long-running
+ operation to finish.
+```
+
+This example adds two new users to the "bmcKeySetName" group and changes the expiry time for the group.
+
+```azurecli
+az networkcloud cluster bmckeyset update \
+ --name "bmcKeySetName" \
+ --expiration "2023-12-31T23:59:59.008Z" \
+ --user-list '[{"description":"Needs access for troubleshooting as a part of the support team",\
+ "azureUserName":"userDEF","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXISTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}}]\
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```
+
+## Listing BMC Keysets
+
+The `bmckeyset list` command allows users to see the existing keyset groups in a Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster bmckeyset list \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name>
+```
+
+### List Arguments
+
+```azurecli
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+```
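+
+For example, using the same placeholder Cluster and resource group names as the earlier examples in this article:
+
+```azurecli
+az networkcloud cluster bmckeyset list \
+  --cluster-name "clusterName" \
+  --resource-group "resourceGroupName"
+```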
+
+## Show BMC Keyset Details
+
+The `bmckeyset show` command allows users to see the details of an existing keyset group in a Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster bmckeyset show \
+ --name <BMC Keyset Name> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name>
+```
+
+### Show Arguments
+
+```azurecli
+ --bmc-key-set-name --name -n [Required] : The name of the BMC key set.
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. You can
+ configure the default group using `az
+ configure --defaults group=<name>`.
+```
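+
+For example, to show the "bmcKeySetName" keyset used in the earlier examples in this article:
+
+```azurecli
+az networkcloud cluster bmckeyset show \
+  --name "bmcKeySetName" \
+  --cluster-name "clusterName" \
+  --resource-group "resourceGroupName"
+```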
operator-nexus Howto Baremetal Bmm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md
+
+ Title: Manage emergency access to a bare metal machine using the `az networkcloud cluster baremetalmachinekeyset` command for Azure Operator Nexus
+description: Step by step guide on using the `az networkcloud cluster baremetalmachinekeyset` command to manage emergency access to a bare metal machine.
++++ Last updated : 04/18/2023+++
+# Manage emergency access to a bare metal machine using the `az networkcloud cluster baremetalmachinekeyset`
+
+> [!CAUTION]
+> Please note this process is used in emergency situations when all other troubleshooting options using Azure have been exhausted. SSH access to these bare metal machines (BMM) is restricted to users managed via this method from the specified jump host list.
+
+There are rare situations where a user needs to investigate & resolve issues with a BMM and all other options via Azure have been exhausted. Azure Operator Nexus provides the `az networkcloud cluster baremetalmachinekeyset` command so users can manage SSH access to these BMMs.
+
+When the command runs, it executes on each BMM in the Cluster. If a BMM is unavailable or powered off at the time of command execution, the command status reflects which BMMs the command couldn't run on. A reconciliation process runs periodically and retries the command on any BMM that wasn't available at the time of the original command. Multiple commands execute in the order received.
+
+There's no limit to the number of users in a group.
+
+> [!CAUTION]
+> Notes for jump host IP addresses
+
+- The keyset create/update process adds the jump host IP addresses to the IP tables for the Cluster. The process adds these addresses to IP tables and restricts SSH access to only those IPs.
+- It's important to specify the Cluster-facing IP addresses for the jump hosts. These IP addresses may be different from the public-facing IP address used to access the jump host.
+- Once added, users are able to access BMMs from any specified jump host IP including a jump host IP defined in another BMM keyset group.
+- Existing SSH access remains when adding the first BMM keyset. However, the keyset command limits an existing user's SSH access to the jump host IPs specified in the keyset commands, as sketched below.
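+
+As a sketch of that last point, the allowed jump host IPs can be narrowed later with an update. The keyset, Cluster, and resource group names reuse the placeholders from the examples later in this article.
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset update \
+  --name "bareMetalMachineKeySetName" \
+  --jump-hosts-allowed "192.0.2.1" "192.0.2.5" \
+  --cluster-name "clusterName" \
+  --resource-group "resourceGroupName"
+```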
+
+## Prerequisites
+
+- Install the latest version of the
+ [appropriate CLI extensions](./howto-install-cli-extensions.md)
+- The on-premises Cluster must have connectivity to Azure.
+- Get the resource group name that you created for the `Cluster` resource.
+- The process applies keysets to all running BMMs.
+- The added users must be part of an Azure Active Directory (Azure AD) group. For more information, see [How to Manage Groups](../active-directory/fundamentals/how-to-manage-groups.md).
+- To restrict access for managing keysets, create a custom role. For more information, see [Azure Custom Roles](../role-based-access-control/custom-roles.md). In this instance, add or exclude permissions for `Microsoft.NetworkCloud/clusters/bareMetalMachineKeySets`. The options are `/read`, `/write` and `/delete`.
++
+## Creating a bare metal machine keyset
+
+The `baremetalmachinekeyset create` command creates SSH access to the BMM in a Cluster for a group of users.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset create \
+ --name <BMM Keyset Name> \
+ --extended-location name=<Extended Location ARM ID> \
+ type="CustomLocation" \
+ --location <Azure Region> \
+ --azure-group-id <Azure AAD Group ID> \
+ --expiration <Expiration Timestamp> \
+ --jump-hosts-allowed <List of jump server IP addresses> \
+ --os-group-name <Name of the Operating System Group> \
+ --privilege-level <"Standard" or "Superuser"> \
+ --user-list '[{"description":"<User List Description>","azureUserName":"<User Name>",\
+ "sshPublicKey":{"keyData":"<SSH Public Key>"}}]' \
+ --tags key1=<Key Value> key2=<Key Value> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group>
+```
+
+### Create Arguments
+
+```azurecli
+ --azure-group-id [Required] : The object ID of Azure Active Directory
+ group that all users in the list must
+ be in for access to be granted. Users
+ that are not in the group do not have
+ access.
+ --bare-metal-machine-key-set-name --name -n [Required] : The name of the bare metal machine key
+ set.
+ --cluster-name [Required] : The name of the cluster.
+ --expiration [Required] : The date and time after which the users
+ in this key set are removed from
+ the bare metal machines. Format is:
+ "YYYY-MM-DDTHH:MM:SS.000Z"
+ --extended-location [Required] : The extended location of the cluster
+ associated with the resource.
+ Usage: --extended-location name=XX type=XX
+ name: Required. The resource ID of the extended location on which the resource is created.
+ type: Required. The extended location type: "CustomLocation".
+ --jump-hosts-allowed [Required] : The list of IP addresses of jump hosts
+ with management network access from
+ which a login is allowed for the
+ users. Supports IPv4 or IPv6 addresses.
+ --privilege-level [Required] : The access level allowed for the users
+ in this key set. Allowed values:
+ "Standard" or "Superuser".
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+ --user-list [Required] : The unique list of permitted users.
+ Usage: --user-list azure-user-name=XX description=XX key-data=XX
+ azure-user-name: Required. The Azure Active Directory user name (email name).
+ description: The free-form description for this user.
+ key-data: Required. The public ssh key of the user.
+
+ Multiple users can be specified by using more than one --user-list argument.
+ --os-group-name : The name of the group that users are assigned
+ to on the operating system of the machines.
+ --tags : Space-separated tags: key[=value]
+ [key[=value] ...]. Use '' to clear
+ existing tags.
+ --location -l : Azure Region. Values from: `az account
+ list-locations`. You can configure the
+ default location using `az configure
+ --defaults location=<location>`.
+ --no-wait : Do not wait for the long-running
+ operation to finish.
+```
+
+### Global Azure CLI arguments (applicable to all commands)
+
+```azurecli
+ --debug : Increase logging verbosity to show all
+ debug logs.
+ --help -h : Show this help message and exit.
+ --only-show-errors : Only show errors, suppressing warnings.
+ --output -o : Output format. Allowed values: json,
+ jsonc, none, table, tsv, yaml, yamlc.
+ Default: json.
+ --query : JMESPath query string. See
+ http://jmespath.org/ for more
+ information and examples.
+ --subscription [Required] : Name or ID of subscription. Optional if
+ configuring the default subscription
+ using `az account set -s NAME_OR_ID`.
+ --verbose : Increase logging verbosity. Use --debug
+ for full debug logs.
+```
+
+This example creates a new keyset with two users that have standard access from two jump hosts.
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset create \
+ --name "bareMetalMachineKeySetName" \
+ --extended-location name="/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ExtendedLocation/customLocations/clusterExtendedLocationName" \
+ type="CustomLocation" \
+ --location "location" \
+ --azure-group-id "f110271b-XXXX-4163-9b99-214d91660f0e" \
+ --expiration "2022-12-31T23:59:59.008Z" \
+ --jump-hosts-allowed "192.0.2.1" "192.0.2.5" \
+ --os-group-name "standardAccessGroup" \
+ --privilege-level "Standard" \
+ --user-list '[{"description":"Needs access for troubleshooting as a part of the support team","azureUserName":"userABC", "sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXISTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}},\
+ {"description":"Needs access for troubleshooting as a part of the support team","azureUserName":"userXYZ","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXTSTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}}]' \
+ --tags key1="myvalue1" key2="myvalue2" \
+ --cluster-name "clusterName"
+ --resource-group "resourceGroupName"
+```
+
+For assistance in creating the `--user-list` structure, see [Azure CLI Shorthand](https://github.com/Azure/azure-cli/blob/dev/doc/shorthand_syntax.md).
+
+## Deleting a bare metal machine keyset
+
+The `baremetalmachinekeyset delete` command removes SSH access to the BMM for a group of users. All members of the group no longer have SSH access to any of the BMMs in the Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset delete \
+ --name <BMM Keyset Name> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group Name>
+```
+
+### Delete Arguments
+
+```azurecli
+ --bare-metal-machine-key-set-name --name -n [Required] : The name of the bare metal machine key set to be
+ deleted.
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. Optional if configuring the
+ default group using `az configure --defaults
+ group=<name>`.
+ --no-wait : Do not wait for the long-running operation to
+ finish.
+ --yes -y : Do not prompt for confirmation.
+```
+
+This example removes the "bareMetalMachineKeySetName" keyset group in the "clusterName" Cluster.
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset delete \
+ --name "bareMetalMachineKeySetName" \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```
+
+## Updating a Bare Metal Machine Keyset
+
+The `baremetalmachinekeyset update` command allows users to make changes to an existing keyset group.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset update \
+ --name <BMM Keyset Name> \
+ --jump-hosts-allowed <List of jump server IP addresses> \
+ --privilege-level <"Standard" or "Superuser"> \
+ --user-list '[{"description":"<User List Description>","azureUserName":"<User Name>",\
+ "sshPublicKey":{"keyData":"<SSH Public Key>"}}]' \
+ --tags key1=<Key Value> key2=<Key Value> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group>
+```
+
+### Update Arguments
+
+```azurecli
+ --bare-metal-machine-key-set-name --name -n [Required] : The name of the BMM key set.
+ --cluster-name [Required] : The name of the cluster.
+ --expiration : The date and time after which the users
+ in this key set are removed from
+ the BMMs. Format is:
+ "YYYY-MM-DDTHH:MM:SS.000Z"
+ --jump-hosts-allowed : The list of IP addresses of jump hosts
+ with management network access from
+ which a login is allowed for the
+ users. Supports IPv4 or IPv6 addresses.
+ --privilege-level : The access level allowed for the users
+ in this key set. Allowed values:
+ "Standard" or "Superuser".
+ --user-list : The unique list of permitted users.
+ Usage: --user-list azure-user-name=XX description=XX key-data=XX
+ azure-user-name: Required. The Azure Active Directory user name (email name).
+ description: The free-form description for this user.
+ key-data: Required. The public SSH key of the user.
+
+ Multiple users can be specified by using more than one --user-list argument.
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+ --tags : Space-separated tags: key[=value]
+ [key[=value] ...]. Use '' to clear
+ existing tags.
+ --no-wait : Do not wait for the long-running
+ operation to finish.
+```
+
+This example adds two new users to the "bareMetalMachineKeySetName" group and changes the expiry time for the group.
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset update \
+ --name "bareMetalMachineKeySetName" \
+ --expiration "2023-12-31T23:59:59.008Z" \
+ --user-list '[{"description":"Needs access for troubleshooting as a part of the support team",\
+ "azureUserName":"userABC","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXISTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}},\
+ {"description":"Needs access for troubleshooting as a part of the support team",\
+ "azureUserName":"userXYZ","sshPublicKey":{"keyData":"ssh-rsa AAtsE3njSONzDYRIZv/WLjVuMfrUSByHp+jfaaOLHTIIB4fJvo6dQUZxE20w2iDHV3tEkmnTo84eba97VMueQD6OzJPEyWZMRpz8UYWOd0IXeRqiFu1lawNblZhwNT/ojNZfpB3af/YDzwQCZgTcTRyNNhL4o/blKUmug0daSsSXTSTRnIDpcf5qytjs1XoyYyJMvzLL59mhAyb3p/cD+Y3/s3WhAx+l0XOKpzXnblrv9d3q4c2tWmm/SyFqthaqd0= admin@vm"}}]' \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```
+
+## Listing Bare Metal Machine Keysets
+
+The `baremetalmachinekeyset list` command allows users to see the existing keyset groups in a Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset list \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group>
+```
+
+### List Arguments
+
+```azurecli
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. Optional if
+ configuring the default group using `az
+ configure --defaults group=<name>`.
+```
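+
+For example, a list request for the Cluster used in the earlier examples might look like this:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset list \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```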
+
+## Show Bare Metal Machine Keyset Details
+
+The `baremetalmachinekeyset show` command allows users to see the details of an existing keyset group in a Cluster.
+
+The command syntax is:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset show \
+ --name <BMM Keyset Name> \
+ --cluster-name <Cluster Name> \
+ --resource-group <Resource Group>
+```
+
+### Show Arguments
+
+```azurecli
+ --bare-metal-machine-key-set-name --name -n [Required] : The name of the bare metal machine key
+ set.
+ --cluster-name [Required] : The name of the cluster.
+ --resource-group -g [Required] : Name of resource group. You can
+ configure the default group using `az
+ configure --defaults group=<name>`.
+```
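+
+For example, a show request for the keyset used in the earlier examples might look like this:
+
+```azurecli
+az networkcloud cluster baremetalmachinekeyset show \
+ --name "bareMetalMachineKeySetName" \
+ --cluster-name "clusterName" \
+ --resource-group "resourceGroupName"
+```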
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
This article describes how to create a Network Fabric by using the Azure Command
## Fabric Configuration
-The following table specifies parameters used to create Network Fabric
+The following table specifies the parameters used to create a Network Fabric.
+
+**$prefix:** /subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers
| Parameter | Description | Example | Required |
|--|--|--|--|
The following table specifies parameters used to create Network Fabric
| location | Operator-Nexus Azure region | "eastus" |True |
| resource-name | Name of the FabricResource | NF-ResourceName |True |
| nf-sku |Fabric SKU ID is the SKU of the ordered BoM. Two SKUs are supported (**M4-A400-A100-C16-aa** and **M8-A400-A100-C16-aa**). | M4-A400-A100-C16-aa |True | String|
-|nfc-id|Network Fabric Controller ARM resource id|/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName|True |
+|nfc-id|Network Fabric Controller ARM resource id|**$prefix**/NFCName|True |
|rackcount|Number of compute racks per fabric. Possible values are 2-8|8|True |
|serverCountPerRack|Number of compute servers per rack. Possible values are 4, 8, 12 or 16|16|True |
|ipv4Prefix|IPv4 Prefix of the management network. This Prefix should be unique across all Network Fabrics in a Network Fabric Controller. Prefix length should be at least 19 (/20 isn't allowed, /18 and lower are allowed) | 10.246.0.0/19|True |
Expected output:
"networkFabricSku": "NFSKU", "operationalState": null, "provisioningState": "Accepted",
- "rackCount": 3,
+ "rackCount": 4,
"racks": null, "resourceGroup": "NFResourceGroupName", "routerId": null,
- "serverCountPerRack": 7,
+ "serverCountPerRack": 8,
"systemData": { "createdAt": "2023-XX-X-6T12:52:11.769525+00:00", "createdBy": "email@address.com",
Expected output:
"networkFabricSku": "NFSKU", "operationalState": null, "provisioningState": "Succeeded",
- "rackCount": 3,
+ "rackCount": 4,
"racks": [ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack", "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
Expected output:
], "resourceGroup": "NFResourceGroup", "routerId": null,
- "serverCountPerRack": 7,
+ "serverCountPerRack": 8,
"systemData": { "createdAt": "2023-XX-XXT12:52:11.769525+00:00", "createdBy": "email@address.com",
Expected output:
"networkFabricSku": "NFSKU", "operationalState": "Provisioned", "provisioningState": "Succeeded",
- "rackCount": 3,
+ "rackCount": 4,
"racks": [ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-aggrack", "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourcegroups/NFResourceGroup/providers/microsoft.managednetworkfabric/networkracks/NFName-comprack1",
Expected output:
], "resourceGroup": "NFResourceGroup", "routerId": null,
- "serverCountPerRack": 7,
+ "serverCountPerRack": 8,
"systemData": { "createdAt": "2023-XX-XXT12:52:11.769525+00:00", "createdBy": "email@address.com",
The following table specifies parameters used to create Network to Network Inter
|| |*layer2Configuration*| Layer 2 configuration || ||
-|portCount| Number of ports that are part of the port-channel. Maximum value is based on Fabric SKU|2||
+|portCount| Number of ports that are part of the port-channel. Maximum value is based on Fabric SKU|3||
|mtu| Maximum transmission unit between CE and PE. |1500|| || |*layer3Configuration*| Layer 3 configuration between CEs and PEs||True
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Previously updated : 10/10/2022 Last updated : 04/19/2023
A list of features of any Azure Native ISV Service is listed below.
- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service using just a few gestures. You can configure auto-discovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create additional infrastructure or write custom code. - VNet injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks. - Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.-
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
Azure Native ISV Services are available through the Marketplace.
|Partner |Description |
|--|--|
|[Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. |
-|[Azure Native Qumulo Scalable File Service Preview](qumulo/qumulo-overview.md) | Multi-petabyte scale, single namespace, multi-protocol file data platform with the performance, security, and simplicity to meet the most demanding enterprise workloads. |
+|[Azure Native Qumulo Scalable File Service](qumulo/qumulo-overview.md) | Multi-petabyte scale, single namespace, multi-protocol file data platform with the performance, security, and simplicity to meet the most demanding enterprise workloads. |
## Networking and security
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Previously updated : 07/05/2022 Last updated : 04/18/2023 # Comparison chart - Azure Database for PostgreSQL Single Server and Flexible Server
The following table provides a list of high-level features and capabilities comp
| Support for PgLogical extension | No | Yes |
| Support logical replication with HA | N/A | [Limited](concepts-high-availability.md#high-availabilitylimitations) |
| **Disaster Recovery** | | |
-| Cross region DR | Using read replicas, geo-redundant backup | Geo-redundant backup (in [selected regions](overview.md#azure-regions)) |
-| DR using replica | Using async physical replication | Preview |
+| Cross region DR | Using read replicas, geo-redundant backup | Using read replicas, Geo-redundant backup (in [selected regions](overview.md#azure-regions)) |
+| DR using replica | Using async physical replication | Using async physical replication |
| Automatic failover | No | No |
| Can use the same r/w endpoint | No | No |
| **Backup and Recovery** | | |
private-5g-core Azure Private 5G Core Release Notes 2209 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2209.md
This article applies to the Azure Private 5G Core 2209 release (PMN-4-17-2). Thi
- **Updated template for Log Analytics** - There is a new version of the Log Analytics Dashboard Quickstart template. This is required to view metrics on Packet Core versions 4.17 and above. To continue using your Log Analytics Dashboard, you must redeploy it with the new template. See [Create an overview Log Analytics dashboard using an ARM template](./create-overview-dashboard.md).
+> [!NOTE]
+> Monitoring Azure Private 5G Core with Log Analytics is no longer supported. Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) instead.
+ ## Issues fixed in the 2209 release The following table provides a summary of issues fixed in this release.
private-5g-core Azure Stack Edge Disconnects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-disconnects.md
The following functions are not supported while disconnected:
While disconnected, you cannot enable local monitoring authentication or sign in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md) using Azure Active Directory. However, you can access both distributed tracing and packet core dashboards via local access if enabled.
-If you expect to need access to your local monitoring tools while the ASE device is disconnected, you can change your authentication method to local usernames and passwords by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
+New [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) won't be collected while in disconnected mode. Once the disconnect ends, Azure Monitor will automatically resume gathering metrics about the packet core instance.
-Once the disconnect ends, log analytics on Azure updates with the stored data, excluding rate and gauge type metrics.
+If you expect to need access to your local monitoring tools while the ASE device is disconnected, you can change your authentication method to local usernames and passwords by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
### Configuration and provisioning actions during temporary disconnects
private-5g-core Create Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-overview-dashboard.md
- Title: Create an overview Log Analytics dashboard-
-description: Information on how to use an ARM template to create an overview Log Analytics dashboard you can use to monitor a packet core instance.
---- Previously updated : 03/20/2022---
-# Create an overview Log Analytics dashboard using an ARM template
-
-> [!IMPORTANT]
-> Monitoring Azure Private 5G Core using Log Analytics will soon become unsupported. If you're considering integrating Log Analytics into your deployment, we recommend contacting your support representative to discuss options to suit your cloud monitoring needs.
-
-Log Analytics dashboards can visualize all of your saved log queries, giving you the ability to find, correlate, and share data about your private mobile network. In this how-to guide, you'll learn how to create an example overview dashboard using an Azure Resource Manager (ARM) template. This dashboard includes charts to monitor important Key Performance Indicators (KPIs) for a packet core instance's operation, including throughput and the number of connected devices.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-
-[![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-dashboard%2Fazuredeploy.json)
-
-## Prerequisites
--- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. -- Carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md).-- Collect the following information.-
- - The name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
- - The name of the resource group containing the **Kubernetes - Azure Arc** resource.
- - The Azure region in which you deployed your private mobile network.
-
-## Review the template
-
-The template used in this how-to guide is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/mobilenetwork-create-dashboard). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.mobilenetwork/mobilenetwork-create-dashboard/azuredeploy.json).
-
-The template defines one [**Microsoft.Portal/dashboards**](/azure/templates/microsoft.portal/dashboards) resource, which is a dashboard that displays data about your packet core instance's activity.
-
-## Deploy the template
-
-1. Select the following link to sign in to Azure and open a template.
-
- [![Deploy to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.mobilenetwork%2Fmobilenetwork-create-dashboard%2Fazuredeploy.json)
-
-1. Select or enter the following values, using the information you retrieved in [Prerequisites](#prerequisites).
-
- - **Subscription:** set this to the Azure subscription you used to create your private mobile network.
- - **Resource group:** set this to the resource group in which you want to create the dashboard. You can use an existing resource group or create a new one.
- - **Region:** select the region in which you deployed the private mobile network.
- - **Connected Cluster Name:** enter the name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
- - **Connected Cluster Resource Group:** enter the name of the resource group containing the **Kubernetes - Azure Arc** resource.
- - **Dashboard Display Name:** enter the name you want to use for the dashboard.
- - **Location:** leave this field unchanged.
-
- :::image type="content" source="media/create-overview-dashboard/dashboard-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the dashboard ARM template.":::
-
-1. Select **Review + create**.
-1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
-
- If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-
-1. Once your configuration has been validated, you can select **Create** to create the dashboard. The Azure portal will display a confirmation screen when the dashboard has been created.
-
-## Review deployed resources
-
-1. On the confirmation screen, select **Go to resource**.
-
- :::image type="content" source="media/create-overview-dashboard/deployment-confirmation.png" alt-text="Screenshot of the Azure portal showing a deployment confirmation for the ARM template.":::
-
-1. Select **Go to dashboard**.
-
- :::image type="content" source="media/create-overview-dashboard/go-to-dashboard-option.png" alt-text="Screenshot of the Azure portal showing the Go to dashboard option.":::
-
-1. The Azure portal displays the new overview dashboard, with several tiles providing information on important KPIs for the packet core instance.
-
- :::image type="content" source="media/create-overview-dashboard/overview-dashboard.png" alt-text="Screenshot of the Azure portal showing the overview dashboard. It includes tiles for connected devices, gNodeBs, PDU sessions and throughput." lightbox="media/create-overview-dashboard/overview-dashboard.png":::
-
-## Next steps
-
-You can now begin using the overview dashboard to monitor your packet core instance's activity. You can also use the following articles to add more queries to the dashboard.
--- [Learn more about constructing queries](monitor-private-5g-core-with-log-analytics.md#construct-queries).-- [Learn more about how to pin a query to the dashboard](../azure-monitor/visualize/tutorial-logs-dashboards.md#visualize-a-log-query).
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
Data plane packet capture works by mirroring packets to a Linux kernel interface
For more options to monitor your deployment and view analytics: -- [Learn more about enabling log analytics Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)-- [Learn more about monitoring Azure Private 5G Core using Log Analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Learn more about monitoring Azure Private 5G Core using Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
private-5g-core Enable Log Analytics For Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-log-analytics-for-private-5g-core.md
- Title: Enable Log Analytics for a packet core instance-
-description: In this how-to guide, you'll learn how to enable Log Analytics to allow you to monitor and analyze activity for a packet core instance.
---- Previously updated : 03/08/2022---
-# Enable Log Analytics for a packet core instance
-
-> [!IMPORTANT]
-> Monitoring Azure Private 5G Core using Log Analytics will soon become unsupported. If you're considering integrating Log Analytics into your deployment, we recommend contacting your support representative to discuss options to suit your cloud monitoring needs.
-
-Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network. In this how-to guide, you'll learn how to enable Log Analytics for a packet core instance.
-
-> [!IMPORTANT]
-> Log Analytics is part of Azure Monitor and is chargeable. [Estimate costs](monitor-private-5g-core-with-log-analytics.md#estimate-costs) provides information on estimating the cost of using Log Analytics to monitor your private mobile network. You shouldn't enable Log Analytics if you don't want to incur any costs. If you don't enable Log Analytics, you can still monitor your packet core instances from the local network using the [packet core dashboards](packet-core-dashboards.md).
-
-## Prerequisites
--- Identify the Kubernetes - Azure Arc resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.-- Ensure you have [Contributor](../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Kubernetes - Azure Arc resource.-- Ensure your local machine has admin kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, see [Set up kubectl access](commission-cluster.md#set-up-kubectl-access) for instructions on how to obtain this file.-
-## Create an Azure Monitor extension
-
-Follow the steps in [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md) to create an Azure Monitor extension for the Azure Arc-enabled Kubernetes cluster. Ensure that you use the instructions for the Azure CLI, and that you choose **Option 4 - On Azure Stack Edge** when you carry out [Create extension instance using Azure CLI](../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?tabs=cli#create-extension-instance).
-
-## Configure and deploy the ConfigMap
-
-In this step, you'll configure and deploy a ConfigMap which will allow Container Insights to collect Prometheus metrics from the Azure Arc-enabled Kubernetes cluster.
-
-1. Copy the following yaml file into a text editor and save it as *99-azure-monitoring-configmap.yml*.
-
- ```yml
- kind: ConfigMap
- apiVersion: v1
- data:
- schema-version:
- # string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be
- # rejected by the agent.
- v1
- config-version:
- # string.used by customer to keep track of this config file's version in their source control/repository (max
- # allowed 10 chars, other chars will be truncated)
- ver1
- log-data-collection-settings: |-
- # Log data collection settings
- # Any errors related to config map settings can be found in the KubeMonAgentEvents table in the Log Analytics
- # workspace that the cluster is sending data to.
-
- [log_collection_settings]
- [log_collection_settings.stdout]
- # In the absense of this configmap, default value for enabled is true
- enabled = false
- # exclude_namespaces setting holds good only if enabled is set to true.
- # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stdout'
- # setting. If you want to enable kube-system, remove it from the following setting.
- # If you want to continue to disable kube-system log collection keep this namespace in the following setting
- # and add any other namespace you want to disable log collection to the array.
- # In the absense of this configmap, default value for exclude_namespaces = ["kube-system"].
- exclude_namespaces = ["kube-system"]
-
- [log_collection_settings.stderr]
- # Default value for enabled is true
- enabled = false
- # exclude_namespaces setting holds good only if enabled is set to true.
- # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stderr'
- # setting. If you want to enable kube-system, remove it from the following setting.
- # If you want to continue to disable kube-system log collection keep this namespace in the following setting
- # and add any other namespace you want to disable log collection to the array.
- # In the absense of this cofigmap, default value for exclude_namespaces = ["kube-system"].
- exclude_namespaces = ["kube-system"]
-
- [log_collection_settings.env_var]
- # In the absense of this configmap, default value for enabled is true
- enabled = false
-
- [log_collection_settings.enrich_container_logs]
- # In the absense of this configmap, default value for enrich_container_logs is false.
- # When this is enabled (enabled = true), every container log entry (both stdout & stderr)
- # will be enriched with container Name & container Image.
- enabled = false
-
- [log_collection_settings.collect_all_kube_events]
- # In the absense of this configmap, default value for collect_all_kube_events is false.
- # When the setting is set to false, only the kube events with !normal event type will be collected.
- # When this is enabled (enabled = true), all kube events including normal events will be collected.
- enabled = false
-
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- # Cluster level scrape endpoint(s). These metrics will be scraped from agent's Replicaset (singleton)
- # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics
- # workspace that the cluster is sending data to.
-
- # Interval specifying how often to scrape for metrics. This is duration of time and can be specified for
- # supporting settings by combining an integer value and time unit as a string value. Valid time units are ns,
- # us (or ┬╡s), ms, s, m, h.
- interval = "1m"
-
- ## Uncomment the following settings with valid string arrays for prometheus scraping
- fieldpass = ["subscribers_count", "amf_registered_subscribers", "amf_registered_subscribers_connected", "amf_connected_gnb", "subgraph_counts", "cppe_bytes_total", "amfcc_mm_initial_registration_failure", "amfcc_n1_auth_failure", "amfcc_n1_auth_reject", "amfn2_n2_pdu_session_resource_setup_request", "amfn2_n2_pdu_session_resource_setup_response", "amfn2_n2_pdu_session_resource_modify_request", "amfn2_n2_pdu_session_resource_modify_response", "amfn2_n2_pdu_session_resource_release_command", "amfn2_n2_pdu_session_resource_release_response", "amfcc_n1_service_reject", "amfn2_n2_pathswitch_request_failure", "amfn2_n2_handover_failure"]
-
- #fielddrop = ["metric_to_drop"]
-
- # An array of urls to scrape metrics from.
- # urls = ["http://myurl:9101/metrics"]
-
- # An array of Kubernetes services to scrape metrics from.
- # kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
-
- # When monitor_kubernetes_pods = true, replicaset will scrape Kubernetes pods for the following prometheus
- # annotations:
- # - prometheus.io/scrape: Enable scraping for this pod
- # - prometheus.io/scheme: If the metrics endpoint is secured then you will need to
- # set this to `https` & most likely set the tls config.
- # - prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.
- # - prometheus.io/port: If port is not 9102 use this annotation
- monitor_kubernetes_pods = true
-
- ## Restricts Kubernetes monitoring to namespaces for pods that have annotations set and are scraped using the
- ## monitor_kubernetes_pods setting.
- ## This will take effect when monitor_kubernetes_pods is set to true
- ## ex: monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]
- # monitor_kubernetes_pods_namespaces = ["default1"]
-
- [prometheus_data_collection_settings.node]
- # Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in
- # the cluster
- # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics
- # workspace that the cluster is sending data to.
-
- # Interval specifying how often to scrape for metrics. This is duration of time and can be specified for
- # supporting settings by combining an integer value and time unit as a string value. Valid time units are ns,
- # us (or ┬╡s), ms, s, m, h.
- interval = "1m"
-
- ## Uncomment the following settings with valid string arrays for prometheus scraping
-
- # An array of urls to scrape metrics from. $NODE_IP (all upper case) will substitute of running Node's IP
- # address
- # urls = ["http://$NODE_IP:9103/metrics"]
-
- #fieldpass = ["metric_to_pass1", "metric_to_pass12"]
-
- #fielddrop = ["metric_to_drop"]
-
- metric_collection_settings: |-
- # Metrics collection settings for metrics sent to Log Analytics and MDM
- [metric_collection_settings.collect_kube_system_pv_metrics]
- # In the absense of this configmap, default value for collect_kube_system_pv_metrics is false
- # When the setting is set to false, only the persistent volume metrics outside the kube-system namespace will be
- # collected
- enabled = false
- # When this is enabled (enabled = true), persistent volume metrics including those in the kube-system namespace
- # will be collected
-
- alertable-metrics-configuration-settings: |-
- # Alertable metrics configuration settings for container resource utilization
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # The threshold(Type Float) will be rounded off to 2 decimal points
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the
- # following percentage
- container_cpu_threshold_percentage = 95.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the
- # following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes
- # equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
-
- # Alertable metrics configuration settings for persistent volume utilization
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization
- # exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 60.0
- integrations: |-
- [integrations.azure_network_policy_manager]
- collect_basic_metrics = false
- collect_advanced_metrics = false
- metadata:
- name: container-azm-ms-agentconfig
- namespace: kube-system
- ```
-1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, navigate to the folder containing the *99-azure-monitoring-configmap.yml* file and run the following command.
-
- `kubectl apply -f 99-azure-monitoring-configmap.yml`
-
- The command will return quickly with a message that's similar to the following: `configmap "container-azm-ms-agentconfig" created`. However, the configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time.
-
-## Run a query
-
-In this step, you'll run a query in the Log Analytics workspace to confirm that you can retrieve logs for the packet core instance.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for and select the Log Analytics workspace you used when creating the Azure Monitor extension in [Create an Azure Monitor extension](#create-an-azure-monitor-extension).
-1. Select **Logs** from the resource menu.
- :::image type="content" source="media/log-analytics-workspace.png" alt-text="Screenshot of the Azure portal showing a Log Analytics workspace resource. The Logs option is highlighted.":::
-1. If it appears, select **X** to dismiss the **Queries** window.
-1. Select **Select scope**.
-
- :::image type="content" source="media/enable-log-analytics-for-private-5g-core/select-scope.png" alt-text="Screenshot of the Log Analytics interface. The Select scope option is highlighted.":::
-
-1. Under **Select a scope**, deselect the Log Analytics workspace.
-1. Search for and select the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster.
-1. Select **Apply**.
-
- :::image type="content" source="media/enable-log-analytics-for-private-5g-core/select-kubernetes-cluster-scope.png" alt-text="Screenshot of the Azure portal showing the Select a scope screen. The search bar, Kubernetes - Azure Arc resource and Apply option are highlighted.":::
-
-1. Copy and paste the following query into the query window, and then select **Run**.
-
- ```kusto
- InsightsMetrics
- | where Namespace == "prometheus"
- | where Name == "amf_connected_gnb"
- | extend Time=TimeGenerated
- | extend GnBs=Val
- | project GnBs, Time
- ```
-
- :::image type="content" source="media/enable-log-analytics-for-private-5g-core/run-query.png" alt-text="Screenshot of the Log Analytics interface. The Run option is highlighted." lightbox="media/enable-log-analytics-for-private-5g-core/run-query.png":::
-
-1. Verify that the results window displays the results of the query, showing how many gNodeBs have been connected to the packet core instance in the last 24 hours.
-
- :::image type="content" source="media/enable-log-analytics-for-private-5g-core/query-results.png" alt-text="Screenshot of the results window displaying results from a query.":::
-
-## Next steps
--- [Learn more about monitoring Azure Private 5G Core using Log Analytics](monitor-private-5g-core-with-log-analytics.md)-- [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md)-- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You must already have an AP5GC site deployed to collect diagnostics.
To continue to monitor your 5G core: -- [Enable log analytics](enable-log-analytics-for-private-5g-core.md)-- [Monitor log analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Monitor Azure Private 5G Core with Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
+- [Monitor Azure Private 5G Core with packet core dashboards](packet-core-dashboards.md)
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
If you made changes that triggered a packet core reinstall, reconfigure your dep
## Next steps - If you made a configuration change that requires you to manually perform packet core reinstall, follow [Reinstall the packet core instance in a site](reinstall-packet-core.md).-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify it.
+- Use [Azure Monitor](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify it.
private-5g-core Modify Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-service-plan.md
To modify your service plan:
## Next steps
-Use [Azure Monitor](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify the service plan.
+Use [Azure Monitor](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after you modify the service plan.
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
- Title: Monitor Azure Private 5G Core with Log Analytics
-description: Information on using Log Analytics to monitor and analyze activity in your private mobile network.
---- Previously updated : 03/08/2022---
-# Monitor Azure Private 5G Core with Log Analytics
-
-> [!IMPORTANT]
-> Monitoring Azure Private 5G Core using Log Analytics will soon become unsupported. If you're considering integrating Log Analytics into your deployment, we recommend contacting your support representative to discuss options to suit your cloud monitoring needs.
-
-Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network.
-
-> [!IMPORTANT]
-> Log Analytics currently can only be used to monitor private mobile networks that support 5G UEs. You can still monitor private mobile networks supporting 4G UEs from the local network using the [packet core dashboards](packet-core-dashboards.md).
-
-## Enable Log Analytics
-
-You'll need to carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md) before you can use Log Analytics with Azure Private 5G Core.
-
-> [!IMPORTANT]
-> Log Analytics is part of Azure Monitor and is chargeable. [Estimate costs](#estimate-costs) provides information on estimating the cost of using Log Analytics to monitor your private mobile network. You shouldn't enable Log Analytics if you don't want to incur any costs. If you don't enable Log Analytics, you can still monitor your packet core instances from the local network using the [packet core dashboards](packet-core-dashboards.md).
-
-## Access Log Analytics for a packet core instance
-
-Once you've enabled Log Analytics, you can begin working with it in the Azure portal. Navigate to the Log Analytics workspace you assigned to the Kubernetes cluster on which a packet core instance is running. Select **Logs** from the left hand menu.
--
-You'll then be shown the Log Analytics tool where you can enter your queries.
--
-For detailed information on using the Log Analytics tool, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
-
-## Construct queries
-
-You can find a tutorial for writing queries using the Log Analytics tool at [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md). Each packet core instance streams the following logs to the Log Analytics tool. You can use these logs to construct queries that will allow you to monitor your private mobile network. You'll need to run all queries at the scope of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
-
-| Log name | Description |
-|--|--|
-| subscribers_count | Number of successfully provisioned SIMs. |
-| amf_registered_subscribers | Number of currently registered SIMs. |
-| amf_connected_gnb | Number of gNodeBs that are currently connected to the Access and Mobility Management Function (AMF). |
-| subgraph_counts | Number of active PDU sessions being handled by the User Plane Function (UPF). |
-| cppe_bytes_total | Total number of bytes received or transmitted by the UPF at each interface since the UPF last restarted. The value is given as a 64-bit unsigned integer. |
-| amfcc_mm_initial_registration_failure | Total number of failed initial registration attempts handled by the AMF. |
-| amfcc_n1_auth_failure | Counter of Authentication Failure Non-Access Stratum (NAS) messages. The Authentication Failure NAS message is sent by the user equipment (UE) to the AMF to indicate that authentication of the network has failed. |
-| amfcc_n1_auth_reject | Counter of Authentication Reject NAS messages. The Authentication Reject NAS message is sent by the AMF to the UE to indicate that the authentication procedure has failed and that the UE shall abort all activities. |
-| amfn2_n2_pdu_session_resource_setupΓÇï_request | Total number of PDU SESSION RESOURCE SETUP REQUEST Next Generation Application Protocol (NGAP) messages received by the AMF. |
-| amfn2_n2_pdu_session_resource_setupΓÇï_response | Total number of PDU SESSION RESOURCE SETUP RESPONSE NGAP messages received by the AMF. |
-| amfn2_n2_pdu_session_resource_modifyΓÇï_request | Total number of PDU SESSION RESOURCE MODIFY REQUEST NGAP messages received by the AMF. |
-| amfn2_n2_pdu_session_resource_modifyΓÇï_response | Total number of PDU SESSION RESOURCE MODIFY RESPONSE NGAP messages received by the AMF. |
-| amfn2_n2_pdu_session_resource_releaseΓÇï_command | Total number of PDU SESSION RESOURCE RELEASE COMMAND NGAP messages received by the AMF. |
-| amfn2_n2_pdu_session_resource_releaseΓÇï_response | Total number of PDU SESSION RESOURCE RELEASE RESPONSE NGAP messages received by the AMF. |
-| amfcc_n1_service_reject | Total number of Service reject NAS messages received by the AMF. |
-| amfn2_n2_pathswitch_request_failure | Total number of PATH SWITCH REQUEST FAILURE NGAP messages received by the AMF. |
-| amfn2_n2_handover_failure | Total number of HANDOVER FAILURE NGAP messages received by the AMF. |
-
-
-## Example queries
-
-The following are some example queries you can run to retrieve logs relating to Key Performance Indicators (KPIs) for your private mobile network. You should run all of these queries at the scope of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
-
-### PDU sessions
-
-```Kusto
-InsightsMetrics
- | where Namespace == "prometheus"
- | where Name == "subgraph_counts"
- | summarize PduSessions=max(Val) by Time=TimeGenerated
-```
-
-### Registered UEs
-
-```Kusto
-let Time = InsightsMetrics
- | where Namespace == "prometheus"
- | summarize by Time=TimeGenerated;
-let RegisteredDevices = InsightsMetrics
- | where Namespace == "prometheus"ΓÇ»
- | where Name == "amf_registered_subscribers"
- | summarize by RegisteredDevices=Val, Time=TimeGenerated;
-Time
- | join kind=leftouter (RegisteredDevices) on Time
- | project Time, RegisteredDevices
-```
-
-### Connected gNodeBs
-
-```kusto
-InsightsMetrics
- | where Namespace == "prometheus"
- | where Name == "amf_connected_gnb"
- | extend Time=TimeGenerated
- | extend GnBs=Val
- | project GnBs, Time
-```
-
-## Log Analytics dashboards
-
-Log Analytics dashboards can visualize all of your saved log queries, giving you the ability to find, correlate, and share data about your private mobile network.
-
-You can find information on how to create a Log Analytics dashboard in [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md).
-
-You can also follow the steps in [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md) to create an example overview dashboard. This dashboard includes charts to monitor important Key Performance Indicators (KPIs) for your private mobile network's operation, including throughput and the number of connected devices.
-
-## Estimate costs
-
-Log Analytics will ingest an average of 1.4 GB of data a day from each packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
-
-## Next steps
-- [Enable Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)-- [Create an overview Log Analytics dashboard using an ARM template](create-overview-dashboard.md)-- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core Monitor Private 5G Core With Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-platform-metrics.md
+
+ Title: Monitor Azure Private 5G Core with Azure Monitor platform metrics
+description: Information on using Azure Monitor platform metrics to monitor activity and analyze statistics in your private mobile network.
++++ Last updated : 11/22/2022+++
+# Monitor Azure Private 5G Core with Azure Monitor platform metrics
+
+*Platform metrics* are measurements over time collected from Azure resources and stored by [Azure Monitor Metrics](/azure/azure-monitor/essentials/data-platform-metrics). You can use the Azure Monitor Metrics Explorer to analyze metrics in the Azure portal, or query the Azure Monitor REST API for metrics to analyze with third-party monitoring tools.
+
+Azure Private 5G Core (AP5GC) platform metrics are collected per site and allow you to monitor key statistics relating to your deployment. See [Supported metrics with Azure Monitor](/azure/azure-monitor/essentials/metrics-supported#microsoftkubernetesconfigurationextensions) for the available AP5GC metrics. AP5GC metrics are included under *microsoft.kubernetesconfiguration/extensions*.
+
+Once you create a **Mobile Network Site** resource, Azure Monitor automatically starts gathering metrics about the packet core instance. For more information on creating a mobile network site, see [Collect the required information for a site](collect-required-information-for-a-site.md).
+
+Platform metrics are available for monitoring and retrieval for up to 92 days. If you want to store your metric data for longer, you can export it using the Azure Monitor REST API. Once exported, metrics can be saved to a storage account that allows longer data retention. See [Azure Storage](/azure/storage/) for some examples of storage accounts you can use.
+
+If you want to use the Azure portal to analyze your packet core metrics, see [Visualize metrics using the Azure portal](#visualize-metrics-using-the-azure-portal).
+
+If you want to export metrics for analysis using your tool of choice or for longer storage periods, see [Export metrics using the Azure Monitor REST API](#export-metrics-using-the-azure-monitor-rest-api).
+
+## Visualize metrics using the Azure portal
+
+You can use the Azure portal to monitor your deployment's health and performance on the **Packet Core Control Plane** resource's **Overview** page. This displays data captured from both the control plane and data plane:
+
+- The control plane generates metrics relating to access, mobility and session management, such as registration and session establishment successes and failures.
+- The data plane generates traffic metrics, such as throughput and packet drops.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Search for and select the **Packet Core Control Plane** resource for the site you're interested in monitoring:
+ 1. Select **All resources**.
+ 1. Enter *packet core control plane* into the filter text box.
+ 1. Select the **Packet Core Control Plane** resource.
+
+ :::image type="content" source="media/packet-core-control-plane-filter.png" alt-text="Screenshot of the Azure portal showing the All resources page filtered to show Packet Core Control Plane resources only.":::
+
+1. Select the **Monitoring** tab.
+
+ :::image type="content" source="media/platform-metrics-dashboard.png" alt-text="Screenshot of the Azure portal showing the Packet Core Control Plane resource's Monitoring tab." lightbox="media/platform-metrics-dashboard.png":::
+
+You should now see the Azure Monitor dashboard displaying important key performance indicators (KPIs), including the number of connected devices and session establishment failures.
+
+You can select individual dashboard panes to open an expanded view where you can specify details such as the graph's time range and time granularity. You can also create additional dashboards using the platform metrics available. For detailed information on interacting with the Azure Monitor graphics, see [Get started with metrics explorer](/azure/azure-monitor/essentials/metrics-getting-started).
+
+> [!TIP]
+> You can also find the **Packet Core Control Plane** resource under **Network functions** on the **Site** resource.
+
+## Export metrics using the Azure Monitor REST API
+
+In addition to the monitoring functionalities offered by the Azure portal, you can export Azure Private 5G Core metrics for analysis with other tools using the [Azure Monitor REST API](/rest/api/monitor/). Once this data is retrieved, you may want to save it in a separate data store that allows longer data retention, or use your tools of choice to monitor and analyze your deployment.
+
+For example, you can export the platform metrics to data storage and processing services such as [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview), [Azure Storage](/azure/storage/), or [Azure Event Hubs](/azure/event-hubs/). You can also use [Azure Managed Grafana](/azure/managed-grafana/) to visualize the exported metrics.
+
+> [!NOTE]
+> Exporting metrics to another application for analysis or storage may incur extra costs. Check the pricing information for the applications you want to use.
+
+See [Supported metrics with Azure Monitor](/azure/azure-monitor/essentials/metrics-supported#microsoftkubernetesconfigurationextensions) for the AP5GC metrics available for retrieval. AP5GC metrics are included under *microsoft.kubernetesconfiguration/extensions*. You can find more information on using the Azure Monitor REST API to construct queries and retrieve metrics at [Azure monitoring REST API walkthrough](/azure/azure-monitor/essentials/rest-api-walkthrough).
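+
+As a minimal sketch, assuming a placeholder resource ID and metric name (substitute values from your own deployment), you can call the metrics endpoint of the REST API with `az rest`:
+
+```azurecli
+# Placeholder values; replace with your Packet Core Control Plane resource ID and a supported metric name.
+RESOURCE_ID="<packet core control plane resource ID>"
+az rest --method get \
+ --url "https://management.azure.com${RESOURCE_ID}/providers/microsoft.insights/metrics?metricnames=<metricName>&api-version=2018-01-01"
+```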
+
+## Next steps
+
+- [Learn more about the Azure Monitor Metrics](/azure/azure-monitor/essentials/data-platform-metrics)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core provides:
- **Azure visibility**
- Azure Private 5G Core integrates with Azure Monitor and Log Analytics to collect data from across the sites and provide real-time monitoring of the entire private mobile network. You can extend this capability to capture radio analytics to provide a complete network view from Azure.
+ Azure Private 5G Core integrates with Azure Monitor to collect data from across the sites and provide real-time monitoring of the entire private mobile network. You can extend this capability to capture radio analytics to provide a complete network view from Azure.
You'll also need the following to deploy a private mobile network using Azure Private 5G Core. These aren't included as part of the service.
Azure Private 5G Core is available as a native Azure service, offering the same
- Deploy and configure a packet core instance on your Azure Stack Edge device in minutes. - Create a virtual representation of your physical mobile network through Azure using mobile network and site resources. - Provision SIM resources to authenticate devices in the network, while also supporting redundancy.-- Employ Log Analytics and other observability services to view the health of your network and take corrective action through Azure.
+- Employ Azure Monitor and other observability services to view the health of your network and take corrective action through Azure.
- Use Azure role-based access control (RBAC) to allow granular access to the private mobile network to different personnel or teams within your organization, or even a managed service provider. - Use an Azure Stack Edge device's compute capabilities to run applications that can benefit from low-latency networks. - Seamlessly connect your existing Azure deployments to your new private mobile network using Azure hybrid compute, networking, and IoT services.
Azure Private 5G Core is available as a native Azure service, offering the same
## Azure centralized monitoring
-Azure Private 5G Core is integrated with Azure Monitor. You can write queries to retrieve records or visualize data in charts. This lets you monitor and analyze activity in your private mobile network directly from the Azure portal.
+Azure Private 5G Core is integrated with Azure Monitor Metrics Explorer, allowing you to monitor and analyze activity in your private mobile network directly from the Azure portal. You can write queries to retrieve records or visualize data in dashboards.
+
+For more information on using Azure Monitor to analyze metrics in your deployment, see [Monitor Azure Private 5G Core with Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md).
## Next steps
private-5g-core Region Move Private Mobile Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-move-private-mobile-network-resources.md
Configure your deployment in the new region using the information you gathered i
## Verify
-Use [Azure Monitor](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your deployment is operating normally after the region move.
+Use [Azure Monitor](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your deployment is operating normally after the region move.
## Next steps
private-5g-core Reinstall Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reinstall-packet-core.md
Reconfigure your deployment using the information you gathered in [Back up deplo
## Verify reinstall 1. Navigate to the **Packet Core Control Plane** resource and check that the **Packet core installation state** field contains **Installed**, as described in [View the packet core instance's installation status](#view-the-packet-core-instances-installation-status).
-1. Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after the reinstall.
+1. Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally after the reinstall.
## Next steps You've finished reinstalling your packet core instance. You can now use Azure Monitor or the packet core dashboards to monitor your deployment. -- [Monitor Azure Private 5G Core with Log Analytics](monitor-private-5g-core-with-log-analytics.md)-- [Packet core dashboards](packet-core-dashboards.md)
+- [Monitor Azure Private 5G Core with Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
+- [Monitor Azure Private 5G Core with packet core dashboards](packet-core-dashboards.md)
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
If your environment meets the prerequisites, you're familiar with using ARM temp
## Prerequisites -- You must have a running packet core. Use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally.
+- You must have a running packet core. Use Azure Monitor platform metrics or the packet core dashboards to confirm your packet core instance is operating normally.
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - Identify the name of the site that hosts the packet core instance you want to upgrade. - If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
Reconfigure your deployment using the information you gathered in [Back up deplo
Once the upgrade completes, check if your deployment is operating normally.
-1. Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally.
+1. Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally.
1. Execute the testing plan you prepared in [Plan for your upgrade](#plan-for-your-upgrade). ## Rollback
If any of the configuration you set while your packet core instance was running
You've finished upgrading your packet core instance. - If your deployment contains multiple sites, upgrade the packet core instance in another site.-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
+- Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
If your deployment contains multiple sites, we recommend upgrading the packet co
## Prerequisites -- You must have a running packet core. Use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally.
+- You must have a running packet core. Use Azure Monitor platform metrics or the packet core dashboards to confirm your packet core instance is operating normally.
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
Reconfigure your deployment using the information you gathered in [Back up deplo
Once the upgrade completes, check if your deployment is operating normally. 1. Navigate to the **Packet Core Control Plane** resource as described in [View the current packet core version](#view-the-current-packet-core-version). Check the **Version** field under the **Configuration** heading to confirm that it displays the new software version.
-1. Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally.
+1. Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your packet core instance is operating normally.
1. Execute the testing plan you prepared in [Plan for your upgrade](#plan-for-your-upgrade). ## Rollback
If any of the configuration you set while your packet core instance was running
You've finished upgrading your packet core instance. - If your deployment contains multiple sites, upgrade the packet core instance in another site.-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
+- Use [Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 03/01/2023 Last updated : 04/20/2023 # Create and manage a self-hosted integration runtime
There are two supported configuration options by Microsoft Purview:
- **Use custom proxy**: Configure the HTTP proxy setting to use for the self-hosted integration runtime, instead of using configurations in diahost.exe.config and diawp.exe.config. **Address** and **Port** values are required. **User Name** and **Password** values are optional, depending on your proxy's authentication setting. All settings are encrypted with Windows DPAPI on the self-hosted integration runtime and stored locally on the machine. > [!NOTE]
-> Proxy is supported when scanning Azure data sources and SQL Server; scanning other sources doesn't support proxy.
+> Connecting to data sources through a proxy is supported only for Azure data sources and Power BI; other connectors don't support a proxy.
The integration runtime host service restarts automatically after you save the updated proxy settings.
purview Register Scan Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-databricks.md
Previously updated : 02/16/2023 Last updated : 04/20/2023
This connector brings metadata from Databricks metastore. Comparing to scan via
- The Databricks workspace info is captured. - The relationship between tables and storage assets is captured.
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Use the following steps to scan Azure Databricks to automatically identify asset
`/mnt/ADLS2=abfss://samplelocation1@azurestorage1.dfs.core.windows.net/;/mnt/Blob=wasbs://samplelocation2@azurestorage2.blob.core.windows.net`
- 1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of Hive Metastore database to be scanned.
+ 1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value depends on the size of the Azure Databricks workspace to be scanned.
+
+ > [!Note]
+ > As a rule of thumb, allow 1 GB of memory for every 1,000 tables. For example, a workspace with about 5,000 tables needs roughly 5 GB of memory available for the scan.
:::image type="content" source="media/register-scan-azure-databricks/scan.png" alt-text="Screenshot of setting up Azure Databricks scan." border="true":::
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
Previously updated : 05/04/2022 Last updated : 04/20/2023
When scanning Cassandra source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire Cassandra instance, or scope the scan to a subset of keyspaces matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning IBM Db2 source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire Db2 database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning erwin Mart source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire erwin Mart server, or scope the scan to a list of models matching the given name(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning Google BigQuery source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire Google BigQuery project, or scope the scan to a subset of datasets matching the given name(s) or name pattern(s).
->[!NOTE]
-> Currently, Microsoft Purview only supports scanning Google BigQuery datasets in US multi-regional location. If the specified dataset is in other location e.g. us-east1 or EU, you will observe scan completes but no assets shown up in Microsoft Purview.
+### Known limitations
+
+- Currently, Microsoft Purview supports scanning only Google BigQuery datasets in the US multi-regional location. If the specified dataset is in another location, for example us-east1 or EU, the scan completes but no assets show up in Microsoft Purview.
+- When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
## Prerequisites
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
Previously updated : 05/04/2022 Last updated : 04/20/2023
When scanning Hive metastore source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire Hive metastore database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Use the following steps to scan Hive Metastore databases to automatically identi
1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of Hive Metastore database to be scanned.
+ > [!Note]
+ > As a rule of thumb, allow 1 GB of memory for every 1,000 tables.
+ :::image type="content" source="media/register-scan-hive-metastore-source/scan.png" alt-text="Screenshot that shows boxes for scan details." border="true"::: 1. Select **Continue**.
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
Previously updated : 05/04/2022 Last updated : 04/20/2023
When scanning Looker source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire Looker server, or scope the scan to a subset of Looker projects matching the given name(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
During scan, Microsoft Purview retrieves and analyzes sample documents to infer
When setting up scan, you can choose to scan one or more MongoDB database(s) entirely, or further scope the scan to a subset of collections matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning MySQL source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire MySQL server, or scope the scan to a subset of databases matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Previously updated : 03/15/2023 Last updated : 04/20/2023
This article outlines how to register Oracle, and how to authenticate and intera
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
-The supported Oracle server versions are 6i to 19c. Proxy server isn't supported when scanning Oracle source.
+The supported Oracle server versions are 6i to 19c. Oracle proxy server isn't supported when scanning Oracle source.
When scanning Oracle source, Microsoft Purview supports:
When scanning Oracle source, Microsoft Purview supports:
- Synonyms - Types including the type attributes -- Fetching static lineage on assets relationships among tables, views and stored procedures. Stored procedure lineage is supported for static SQL returning result set.
+- Fetching static lineage on assets relationships among tables and views.
When setting up scan, you can choose to scan an entire Oracle server, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
-Currently, the Oracle service name isn't captured in the metadata or hierarchy.
+### Known limitations
+
+- Currently, the Oracle service name isn't captured in the metadata or hierarchy.
+- When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
## Prerequisites
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning PostgreSQL source, Microsoft Purview supports:
When setting up scan, you can choose to scan an entire PostgreSQL database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning Salesforce source, Microsoft Purview supports extracting technical
When setting up scan, you can choose to scan an entire Salesforce organization, or scope the scan to a subset of objects matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
Previously updated : 11/01/2022 Last updated : 04/20/2023
When scanning SAP BW source, Microsoft Purview supports extracting technical met
- Dimension - Time dimension
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning SAP HANA source, Microsoft Purview supports extracting technical m
- Databases - Schemas - Tables including the columns, foreign keys, indexes, and unique constraints-- Views including the columns
+- Views including the columns. Note that SAP HANA calculation views aren't currently supported.
- Stored procedures including the parameter dataset and result set - Functions including the parameter dataset - Sequences
When scanning SAP HANA source, Microsoft Purview supports extracting technical m
When setting up scan, you can choose to scan an entire SAP HANA database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
GRANT SELECT ON SCHEMA _SYS_BIC TO <user>;
## Register
-This section describes how to register a SAP HANA in Microsoft Purview by using [the Microsoft Purview governance portal](https://web.purview.azure.com/).
+This section describes how to register an SAP HANA source in Microsoft Purview by using [the Microsoft Purview governance portal](https://web.purview.azure.com/).
1. Open the Microsoft Purview governance portal by: - Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account.
This section describes how to register a SAP HANA in Microsoft Purview by using
1. For **Name**, enter a name that Microsoft Purview will list as the data source.
- 1. For **Server**, enter the host name or IP address used to connect to a SAP HANA source. For example, `MyDatabaseServer.com` or `192.169.1.2`.
+ 1. For **Server**, enter the host name or IP address used to connect to an SAP HANA source. For example, `MyDatabaseServer.com` or `192.169.1.2`.
1. For **Port**, enter the port number used to connect to the database server (39013 by default for SAP HANA).
Use the following steps to scan SAP HANA databases to automatically identify ass
### Authentication for a scan
-The supported authentication type for a SAP HANA source is **Basic authentication**.
+The supported authentication type for an SAP HANA source is **Basic authentication**.
### Create and run scan
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
Previously updated : 11/01/2022 Last updated : 04/20/2023
When scanning SAP ECC source, Microsoft Purview supports:
- Fetching static lineage on assets relationships among tables and views.
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
Previously updated : 11/01/2022 Last updated : 04/20/2023
When scanning SAP S/4HANA source, Microsoft Purview supports:
- Fetching static lineage on assets relationships among tables and views.
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
Previously updated : 10/21/2022 Last updated : 04/20/2023
When scanning Snowflake source, Microsoft Purview supports:
When setting up scan, you can choose to scan one or more Snowflake database(s) entirely, or further scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Previously updated : 03/31/2023 Last updated : 04/20/2023
When scanning Teradata source, Microsoft Purview supports:
- Stored procedures including the parameter dataset and result set - Functions including the parameter dataset -- Fetching static lineage on assets relationships among tables, views and stored procedures.
+- Fetching static lineage on assets relationships among tables and views.
When setting up scan, you can choose to scan an entire Teradata server, or scope the scan to a subset of databases matching the given name(s) or name pattern(s).
+### Known limitations
+
+When an object is deleted from the data source, the subsequent scan currently doesn't automatically remove the corresponding asset in Microsoft Purview.
+ ### Required permissions for scan Microsoft Purview supports basic authentication (username and password) for scanning Teradata. The user should have SELECT permission granted for every individual system table listed below:
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Previously updated : 03/07/2023 Last updated : 04/10/2023 #Customer intent: As a data steward or catalog administrator, I need to understand what's supported under classifications.
Microsoft Purview classifies data by using [RegEx](https://wikipedia.org/wiki/Re
## Bloom Filter based classifications
-### City, Country, and Place
+### World Cities, Country
-The City, Country, and Place filters have been prepared using best datasets available for preparing the data.
+The City and Country classifiers identify data based on full names as well as short codes.
+
+#### Keywords
+
+##### Keywords for City
+- burg
+- city
+- cities
+- city names
+- cosmopolis
+- metropolis
+- municipality
+- place
+- town
+
+##### Keywords for Country
+- country
+- countries
+- country names
+- nation
+- nationality
+
+-
## Machine Learning based classifications
The City, Country, and Place filters have been prepared using best datasets avai
### Person's Name
-Person Name machine learning model has been trained using global datasets of names in English language.
+The Person's Name machine learning model has been trained using global datasets of names in the English language. Microsoft Purview classifies full names stored in the same column as well as first and last names stored in separate columns.
-> [!NOTE]
-> Microsoft Purview classifies full names stored in the same column as well as first/last names in separate columns.
+-
### Person's Address Person's address classification is used to detect full address stored in a single column containing the following elements: House number, Street Name, City, State, Country, Zip Code. Person's Address classifier uses machine learning model that is trained on the global addresses data set in English language.
Currently the address model supports the following formats in the same column:
- street, city, pincode or zipcode - landmark, city
+-
+ ### Person's Gender Person's Gender machine learning model has been trained using US Census data and other public data sources in English language. It supports classifying 50+ genders out of the box.
Person's Gender machine learning model has been trained using US Census data and
- gender - orientation
+-
### Person's Age Person's Age machine learning model detects age of an individual specified in various different formats. The qualifiers for days, months, and years must be in English language.
Person's Age machine learning model detects age of an individual specified in va
- {%y}.{%m} - {%y}.{%yd}
+-
+ ## RegEx Classifications ### ABA routing number
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 04/05/2023 Last updated : 04/19/2023
When you try to assign a role, you get the following error message:
Azure supports up to **4000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope.
-> [!NOTE]
-> For specialized clouds, such as Azure Government and Azure China 21Vianet, the limit is **2000** role assignments per subscription.
- **Solution** Try to reduce the number of role assignments in the subscription. Here are some ways that you can reduce the number of role assignments:
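Before removing anything, it can help to see roughly how many role assignments the subscription already has. Here's a minimal Azure PowerShell sketch, assuming the Az module and an active sign-in; the count can include assignments inherited from above the subscription, so treat it as an approximation:

```azurepowershell
# Sketch: approximate count of role assignments visible in the current subscription.
Set-AzContext -Subscription "<subscription-id>"
(Get-AzRoleAssignment).Count
```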
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
RouteServerIps : {10.5.10.4, 10.5.10.5} "virtualRouterAsn": 65515,
## Configure route exchange
-If you have an ExpressRoute and an Azure VPN gateway in the same virtual network and you want them to exchange routes, you can enable route exchange on the Azure Route Server.
+If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual network, you can enable *b2b traffic* to exchange routes between the gateway and the Route Server.
-> [!IMPORTANT]
-> For greenfield deployments make sure to create the Azure VPN gateway before creating Azure Route Server; otherwise the deployment of Azure VPN Gateway will fail.
->
1. To enable route exchange between Azure Route Server and the gateway(s), use [az network routeserver update](/cli/azure/network/routeserver#az-network-routeserver-update) with the `--allow-b2b-traffic` flag set to **true**:
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
You'll need the Azure Route Server's peer IPs and ASN to complete the configurat
## Configure route exchange
-If you have an ExpressRoute gateway and/or VPN gateway and you want them to exchange routes with the Route Server, you can enable route exchange.
+If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual network, you can enable *branch-to-branch* traffic to exchange routes between the gateway and the Route Server.
+ 1. Go to [Route Server](./overview.md) in the Azure portal and select the Route Server you want to configure.
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
RouteServerIps : {10.5.10.4, 10.5.10.5}
## <a name = "route-exchange"></a>Configure route exchange
-If you have an ExpressRoute and an Azure VPN gateway in the same virtual network and you want them to exchange routes, you can enable route exchange on the Azure Route Server.
+If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual network, you can enable *BranchToBranchTraffic* to exchange routes between the gateway and the Route Server.
-> [!IMPORTANT]
-> Azure VPN gateway must be configured in **active-active** mode and have the ASN set to 65515.
1. To enable route exchange between Azure Route Server and the gateway(s), use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) with the *-AllowBranchToBranchTraffic* flag:
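   For example, a minimal sketch of that call, with placeholder resource names:

```azurepowershell
# Sketch: enable route exchange between the virtual network gateway and the Route Server.
Update-AzRouteServer -ResourceGroupName "myResourceGroup" `
    -RouteServerName "myRouteServer" `
    -AllowBranchToBranchTraffic
```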
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-template.md
Title: 'Quickstart: Create an Azure Route Server - ARM template'
-description: This quickstart shows you how to create an Azure Route Server using Azure Resource Manager template (ARM template).
+description: In this quickstart, you learn how to create an Azure Route Server using Azure Resource Manager template (ARM template).
Previously updated : 04/05/2021 Last updated : 04/18/2023 -+ # Quickstart: Create an Azure Route Server using an ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Route Server into a new or existing virtual network.
+This quickstart helps you learn how to use an Azure Resource Manager template (ARM template) to deploy an Azure Route Server into a new or existing virtual network.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button to open the template in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Froute-server%2Fazuredeploy.json) ## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Review the [service limits for Azure Route Server](route-server-faq.md#limitations).
## Review the template The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/route-server).
-In this quickstart, you'll deploy an Azure Route Server into a new or existing virtual network. A dedicated subnet named `RouteServerSubnet` will be created to host the Route Server. The Route Server will also be configured with the Peer ASN and Peer IP to establish a BGP peering.
+Using this template, you deploy an Azure Route Server into a new or existing virtual network. A dedicated subnet named `RouteServerSubnet` is created to host the Route Server. The Route Server is also configured with the Peer ASN and Peer IP to establish a BGP peering.
Multiple Azure resources have been defined in the template:
-* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks)
-* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualNetworks/subnets) (two subnets, one named `routeserversubnet`)
-* [**Microsoft.Network/virtualHubs**](/azure/templates/microsoft.network/virtualhubs) (Route Server deployment)
-* [**Microsoft.Network/virtualHubs/ipConfigurations**](/azure/templates/microsoft.network/virtualhubs/ipConfigurations)
-* [**Microsoft.Network/virtualHubs/bgpConnections**](/azure/templates/microsoft.network/virtualhubs/bgpconnections) (Peer ASN and Peer IP configuration)
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks?pivots=deployment-language-arm-template)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualNetworks/subnets?pivots=deployment-language-arm-template) (two subnets, one named `routeserversubnet`)
+* [**Microsoft.Network/virtualHubs**](/azure/templates/microsoft.network/virtualhubs?pivots=deployment-language-arm-template) (Route Server deployment)
+* [**Microsoft.Network/virtualHubs/ipConfigurations**](/azure/templates/microsoft.network/virtualhubs/ipConfigurations?pivots=deployment-language-arm-template)
+* [**Microsoft.Network/virtualHubs/bgpConnections**](/azure/templates/microsoft.network/virtualhubs/bgpconnections?pivots=deployment-language-arm-template) (Peer ASN and Peer IP configuration)
-To find more templates that are related to ExpressRoute, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
+To find more templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
## Deploy the template
-1. Select **Try it** from the following code block to open Azure Cloud Shell, and then follow the instructions to sign in to Azure.
+1. Select **Open Cloudshell** from the following code block to open Azure Cloud Shell, and then follow the instructions to sign in to Azure.
```azurepowershell-interactive $projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
## Clean up resources
-When you no longer need the resources that you created with the Route Server, delete the resource group. This removes the Route Server and all the related resources.
+When you no longer need the resources that you created with the Route Server, delete the resource group to remove the Route Server and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
Remove-AzResourceGroup -Name <your resource group name>
In this quickstart, you created a:
-* Route Server
* Virtual Network * Subnet
+* Route Server
After you create the Azure Route Server, continue to learn about how Azure Route Server interacts with ExpressRoute and VPN Gateways:
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
Title: Azure VMs HA for SAP NW on Windows with Azure Files (SMB)| Microsoft Docs
-description: High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files (SMB) for SAP applications
+ Title: Install HA SAP NetWeaver with Azure Files SMB| Microsoft Docs
+description: Learn how to install high availability for SAP NetWeaver on Azure VMs on Windows with Azure Files (SMB) for SAP applications.
documentationcenter: saponazure
-# High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications
+# Install HA SAP NetWeaver with Azure Files SMB
-## Introduction
-Azure Files Premium SMB is now fully supported by Microsoft and SAP. **SWPM 1.0 SP32** and **SWPM 2.0 SP09** and higher support Azure Files Premium SMB storage. There are special requirements for sizing Azure Files Premium SMB shares. This documentation contains specific recommendations on how to distribute workload on Azure Files Premium SMB, how to adequately size Azure Files Premium SMB and the minimum installation requirements for Azure Files Premium SMB.
+Microsoft and SAP now fully support Azure Files premium Server Message Block (SMB) file shares. SAP Software Provisioning Manager (SWPM) 1.0 SP32 and SWPM 2.0 SP09 (and later) support Azure Files premium SMB storage.
-High Availability SAP solutions need a highly available File share for hosting **sapmnt**, **trans** and **interface directories**. Azure Files Premium SMB is a simple Azure PaaS solution for Shared File Systems for SAP on Windows environments. Azure Files Premium SMB can be used with Availability Sets and Availability Zones. Azure Files Premium SMB can also be used for Disaster Recovery scenarios to another region.
+There are special requirements for sizing Azure Files premium SMB shares. This article contains specific recommendations on how to distribute workloads, choose an adequate storage size, and meet minimum installation requirements for Azure Files premium SMB.
+
+High-availability (HA) SAP solutions need a highly available file share for hosting *sapmnt*, *transport*, and *interface* directories. Azure Files premium SMB is a simple Azure platform as a service (PaaS) solution for shared file systems for SAP on Windows environments. You can use Azure Files premium SMB with availability sets and availability zones. You can also use Azure Files premium SMB for disaster recovery (DR) scenarios to another region.
> [!NOTE]
-> Clustering SAP ASCS/SCS instances by using a file share is supported for SAP systems with SAP Kernel 7.22 (and later). For details see SAP note [2698948](https://launchpad.support.sap.com/#/notes/2698948)
-
-## Sizing & Distribution of Azure Files Premium SMB for SAP Systems
-
-The following points should be evaluated when planning the deployment of Azure Files Premium SMB:
-* The File share name **sapmnt** can be created once per storage account. It's possible to create additional SIDs as directories on the same **/sapmnt** share such as - **/sapmnt/\<SID1\>** and **/sapmnt/\<SID2\>**
-* Choose an appropriate size, IOPS and throughput. A suggested size for the share is 256 GB per SID. The maximum size for a Share is 5120 GB
-* Azure Files Premium SMB may not perform well for very large **sapmnt** shares with more than 1-2 million files per storage account.  Customers that have millions of batch jobs creating millions of job log files should regularly reorganize them as per [SAP Note 16083][16083] If needed, old job logs may be moved/archived to another Azure Files Premium SMB.  If **sapmnt** is expected to be very large, then other options (such as Azure ANF) should be considered.
-* It's recommended to use a Private Network Endpoint
-* Avoid putting too many SIDs to a single storage account and its file share.
-* As general guidance no more than between 2 to 4 nonprod SIDs can be put together.
-* Don't put the entire Development, QAS + Production landscape in one storage account and/or file share.ΓÇ» Failure of the share leads to downtime of the entire SAP landscape.
-* It's recommended to put the **sapmnt** and **transport directories** on the different storage account except for smaller systems. During the installation of the SAP PAS Instance, SAPInst will requests Transport Hostname. The FQDN of a different storage account should be entered <storage_account>.file.core.windows.net.
-* Don't put the file system used for Interfaces onto the same storage account as **/sapmnt/\<SID>**
-* The SAP users/groups must be added to the ΓÇÿsapmntΓÇÖ share and should have this permission set in the Azure portal: **Storage File Data SMB Share Elevated Contributor**.
-
-There are important reasons for splitting **Transport**, **Interface** and **sapmnt** among separate storage accounts. Distributing these components among separate storage accounts improves throughput, resiliency and simplifies the performance analysis. If many SIDs and other file systems are put within a single Azure Files Storage account and the storage account performance is poor due to hitting the throughput limits, it's very difficult to identify which SID or application is causing the problem.
-
-## Planning
+> Clustering SAP ASCS/SCS instances by using a file share is supported for SAP systems with SAP Kernel 7.22 (and later). For details, see SAP Note [2698948](https://launchpad.support.sap.com/#/notes/2698948).
+
+## Sizing and distribution of Azure Files premium SMB for SAP systems
+
+Evaluate the following points when you're planning the deployment of Azure Files premium SMB:
+
+* The file share name *sapmnt* can be created once per storage account. It's possible to create directories for additional SAP system identifiers (SIDs) on the same */sapmnt* share, such as */sapmnt/\<SID1\>* and */sapmnt/\<SID2\>*.
+* Choose an appropriate size, IOPS, and throughput. A suggested size for the share is 256 GB per SID. The maximum size for a share is 5,120 GB.
+* Azure Files premium SMB might not perform well for very large *sapmnt* shares with more than 1 million files per storage account.  Customers who have millions of batch jobs that create millions of job log files should regularly reorganize them, as described in SAP Note [16083][16083]. If needed, you can move or archive old job logs to another Azure Files premium SMB file share. If you expect *sapmnt* to be very large, consider other options (such as Azure NetApp Files).
+* We recommend that you use a private network endpoint.
+* Avoid putting too many SIDs in a single storage account and its file share.
+* As general guidance, don't put together more than four nonproduction SIDs.
+* Don't put the entire development, production, and quality assurance system (QAS) landscape in one storage account or file share. Failure of the share leads to downtime of the entire SAP landscape.
+* We recommend that you put the *sapmnt* and *transport* directories on different storage accounts, except in smaller systems. During the installation of the SAP primary application server, SAPinst will request the *transport* host name. Enter the FQDN of a different storage account as *<storage_account>.file.core.windows.net*.
+* Don't put the file system used for interfaces onto the same storage account as */sapmnt/\<SID>*.
+* You must add the SAP users and groups to the *sapmnt* share. Set the Storage File Data SMB Share Elevated Contributor permission for them in the Azure portal.
+
+Distributing *transport*, *interface*, and *sapmnt* among separate storage accounts improves throughput and resiliency. It also simplifies performance analysis. If you put many SIDs and other file systems in a single Azure Files storage account, and the storage account's performance is poor because you're hitting the throughput limits, it's difficult to identify which SID or application is causing the problem.
+
+## Planning
+ > [!IMPORTANT]
-> The installation of SAP High Availability Systems on Azure Files Premium SMB with Active Directory Integration requires cross team collaboration. It is highly recommended, that the Basis Team, the Active Directory Team and the Azure Team work together to achieve these tasks:
+> The installation of SAP HA systems on Azure Files premium SMB with Active Directory integration requires cross-team collaboration. We recommend that the following teams work together to achieve tasks:
>
-* Azure Team ΓÇô setup and configuration of Storage Account, Script Execution and AD Directory Synchronization.
-* Active Directory Team ΓÇô Creation of User Accounts and Groups.
-* Basis Team ΓÇô Run SWPM and set ACLs (if necessary).
+> * Azure team: Set up and configure storage accounts, script execution, and Active Directory synchronization.
+> * Active Directory team: Create user accounts and groups.
+> * Basis team: Run SWPM and set access control lists (ACLs), if necessary.
-Prerequisites for the installation of SAP NetWeaver High Availability Systems on Azure Files Premium SMB with Active Directory Integration.
+Here are prerequisites for the installation of SAP NetWeaver HA systems on Azure Files premium SMB with Active Directory integration:
-* The SAP servers must be joined to an Active Directory Domain.
-* The Active Directory Domain containing the SAP servers must be replicated to Azure Active Directory using Azure AD connect.
-* It is highly recommended, that there is at least one Active Directory Domain controller in the Azure landscape to avoid traversing the Express Route to contact Domain Controllers on-premises.
-* The Azure support team should review the Azure Files SMB with [Active Directory Integration](../../storage/files/storage-files-identity-auth-active-directory-enable.md#videos) documentation. *The video shows extra configuration options, which were modified (DNS) and skipped (DFS-N) for simplification reasons.* Nevertheless these are valid configuration options.
-* The user executing the Azure Files PowerShell script must have permission to create objects in Active Directory.
-* **SWPM version 1.0 SP32 and SWPM 2.0 SP09 or higher are required. SAPInst patch must be 749.0.91 or higher.**
-* An up-to-date release of PowerShell should be installed on the Windows Server where the script is executed.
+* Join the SAP servers to an Active Directory domain.
+* Replicate the Active Directory domain that contains the SAP servers to Azure Active Directory (Azure AD) by using Azure AD Connect.
+* Make sure that at least one Active Directory domain controller is in the Azure landscape, to avoid traversing Azure ExpressRoute to contact domain controllers on-premises.
+* Make sure that the Azure support team reviews the documentation for Azure Files SMB with [Active Directory integration](../../storage/files/storage-files-identity-auth-active-directory-enable.md#videos). The video shows extra configuration options, which were modified (DNS) and skipped (DFS-N) for simplification reasons. But these are valid configuration options.
+* Make sure that the user who's running the Azure Files PowerShell script has permission to create objects in Active Directory.
+* Use SWPM version 1.0 SP32 and SWPM 2.0 SP09 or later for the installation. The SAPinst patch must be 749.0.91 or later.
+* Install an up-to-date release of PowerShell on the Windows Server instance where the script is run.
## Installation sequence
- 1. The Active Directory administrator should create in advance 3 Domain users with **Local Administrator** rights and one global group in the **local Windows AD**: **SAPCONT_ADMIN@SAPCONTOSO.local** has Domain Admin rights and is used to run **SAPInst**, **\<sid>adm** and **SAPService\<SID>** as SAP system users and the **SAP_\<SAPSID>_GlobalAdmin** group. The SAP Installation Guide contains the specific details required for these accounts. **SAP user accounts should not be Domain Administrator**. It is generally recommended **not to use \<sid>adm to run SAPInst**.
- 2. The Active Directory administrator or Azure Administrator should check **Azure AD Connect** Synchronization Service Manager. By default it takes approximately 30 minutes to replicate to the **Azure Active Directory**.
- 3. The Azure administrator should complete the following tasks:
- 1. Create a Storage Account with either **Premium ZRS** or **LRS**. Customers with Zonal deployment should choose ZRS. Here the choice between setting up a **Standard** or **Premium Account** needs to be made:
- ![Azure portal Screenshot for create storage account - Step 1](media/virtual-machines-shared-sap-high-availability-guide/create-storage-account-1.png)Azure portal Screenshot for create storage account - Step 1
- > [!IMPORTANT]
- > For productive use the recommendation is using a **Premium Account**. For non-productive using a **Standard Account** will be sufficient.
- >
- ![Azure portal Screenshot for create storage account - Step 2](media/virtual-machines-shared-sap-high-availability-guide/create-storage-account-2.png)Azure portal Screenshot for create storage account - Step 2
-
- In this screen, the default settings should be ok.
-
- ![Azure portal Screenshot for create storage account - Step 3](media/virtual-machines-shared-sap-high-availability-guide/create-sa-4.png)Azure portal Screenshot for create storage account - Step 3
-
- In this step the decision to use a private endpoint is made.
- 1. **Select Private Network Endpoint** for the storage account.
- If necessary add a DNS A-Record into Windows DNS for the **<storage_account_name>.file.core.windows.net** (this may need to be in a new DNS Zone). Discuss this topic with the DNS administrator. The new zone should not update outside of an organization.
- ![pivate-endpoint-creation](media/virtual-machines-shared-sap-high-availability-guide/create-sa-3.png)Azure portal screenshot for the private endpoint definition.
- ![private-endpoint-dns-1](media/virtual-machines-shared-sap-high-availability-guide/pe-dns-1.png)DNS server screenshot for private endpoint DNS definition.
- 1. Create the **sapmnt** File share with an appropriate size. The suggested size is 256 GB, which delivers 650 IOPS, 75 MB/sec Egress and 50 MB/sec Ingress.
- ![create-storage-account-5](media/virtual-machines-shared-sap-high-availability-guide/create-sa-5.png)Azure portal screenshot for the SMB share definition.
-
- 1. Download the [Azure Files GitHub](../../storage/files/storage-files-identity-ad-ds-enable.md#download-azfileshybrid-module) content and execute the [script](../../storage/files/storage-files-identity-ad-ds-enable.md#run-join-azstorageaccount).
- This script creates either a Computer Account or Service Account in Active Directory. The user running the script must have the following properties:
- * The user running the script must have permission to create objects in the Active Directory Domain containing the SAP servers. Typically, a domain administrator account is used such as **SAPCONT_ADMIN@SAPCONTOSO.local**
- * Before executing the script confirm that this Active Directory Domain user account is synchronized with Azure Active Directory (Azure AD). An example of this would be to open the Azure portal and navigate to Azure AD users and check that the user **SAPCONT_ADMIN@SAPCONTOSO.local** exists and verify the Azure AD user account **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**.
- * Grant the **Contributor RBAC** role to this Azure Active Directory user account for the Resource Group containing the storage account holding the File Share. In this example, the user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com** is granted **Contributor Role** to the respective Resource Group
- * The script should be executed while logged on to a Windows server using an Active Directory Domain user account with the permission as specified above, in this example the account **SAPCONT_ADMIN@SAPCONTOSO.local** would be used.
- >[!IMPORTANT]
- > When executing the PowerShell script command **Connect-AzAccount**, it is highly recommended to enter the Azure Active Directory user account that corresponds and maps to the Active Directory Domain user account used to logon to a Windows Server, in this example this is the user account **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**
- >
- In this example scenario, the Active Directory Administrator would logon to the Windows Server as **SAPCONT_ADMIN@SAPCONTOSO.local** and when using the **PS command Connect-AzAccount** connect as user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**. Ideally the Active Directory Administrator and the Azure Administrator should work together on this task.
- ![Screenshot of the PowerShell script creating local AD account.](media/virtual-machines-shared-sap-high-availability-guide/ps-script-1.png)
-
- ![smb-configured-screenshot](media/virtual-machines-shared-sap-high-availability-guide/smb-config-1.png)Azure portal screenshot after successful PowerShell script execution.
-
- The following should appear as ΓÇ£ConfiguredΓÇ¥
- Storage -> Files Shares ΓÇ£Active Directory: ConfiguredΓÇ¥
- 1. Assign SAP users **\<sid>adm**, **SAPService\<SID>** and the **SAP_\<SAPSID>_GlobalAdmin** group to the Azure Files Premium SMB File Share with Role **Storage File Data SMB Share Elevated Contributor** in the Azure portal
- 1. Check the ACL on the **sapmnt file share** after the installation and add **DOMAIN\CLUSTER_NAME$** account, **DOMAIN\\\<sid>adm**, **DOMAIN\SAPService\<SID>** and the **Group SAP_\<SID>_GlobalAdmin**. These accounts and group **should have full control of sapmnt directory**.
-
- > [!IMPORTANT]
- > This step must be completed before the SAPInst installation or it will be difficult or impossible to change ACLs after SAPInst has created directories and files on the File Share
- >
-
- The following screenshots show how to add Computer machine accounts by selecting the Object Types -> Computers
- ![Windows Server screenshot of adding the cluster name to the local AD](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-2.png)Windows Server screenshot of adding the cluster name to the local AD.
-
- The DOMAIN\CLUSTER_NAME$ can be found by selecting ΓÇ£ComputersΓÇ¥ from the ΓÇ£Object TypesΓÇ¥
- ![Screenshot of adding AD computer account - Step 2](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-3.png)Screenshot of adding AD computer account - Step 2
- ![Screenshot of adding AD computer account - Step 3](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-4.png)Screenshot of adding AD computer account - Step 3
- ![Screenshot of computer account access properties](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-5.png)Screenshot of computer account access properties.
-
- 8. If necessary move the Computer Account created for Azure Files to an Active Directory Container that doesn't have account expiry. The name of the Computer Account will be the short name of the storage account
-
-
- > [!IMPORTANT]
- > In order to initialize the Windows ACL for the SMB share the share needs to be mounted once to a drive letter.
- >
- The storage key is the password and the user is **Azure\\\<SMB share name>** as shown here:
- ![one time net use mount](media/virtual-machines-shared-sap-high-availability-guide/one-time-net-use-mount-1.png)Windows screenshot of the net use one-time mount of the SMB share.
-
- 4. Basis administrator should complete the tasks below:
- 1. [Install the Windows Cluster on ASCS/ERS Nodes and add the Cloud witness](sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab)
- 2. The first Cluster Node installation asks for the Azure Files SMB storage account name. Enter the FQDN <storage_account_name>.file.core.windows.net. If SAPInst doesn't accept >13 characters, then the SWPM version is too old.
- 3. [Modify the SAP Profile of the ASCS/SCS Instance](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761)
- 4. [Update the Probe Port for the SAP \<SID> role in WSFC](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761)
- 5. Continue with SWPM Installation for the second ASCS/ERS Node. SWPM will only require path of profile directory. Enter the full UNC path to the profile directory.
- 6. Enter the UNC profile path for the DB and PAS/AAS Installation.
- 7. PAS Installation asks for Transport hostname. Provide the FQDN of a separate storage account name for transport directory.
- 8. Verify the ACLs on the SID and trans directory.
-
-## Disaster Recovery setup
-Disaster Recovery scenarios or Cross-Region Replication scenarios are supported with Azure Files Premium SMB. All data in Azure Files Premium SMB directories can be continuously synchronized to a DR region storage account using [Synchronize Files under Transfer data with AzCopy and file storage.](../../storage/common/storage-use-azcopy-files.md#synchronize-files) After a Disaster Recovery event and failover of the ASCS instance to the DR region, change the SAPGLOBALHOST profile parameter to the point to Azure Files SMB in the DR region. The same preparation steps should be performed on the DR storage account to join the storage account to Active Directory and assign RBAC roles for SAP users and groups.
+
+### Create users and groups
+
+The Active Directory administrator should create, in advance, three domain users with Local Administrator rights and one global group in the local Windows Server Active Directory instance.
+
+*SAPCONT_ADMIN@SAPCONTOSO.local*, which has Domain Administrator rights, is used to run *SAPinst*. *\<sid>adm* and *SAPService\<SID>* are the SAP system users, and *SAP_\<SAPSID>_GlobalAdmin* is the global group. The SAP Installation Guide contains the specific details required for these accounts.
+
+> [!NOTE]
+> SAP user accounts shouldn't be Domain Administrators. We generally recommend that you don't use *\<sid>adm* to run SAPinst.
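
For illustration, here's a minimal PowerShell sketch of creating these objects with the ActiveDirectory module. The SAP SID *AH1*, the organizational unit, and the password are placeholders, and the administrative user *SAPCONT_ADMIN* would be created the same way; follow the SAP Installation Guide for the exact account attributes, and grant Local Administrator rights on the SAP servers separately.

```powershell
# Hedged sketch: create the SAP system users and the global group in the local AD
# (names, OU path, and passwords are placeholders; adjust to your domain and SAP SID)
Import-Module ActiveDirectory

$ou         = "OU=SAP,DC=SAPCONTOSO,DC=local"
$initialPwd = Read-Host -AsSecureString -Prompt "Initial password"

New-ADUser -Name "ah1adm" -SamAccountName "ah1adm" -Path $ou -AccountPassword $initialPwd -Enabled $true
New-ADUser -Name "SAPServiceAH1" -SamAccountName "SAPServiceAH1" -Path $ou -AccountPassword $initialPwd -Enabled $true
New-ADGroup -Name "SAP_AH1_GlobalAdmin" -GroupScope Global -Path $ou
Add-ADGroupMember -Identity "SAP_AH1_GlobalAdmin" -Members "ah1adm","SAPServiceAH1"
```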
+
+### Check Synchronization Service Manager
+
+The Active Directory administrator or Azure administrator should check Synchronization Service Manager in Azure AD Connect. By default, replication to Azure AD takes about 30 minutes.
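
To spot-check a specific account, one option is the Az PowerShell module, assuming (as in this example) that the on-premises domain *SAPCONTOSO.local* maps to the *SAPCONTOSO.onmicrosoft.com* tenant:

```powershell
# Hedged sketch: confirm that the domain user now exists as an Azure AD user
Connect-AzAccount
Get-AzADUser -UserPrincipalName "SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com"
```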
+
+### Create a storage account, private endpoint, and file share
+
+The Azure administrator should complete the following tasks:
+
+1. On the **Basics** tab, create a storage account with either premium zone-redundant storage (ZRS) or locally redundant storage (LRS). Customers with zonal deployment should choose ZRS. On this tab, the administrator also chooses between setting up a **Standard** or a **Premium** account. (A scripted sketch of these storage steps appears after this procedure.)
+
+ ![Screenshot of the Azure portal that shows basic information for creating a storage account.](media/virtual-machines-shared-sap-high-availability-guide/create-storage-account-1.png)
+
+ > [!IMPORTANT]
+ > For production use, we recommend choosing a **Premium** account. For non-production use, a **Standard** account should be sufficient.
+
+1. On the **Advanced** tab, the default settings should be OK.
+
+ ![Screenshot of the Azure portal that shows advanced information for creating a storage account.](media/virtual-machines-shared-sap-high-availability-guide/create-storage-account-2.png)
+
+1. On the **Networking** tab, the administrator makes the decision to use a private endpoint.
+
+ ![Screenshot of the Azure portal that shows networking information for creating a storage account.](media/virtual-machines-shared-sap-high-availability-guide/create-sa-4.png)
+
+ 1. Select **Add private endpoint** for the storage account, and then enter the information for creating a private endpoint.
+
+ ![Screenshot of the Azure portal that shows options for private endpoint definition.](media/virtual-machines-shared-sap-high-availability-guide/create-sa-3.png)
+
+ 1. If necessary, add a DNS A record into Windows DNS for *<storage_account_name>.file.core.windows.net*. (This might need to be in a new DNS zone.) Discuss this topic with the DNS administrator. The new zone should not update outside an organization.
+
+ ![Screenshot of DNS Manager that shows private endpoint DNS definition.](media/virtual-machines-shared-sap-high-availability-guide/pe-dns-1.png)
+
+1. Create the *sapmnt* file share with an appropriate size. The suggested size is 256 GB, which delivers 650 IOPS, 75-MB/sec egress, and 50-MB/sec ingress.
+
+ ![Screenshot of the Azure portal that shows SMB share definition.](media/virtual-machines-shared-sap-high-availability-guide/create-sa-5.png)
+
+1. Download the [Azure Files GitHub](../../storage/files/storage-files-identity-ad-ds-enable.md#download-azfileshybrid-module) content and run the [script](../../storage/files/storage-files-identity-ad-ds-enable.md#run-join-azstorageaccount).
+
+ This script creates either a computer account or a service account in Active Directory. It has the following requirements:
+
+ * The user who's running the script must have permission to create objects in the Active Directory domain that contains the SAP servers. Typically, an organization uses a Domain Administrator account such as *SAPCONT_ADMIN@SAPCONTOSO.local*.
+ * Before the user runs the script, confirm that this Active Directory domain user account is synchronized with Azure AD. For example, open the Azure portal, go to the list of Azure AD users, and check that the domain user *SAPCONT_ADMIN@SAPCONTOSO.local* has a corresponding Azure AD user account (in this example, *SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com*).
+ * Grant the Contributor role-based access control (RBAC) role to this Azure AD user account for the resource group that contains the storage account that holds the file share. In this example, the user *SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com* is granted the Contributor role to the respective resource group.
+ * The user should run the script while logged on to a Windows Server instance by using an Active Directory domain user account with the permission as specified earlier.
+
+ In this example scenario, the Active Directory administrator would log on to the Windows Server instance as *SAPCONT_ADMIN@SAPCONTOSO.local*. When the administrator is using the PowerShell command `Connect-AzAccount`, the administrator connects as user *SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com*. Ideally, the Active Directory administrator and the Azure administrator should work together on this task.
+
+ ![Screenshot of the PowerShell script that creates a local Active Directory account.](media/virtual-machines-shared-sap-high-availability-guide/ps-script-1.png)
+
+ ![Screenshot of the Azure portal after successful PowerShell script execution.](media/virtual-machines-shared-sap-high-availability-guide/smb-config-1.png)
+
+ > [!IMPORTANT]
+ > When a user is running the PowerShell script command `Connect-AzAccount`, we highly recommend entering the Azure AD user account that corresponds and maps to the Active Directory domain user account that was used to log on to a Windows Server instance.
+
+ After the script runs successfully, go to **Storage** > **File Shares** and verify that **Active Directory: Configured** appears.
+
+1. Assign SAP users *\<sid>adm* and *SAPService\<SID>*, and the *SAP_\<SAPSID>_GlobalAdmin* group, to the Azure Files premium SMB file share. Select the role **Storage File Data SMB Share Elevated Contributor** in the Azure portal.
+1. Check the ACL on the *sapmnt* file share after the installation. Then add the *DOMAIN\CLUSTER_NAME$* account, *DOMAIN\\\<sid>adm* account, *DOMAIN\SAPService\<SID>* account, and *SAP_\<SID>_GlobalAdmin* group. These accounts and group should have full control of the *sapmnt* directory.
+
+ > [!IMPORTANT]
+ > Complete this step before the SAPinst installation. It will be difficult or impossible to change ACLs after SAPinst has created directories and files on the file share.
+
+ The following screenshots show how to add computer accounts.
+
+ ![Screenshot of Windows Server that shows adding the cluster name to the local Active Directory instance.](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-2.png)
+
+ You can find the *DOMAIN\CLUSTER_NAME$* account by selecting **Computers** under **Object types**.
+
+ ![Screenshot of selecting an object type for an Active Directory computer account.](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-3.png)
+
+ ![Screenshot of options for the computer object type.](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-4.png)
+
+ ![Screenshot of computer account access properties.](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-5.png)
+
+1. If necessary, move the computer account created for Azure Files to an Active Directory container that doesn't have account expiration. The name of the computer account is the short name of the storage account.
+
+ > [!IMPORTANT]
+ > To initialize the Windows ACL for the SMB share, mount the share once to a drive letter.
+
+ The storage key is the password, and the user is *Azure\\\<SMB share name>*.
+
+ ![Windows screenshot of the one-time mount of the SMB share.](media/virtual-machines-shared-sap-high-availability-guide/one-time-net-use-mount-1.png)
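
   For example, a one-time mount from an elevated prompt might look like the following sketch. The drive letter, storage account name, and key are placeholders, and the sketch assumes the SMB user name is *Azure\\* followed by the storage account name.

   ```powershell
   # Hedged sketch: one-time mount of the sapmnt share to initialize the Windows ACL
   $storageAccount = "sapsmbstorage"                 # placeholder storage account name
   $storageKey     = "<storage-account-access-key>"  # placeholder key from the storage account
   $share          = "\\$storageAccount.file.core.windows.net\sapmnt"
   $smbUser        = "Azure\$storageAccount"         # assumption: storage account name as SMB user

   net use S: $share /user:$smbUser $storageKey
   net use S: /delete   # remove the mapping again after the ACL has been initialized
   ```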
+
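The Azure-side steps in the preceding procedure can also be scripted. The following is a rough sketch that uses Azure PowerShell and the AzFilesHybrid module; the resource names, region, and scope are placeholders, and the private endpoint and DNS configuration shown earlier are omitted.

```powershell
# Hedged sketch of the storage steps above (placeholder names throughout)
Connect-AzAccount   # sign in as the Azure AD user that maps to the AD domain user

# Premium (FileStorage) account with zone-redundant storage, in a region that supports it
New-AzStorageAccount -ResourceGroupName "rg-sap-smb" -Name "sapsmbstorage" `
    -Location "westeurope" -SkuName "Premium_ZRS" -Kind "FileStorage"

# The sapmnt share with a 256-GiB quota
New-AzRmStorageShare -ResourceGroupName "rg-sap-smb" -StorageAccountName "sapsmbstorage" `
    -Name "sapmnt" -QuotaGiB 256

# Join the storage account to the on-premises AD domain (AzFilesHybrid module)
Import-Module AzFilesHybrid
Join-AzStorageAccount -ResourceGroupName "rg-sap-smb" -StorageAccountName "sapsmbstorage" `
    -DomainAccountType "ComputerAccount"

# Grant the SMB role to the SAP_<SAPSID>_GlobalAdmin group (object ID is a placeholder)
New-AzRoleAssignment -ObjectId "<group-object-id>" `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/rg-sap-smb/providers/Microsoft.Storage/storageAccounts/sapsmbstorage/fileServices/default/fileshares/sapmnt"
```
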
+### Complete SAP Basis tasks
+
+An SAP Basis administrator should complete these tasks:
+
+1. [Install the Windows cluster on ASCS/ERS nodes and add the cloud witness](sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab).
+2. The first cluster node installation asks for the Azure Files SMB storage account name. Enter the FQDN *<storage_account_name>.file.core.windows.net*. If SAPinst doesn't accept more than 13 characters, the SWPM version is too old.
+3. [Modify the SAP profile of the ASCS/SCS instance](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761).
+4. [Update the probe port for the SAP \<SID> role in Windows Server Failover Cluster (WSFC)](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761).
+5. Continue with SWPM installation for the second ASCS/ERS node. SWPM requires only the path of the profile directory. Enter the full UNC path to the profile directory.
+6. Enter the UNC profile path for the database and for the installation of the primary application server (PAS) and additional application server (AAS).
+7. The PAS installation asks for the *transport* host name. Provide the FQDN of a separate storage account name for the *transport* directory.
+8. Verify the ACLs on the SID and *transport* directory.
+
+## Disaster recovery setup
+
+Azure Files premium SMB supports disaster recovery scenarios and cross-region replication scenarios. All data in Azure Files premium SMB directories can be continuously synchronized to a DR region's storage account. For more information, see the procedure for synchronizing files in [Transfer data with AzCopy and file storage](../../storage/common/storage-use-azcopy-files.md#synchronize-files).
+
+After a DR event and failover of the ASCS instance to the DR region, change the `SAPGLOBALHOST` profile parameter to point to Azure Files SMB in the DR region. Perform the same preparation steps on the DR storage account to join the storage account to Active Directory and assign RBAC roles for SAP users and groups.
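
For example, a one-way synchronization from the primary share to the DR share might look like the following sketch. The endpoint URLs and SAS tokens are placeholders; see the linked AzCopy article for the supported options.

```powershell
# Hedged sketch: sync the sapmnt share to the DR region's storage account
# (both endpoints need a SAS token with appropriate permissions)
azcopy sync "https://sapsmbstorage.file.core.windows.net/sapmnt?<SAS>" `
            "https://sapsmbstoragedr.file.core.windows.net/sapmnt?<SAS>" --recursive
```
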
## Troubleshooting
-The PowerShell scripts downloaded in step 3.c contain a debug script to conduct some basic checks to validate the configuration.
+
+The PowerShell scripts that you downloaded earlier contain a debug script to conduct basic checks for validating the configuration.
+
```powershell
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose
```
-![Screenshot of PowerShell script to validate configuration.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-2.png)PowerShell screenshot of the debug script output.
-![Screenshot of PowerShell script to retrieve technical info.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-1.png)The following screen shows the technical information to validate a successful domain join.
-## Useful links & resources
+Here's a PowerShell screenshot of the debug script output.
+
+![Screenshot of the PowerShell script to validate configuration.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-2.png)
+
+The following screenshot shows the technical information to validate a successful domain join.
-* SAP Note [2273806][2273806] SAP support for storage or file system related solutions
-* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md)
+![Screenshot of the PowerShell script to retrieve technical info.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-1.png)
+
+## Useful links and resources
+
+* [SAP Note 2273806][2273806] (SAP support for solutions related to storage or file systems)
+* [Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure](./sap-high-availability-installation-wsfc-file-share.md)
* [Azure Virtual Machines high-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md)
-* [Add probe port in ASCS cluster configuration](sap-high-availability-installation-wsfc-file-share.md)
-* [Installation of an (A)SCS Instance on a Failover Cluster](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html)
+* [Add a probe port in an ASCS cluster configuration](sap-high-availability-installation-wsfc-file-share.md)
+* [Installation of an (A)SCS Instance on a Failover Cluster with no Shared Disks](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html) (SAP documentation)
[16083]:https://launchpad.support.sap.com/#/notes/16083
[2273806]:https://launchpad.support.sap.com/#/notes/2273806

## Optional configurations
-The following diagrams show multiple SAP instances on Azure VMs running Microsoft Windows Failover Cluster to reduce the total number of VMs.
+The following diagrams show multiple SAP instances on Azure VMs running Windows Server Failover Cluster to reduce the total number of VMs.
-This can either be local SAP Application Servers on an SAP ASCS/SCS cluster or an SAP ASCS/SCS Cluster Role on Microsoft SQL Server Always On nodes.
+This configuration can be either local SAP application servers on an SAP ASCS/SCS cluster or an SAP ASCS/SCS cluster role on Microsoft SQL Server Always On nodes.
> [!IMPORTANT]
-> Installing a local SAP Application Server on a SQL Server Always On node is not supported.
->
+> Installing a local SAP application server on a SQL Server Always On node is not supported.
+
+Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure (SPOFs). Using Azure Files SMB helps protect these SPOFs in a Windows environment.
-Both, SAP ASCS/SCS and the Microsoft SQL Server database, are single points of failure (SPOF). To protect these SPOFs in a Windows environment Azure Files SMB is used.
+Although the resource consumption of the SAP ASCS/SCS is fairly small, we recommend reducing the memory configuration for either SQL Server or the SAP application server by 2 GB.
-While the resource consumption of the SAP ASCS/SCS is fairly small, a reduction of the memory configuration for either SQL Server or the SAP Application Server by 2 GB is recommended.
+### <a name="5121771a-7618-4f36-ae14-ccf9ee5f2031"></a>SAP application servers on WSFC nodes using Azure Files SMB
-### <a name="5121771a-7618-4f36-ae14-ccf9ee5f2031"></a>SAP Application Servers on WSFC nodes using Azure Files SMB
+The following diagram shows SAP application servers locally installed.
-![Screenshot of HA setup with additional application servers.](media/virtual-machines-shared-sap-high-availability-guide/ha-azure-files-smb-as.png)SAP application Servers locally installed.
+![Diagram of a high-availability setup with additional application servers.](media/virtual-machines-shared-sap-high-availability-guide/ha-azure-files-smb-as.png)
> [!NOTE]
-> The picture shows the use of additional local disks. This is optional for customers who will not install application software on the OS drive (C:\)
->
+> The diagram shows the use of additional local disks. This setup is optional for customers who won't install application software on the OS drive (drive C).
### <a name="01541cf2-0a03-48e3-971e-e03575fa7b4f"></a> SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB
+The following diagram shows Azure Files SMB with a local SQL Server setup.
+
> [!IMPORTANT]
> Using Azure Files SMB for any SQL Server volume is not supported.
->
-![Diagram of SAP ASCS/SCS on SQL Server Always On nodes using Azure Screenshot of Azure Files SMB with local SQL Server setup.](media/virtual-machines-shared-sap-high-availability-guide/ha-sql-ascs-azure-files-smb.png)SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB
+![Diagram of SAP ASCS/SCS on SQL Server Always On nodes using Azure.](media/virtual-machines-shared-sap-high-availability-guide/ha-sql-ascs-azure-files-smb.png)
> [!NOTE]
-> The picture shows the use of additional local disks. This is optional for customers who will not install application software on the OS drive (C:\)
->
+> The diagram shows the use of additional local disks. This setup is optional for customers who won't install application software on the OS drive (drive C).
sap Lama Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/lama-installation.md
Title: SAP LaMa connector for Azure | Microsoft Docs
-description: Managing SAP Systems on Azure using SAP LaMa
+ Title: SAP LaMa connector for Azure
+description: Learn how to manage SAP systems on Azure by using SAP LaMa.
documentationcenter: ''
[planning-guide]:planning-guide.md
[hana-ops-guide]:hana-vm-operations.md
-> [!NOTE]
-> General Support Statement: Please always open an incident with SAP on component BC-VCM-LVM-HYPERV if you need support for SAP LaMa or the Azure connector.
-
-SAP LaMa is used by many customers to operate and monitor their SAP landscape. Since SAP LaMa 3.0 SP05, it ships with a connector to Azure by default. You can use this connector to deallocate and start virtual machines, copy and relocate managed disks, and delete managed disks. With these basic operations, you can relocate, copy, clone, and refresh SAP systems using SAP LaMa.
+Many customers use SAP Landscape Management (LaMa) to operate and monitor their SAP landscape. Since version 3.0 SP05, SAP LaMa includes a connector to Azure by default. You can use this connector to deallocate and start virtual machines (VMs), copy and relocate managed disks, and delete managed disks. With these basic operations, you can relocate, copy, clone, and refresh SAP systems by using SAP LaMa.
-This guide describes how you set up the Azure connector for SAP LaMa, create virtual machines that can be used to install adaptive SAP systems and how to configure them.
+This guide describes how to set up the SAP LaMa connector for Azure. It also describes how to create and configure virtual machines that you can use to install adaptive SAP systems.
> [!NOTE]
-> The connector is only available in the SAP LaMa Enterprise Edition
+> The connector is available only in SAP LaMa Enterprise Edition.
## Resources
The following SAP Notes are related to the topic of SAP LaMa on Azure:
| Note number | Title |
| --- | --- |
| [2343511] |Microsoft Azure connector for SAP Landscape Management (LaMa) |
-| [2350235] |SAP Landscape Management 3.0 - Enterprise edition |
+| [2350235] |SAP Landscape Management 3.0 - Enterprise Edition |
-Also read the [SAP Help Portal for SAP LaMa](https://help.sap.com/viewer/p/SAP_LANDSCAPE_MANAGEMENT_ENTERPRISE).
+You can find more information in the [SAP Help Portal for SAP LaMa](https://help.sap.com/viewer/p/SAP_LANDSCAPE_MANAGEMENT_ENTERPRISE).
-## General remarks
-
-* Make sure to enable *Automatic Mountpoint Creation* in Setup -> Settings -> Engine
- If SAP LaMa mounts volumes using the SAP Adaptive Extensions on a virtual machine, the mount point must exist if this setting is not enabled.
+> [!NOTE]
+> If you need support for SAP LaMa or the connector for Azure, open an incident with SAP on component BC-VCM-LVM-HYPERV.
-* Use a separate subnet and don't use dynamic IP addresses to prevent IP address "stealing" when deploying new VMs and SAP instances are unprepared
- - If you use dynamic IP address allocation in the subnet, which is also used by SAP LaMa, preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual machines.
+## General remarks
-* If you sign in to managed hosts, make sure to not block file systems from being unmounted
- - If you sign in to a Linux virtual machines and change the working directory to a directory in a mount point, for example /usr/sap/AH1/ASCS00/exe, the volume cannot be unmounted and a relocate or unprepare fails.
+* Be sure to enable **Automatic Mountpoint Creation** in **Setup** > **Settings** > **Engine**.
+
+ If SAP LaMa mounts volumes by using SAP Adaptive Extensions (SAPACEXT) on a virtual machine, the mount point must exist if this setting is not enabled.
-* Make sure to disable CLOUD_NETCONFIG_MANAGE on SUSE SLES Linux virtual machines. For more details, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633).
+* Use a separate subnet, and don't use dynamic IP addresses to prevent IP address "stealing" when you're deploying new VMs and SAP instances are unprepared.
+
+ If you use dynamic IP address allocation in the subnet that SAP LaMa also uses, preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual machines.
-## Set up Azure connector for SAP LaMa
+* If you sign in to managed hosts, don't block file systems from being unmounted.
+
+ If you sign in to a Linux virtual machine and change the working directory to a directory in a mount point (for example, */usr/sap/AH1/ASCS00/exe*), the volume can't be unmounted and a relocate or unprepare operation fails.
-The Azure connector is shipped as of SAP LaMa 3.0 SP05. We recommend always installing the latest support package and patch for SAP LaMa 3.0.
+* Be sure to disable `CLOUD_NETCONFIG_MANAGE` on SUSE SLES Linux virtual machines. For more information, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633).
-The Azure connector uses the Azure Resource Manager API to manage your Azure resources. SAP LaMa can use a Service Principal or a Managed Identity to authenticate against this API. If your SAP LaMa is running on an Azure VM, we recommend using a Managed Identity as described in chapter [Use a Managed Identity to get access to the Azure API](lama-installation.md#af65832e-6469-4d69-9db5-0ed09eac126d). If you want to use a Service Principal, follow the steps in chapter [Use a Service Principal to get access to the Azure API](lama-installation.md#913c222a-3754-487f-9c89-983c82da641e).
+## Set up the SAP LaMa connector for Azure
-### <a name="913c222a-3754-487f-9c89-983c82da641e"></a>Use a Service Principal to get access to the Azure API
+The connector for Azure is included in SAP LaMa as of version 3.0 SP05. We recommend always installing the latest support package and patch for SAP LaMa 3.0.
-The Azure connector can use a Service Principal to authorize against Microsoft Azure. Follow these steps to create a Service Principal for SAP Landscape Management (LaMa).
+The connector for Azure uses the Azure Resource Manager API to manage your Azure resources. SAP LaMa can use a service principal or a managed identity to authenticate against this API. If your SAP LaMa instance is running on an Azure VM, we recommend using a managed identity.
-1. Go to https://portal.azure.com
-1. Open the Azure Active Directory blade
-1. Click on App registrations
-1. Click on New registration
-1. Enter a name and click on Register
-1. Select the new App and click on Certificates & secrets in the Settings tab
-1. Create a new client secret, enter a description for a new key, select when the secret should expire and click on Save
-1. Write down the Value. It is used as the password for the Service Principal
-1. Write down the Application ID. It is used as the username of the Service Principal
+### <a name="913c222a-3754-487f-9c89-983c82da641e"></a>Use a service principal to get access to the Azure API
-By default the Service Principal doesn't have permissions to access your Azure resources.
-Assign the Contributor role to the Service Principal at resource group scope for all resource groups that contain SAP systems that should be managed by SAP LaMa.
+Follow these steps to create a service principal for the SAP LaMa connector for Azure:
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Go to the [Azure portal](https://portal.azure.com).
+1. Open the **Azure Active Directory** pane.
+1. Select **App registrations**.
+1. Select **New registration**.
+1. Enter a name, and then select **Register**.
+1. Select the new app, and then on the **Settings** tab, select **Certificates & secrets**.
+1. Create a new client secret, enter a description for a new key, select when the secret should expire, and then select **Save**.
+1. Write down the value. You'll use it as the password for the service principal.
+1. Write down the application ID. You'll use it as the username of the service principal.
-### <a name="af65832e-6469-4d69-9db5-0ed09eac126d"></a>Use a Managed Identity to get access to the Azure API
+By default, the service principal doesn't have permissions to access your Azure resources. Assign the Contributor role to the service principal at resource group scope for all resource groups that contain SAP systems that SAP LaMa should manage. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
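
Assuming the Az PowerShell module, the same outcome can be sketched roughly as follows. The display name and resource group are placeholders, and the client secret that SAP LaMa needs is still created as described in the preceding steps.

```powershell
# Hedged sketch: create a service principal and grant it Contributor on an SAP resource group
$sp = New-AzADServicePrincipal -DisplayName "sap-lama-connector"

# Repeat the role assignment for every resource group that contains SAP systems
# (a short delay may be needed before the new principal can be assigned a role)
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Contributor" -ResourceGroupName "rg-sap-ah1"
```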
-To be able to use a Managed Identity, your SAP LaMa instance has to run on an Azure VM that has a system or user assigned identity. For more information about Managed Identities, read [What is managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md) and [Configure managed identities for Azure resources on a VM using the Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
+### <a name="af65832e-6469-4d69-9db5-0ed09eac126d"></a>Use a managed identity to get access to the Azure API
-By default the Managed Identity doesn't have permissions to access your Azure resources.
-Assign the Contributor role to the Virtual Machine identity at resource group scope for all resource groups that contain SAP systems that should be managed by SAP LaMa.
+To be able to use a managed identity, your SAP LaMa instance has to run on an Azure VM that has a system-assigned or user-assigned identity. For more information about managed identities, read [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md) and [Configure managed identities for Azure resources on a VM using the Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+By default, the managed identity doesn't have permissions to access your Azure resources. Assign the Contributor role to the VM identity at resource group scope for all resource groups that contain SAP systems that SAP LaMa should manage. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
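
For a system-assigned identity, a minimal sketch of that role assignment (VM and resource group names are placeholders) looks like this:

```powershell
# Hedged sketch: grant the VM's system-assigned identity Contributor on an SAP resource group
$vm = Get-AzVM -ResourceGroupName "rg-sap-lama" -Name "lama-vm"
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId -RoleDefinitionName "Contributor" `
    -ResourceGroupName "rg-sap-ah1"
```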
-In your SAP LaMa Azure connector configuration, select 'Use Managed Identity' to enable the use of the Managed Identity. If you want to use a system assigned identity, make sure to leave the User Name field empty. If you want to use a user assigned identity, enter the user assigned identity ID into the User Name field.
+In your configuration of the SAP LaMa connector for Azure, select **Use Managed Identity** to enable the use of the managed identity. If you want to use a system-assigned identity, leave the **User Name** field empty. If you want to use a user-assigned identity, enter its ID in the **User Name** field.
### Create a new connector in SAP LaMa
-Open the SAP LaMa website and navigate to Infrastructure. Go to tab Cloud Managers and click on Add. Select the Microsoft Azure Cloud Adapter and click Next. Enter the following information:
+Open the SAP LaMa website and go to **Infrastructure**. On the **Cloud Managers** tab, select **Add**. Select **Microsoft Azure Cloud Adapter**, and then select **Next**. Enter the following information:
+
+* **Label**: Choose a name for the connector instance.
+* **User Name**: Enter the service principal application ID or the ID of the user-assigned identity of the virtual machine.
+* **Password**: Enter the service principal key/password. You can leave this field empty if you use a system-assigned or user-assigned identity.
+* **URL**: Keep the default `https://management.azure.com/`.
+* **Monitoring Interval (Seconds)**: Enter an interval of at least 300.
+* **Use Managed Identity**: Select to enable SAP LaMa to use a system-assigned or user-assigned identity to authenticate against the Azure API.
+* **Subscription ID**: Enter the Azure subscription ID.
+* **Azure Active Directory Tenant ID**: Enter the ID of the Active Directory tenant.
+* **Proxy host**: Enter the host name of the proxy if SAP LaMa needs a proxy to connect to the internet.
+* **Proxy port**: Enter the TCP port of the proxy.
+* **Change Storage Type to save costs**: Enable this setting if the Azure adapter should change the storage type of the managed disks to save costs when the disks are not in use.
-* Label: Choose a name for the connector instance
-* User Name: Service Principal Application ID or ID of the user assigned identity of the virtual machine. See [Using a System or User Assigned Identity] for more information
-* Password: Service Principal key/password. You can leave this field empty if you use a system or user assigned identity.
-* URL: Keep default `https://management.azure.com/`
-* Monitoring Interval (Seconds): Should be at least 300
-* Use Managed Identity: SAP LaMa can use a system or user assigned identity to authenticate against the Azure API. See chapter [Use a Managed Identity to get access to the Azure API](lama-installation.md#af65832e-6469-4d69-9db5-0ed09eac126d) in this guide.
-* Subscription ID: Azure subscription ID
-* Azure Active Directory Tenant ID: ID of the Active Directory tenant
-* Proxy host: Hostname of the proxy if SAP LaMa needs a proxy to connect to the internet
-* Proxy port: TCP port of the proxy
-* Change Storage Type to save costs: Enable this setting if the Azure Adapter should change the storage type of the Managed Disks to save costs when the disks are not in use. For data disks that are referenced in an SAP instance configuration, the adapter changes the disk type to Standard Storage during an instance unprepare and back to the original storage type during an instance prepare. If you stop a virtual machine in SAP LaMa, the adapter changes the storage type of all attached disks, including the OS disk to Standard Storage. If you start a virtual machine in SAP LaMa, the adapter changes the storage type back to the original storage type.
+ For data disks that are referenced in an SAP instance configuration, the adapter changes the disk type to Standard Storage during an instance unprepare operation and back to the original storage type during an instance prepare operation.
-Click on Test Configuration to validate your input. You should see
+ If you stop a virtual machine in SAP LaMa, the adapter changes the storage type of all attached disks, including the OS disk, to Standard Storage. If you start a virtual machine in SAP LaMa, the adapter changes the storage type back to the original storage type.
-Connection successful: Connection to Microsoft cloud was successful. 7 resource groups found (only 10 groups requested)
+Select **Test Configuration** to validate your input. You should see the following message at the bottom of the website:
-at the bottom of the website.
+"Connection successful: Connection to Microsoft cloud was successful. 7 resource groups found (only 10 groups requested)."
## Provision a new adaptive SAP system
-You can manually deploy a new virtual machine or use one of the Azure templates in the [quickstart repository](https://github.com/Azure/azure-quickstart-templates). It contains templates for [SAP NetWeaver ASCS](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-ascs), [SAP NetWeaver application servers](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-apps), and the [database](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-database). You can also use these templates to provision new hosts as part of a system copy/clone etc.
+You can manually deploy a new virtual machine or use one of the Azure templates in the [quickstart repository](https://github.com/Azure/azure-quickstart-templates). The repository contains templates for [SAP NetWeaver ASCS](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-ascs), [SAP NetWeaver application servers](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-apps), and the [database](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/sap/sap-lama-database). You can also use these templates to provision new hosts as part of a system copy, clone, or similar activity.
-We recommend using a separate subnet for all virtual machines that you want to manage with SAP LaMa and don't use dynamic IP addresses to prevent IP address "stealing" when deploying new virtual machines and SAP instances are unprepared.
+We recommend using a separate subnet for all virtual machines that you want to manage with SAP LaMa. We also recommend that you don't use dynamic IP addresses to prevent IP address "stealing" when you're deploying new virtual machines and SAP instances are unprepared.
> [!NOTE]
-> If possible, remove all virtual machine extensions as they might cause long runtimes for detaching disks from a virtual machine.
+> If possible, remove all virtual machine extensions. They might cause long runtimes for detaching disks from a virtual machine.
-Make sure that user \<hanasid>adm, \<sapsid>adm and group sapsys exist on the target machine with the same ID and gid or use LDAP. Enable and start the NFS server on the virtual machines that should be used to run the SAP NetWeaver (A)SCS.
+Make sure that the user *\<hanasid\>adm*, the user *\<sapsid\>adm*, and the group *sapsys* exist on the target machine with the same ID and group ID, or use LDAP. Enable and start the Network File System (NFS) server on the virtual machines that should be used to run SAP NetWeaver ABAP Central Services (ASCS) or SAP Central Services (SCS).
-### Manual Deployment
+### Manual deployment
-SAP LaMa communicates with the virtual machine using the SAP Host Agent. If you deploy the virtual machines manually or not using the Azure Resource Manager template from the quickstart repository, make sure to install the latest SAP Host Agent and the SAP Adaptive Extensions. For more information about the required patch levels for Azure, see SAP Note [2343511].
+SAP LaMa communicates with the virtual machine by using the SAP Host Agent. If you deploy the virtual machines manually or are not using the Azure Resource Manager template from the quickstart repository, be sure to install the latest SAP Host Agent and the SAP Adaptive Extensions. For more information about the required patch levels for Azure, see SAP Note [2343511].
-#### Manual deployment of a Linux Virtual Machine
+#### Manual deployment of a Linux virtual machine
-Create a new virtual machine with one of the supported operation systems listed in SAP Note [2343511]. Add more IP configurations for the SAP instances. Each instance needs at least on IP address and must be installed using a virtual hostname.
+Create a new virtual machine with one of the supported operating systems listed in SAP Note [2343511]. Add more IP configurations for the SAP instances. Each instance needs at least one IP address and must be installed using a virtual host name.
-The SAP NetWeaver ASCS instance needs disks for /sapmnt/\<SAPSID>, /usr/sap/\<SAPSID>, /usr/sap/trans, and /usr/sap/\<sapsid>adm. The SAP NetWeaver application servers do not need more disks. Everything related to the SAP instance must be stored on the ASCS and exported via NFS. Otherwise, it is currently not possible to add more application servers using SAP LaMa.
+The SAP NetWeaver ASCS instance needs disks for */sapmnt/\<SAPSID\>*, */usr/sap/\<SAPSID\>*, */usr/sap/trans*, and */usr/sap/\<sapsid\>adm*. The SAP NetWeaver application servers don't need more disks. Everything related to the SAP instance must be stored on ASCS and exported via NFS. Otherwise, you currently can't add more application servers by using SAP LaMa.
-![SAP NetWeaver ASCS on Linux](media/lama/sap-lama-ascs-app-linux.png)
+![Diagram that shows SAP NetWeaver ASCS on Linux.](media/lama/sap-lama-ascs-app-linux.png)
#### Manual deployment for SAP HANA
-Create a new virtual machine with one of the supported operation systems for SAP HANA as listed in SAP Note [2343511]. Add one extra IP configuration for SAP HANA and one per HANA tenant.
+Create a new virtual machine with one of the supported operating systems for SAP HANA, as listed in SAP Note [2343511]. Add one extra IP configuration for SAP HANA and one per HANA tenant.
-SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log
+SAP HANA needs disks for */hana/shared*, */hana/backup*, */hana/data*, and */hana/log*.
-![SAP HANA on Linux](media/lama/sap-lama-db-hana.png)
+![Diagram that shows SAP HANA on Linux.](media/lama/sap-lama-db-hana.png)
#### Manual deployment for Oracle Database on Linux
-Create a new virtual machine with one of the supported operation systems for Oracle databases as listed in SAP Note [2343511]. Add one extra IP configuration for the Oracle database.
+Create a new virtual machine with one of the supported operating systems for Oracle databases, as listed in SAP Note [2343511]. Add one extra IP configuration for the Oracle database.
-The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle
+The Oracle database needs disks for */oracle*, */home/oraod1*, and */home/oracle*.
![Diagram that shows an Oracle database on Linux and the disks it needs.](media/lama/sap-lama-db-ora-lnx.png)

#### Manual deployment for Microsoft SQL Server
-Create a new virtual machine with one of the supported operation systems for Microsoft SQL Server as listed in SAP Note [2343511]. Add one extra IP configuration for the SQL Server instance.
+Create a new virtual machine with one of the supported operating systems for Microsoft SQL Server, as listed in SAP Note [2343511]. Add one extra IP configuration for the SQL Server instance.
-The SQL Server database server needs disks for the database data and log files and disks for c:\usr\sap.
+The SQL Server database server needs disks for the database data and log files. It also needs disks for *c:\usr\sap*.
-![Oracle database on Linux](media/lama/sap-lama-db-sql.png)
+![Diagram that shows a Microsoft SQL Server database server and the disks it needs.](media/lama/sap-lama-db-sql.png)
-Make sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine that you want to use to relocate an SAP NetWeaver application server to or as a system copy/clone target.
+Be sure to install a supported Microsoft ODBC driver for SQL Server on a virtual machine that you want to use as a target for relocating an SAP NetWeaver application server or as a system copy/clone target. SAP LaMa can't relocate SQL Server itself, so a virtual machine that you want to use for these purposes needs SQL Server preinstalled.
-SAP LaMa cannot relocate SQL Server itself so a virtual machine that you want to use to relocate a database instance to or as a system copy/clone target needs SQL Server preinstalled.
+### Deploy a virtual machine by using an Azure template
-### Deploy Virtual Machine Using an Azure Template
+Download the following latest available archives from the [SAP Software Download Center](https://support.sap.com/swdc) for the operating system of the virtual machines:
-Download the following latest available archives from the [SAP Software Marketplace](https://support.sap.com/swdc) for the operating system of the virtual machines:
+* SAPCAR 7.21
+* SAP Host Agent 7.21
+* SAP Adaptive Extension 1.0 EXT
-1. SAPCAR 7.21
-1. SAP HOST AGENT 7.21
-1. SAP ADAPTIVE EXTENSION 1.0 EXT
+Also download the following components from the [Microsoft Download Center](https://www.microsoft.com/download):
-Also download the following components from the [Microsoft Download Center](https://www.microsoft.com/download)
+* Microsoft Visual C++ 2010 Redistributable Package (x64) (Windows only)
+* Microsoft ODBC Driver for SQL Server (SQL Server only)
-1. Microsoft Visual C++ 2010 Redistributable Package (x64) (Windows only)
-1. Microsoft ODBC Driver for SQL Server (SQL Server only)
-
-The components are required to deploy the template. The easiest way to make them available to the template is to upload them to an Azure storage account and create a Shared Access Signature (SAS).
+The components are required for template deployment. The easiest way to make them available to the template is to upload them to an Azure storage account and create a shared access signature (SAS).
The templates have the following parameters (an example deployment sketch follows this list):
-* sapSystemId: The SAP system ID. It is used to create the disk layout (for example /usr/sap/\<sapsid>).
+* `sapSystemId`: The SAP system ID (SID). It's used to create the disk layout (for example, */usr/sap/\<sapsid\>*).
-* computerName: The computer name of the new virtual machine. This parameter is also used by SAP LaMa. When you use this template to provision a new virtual machine as part of a system copy, SAP LaMa waits until the host with this computer name can be reached.
+* `computerName`: The computer name of the new virtual machine. SAP LaMa also uses this parameter. When you use this template to provision a new virtual machine as part of a system copy, SAP LaMa waits until the host with this computer name can be reached.
-* osType: The type of the operating system you want to deploy.
+* `osType`: The type of the operating system that you want to deploy.
-* dbtype: The type of the database. This parameter is used to determine how many extra IP configurations need to be added and how the disk layout should look like.
+* `dbtype`: The type of the database. This parameter is used to determine how many extra IP configurations need to be added and how the disk layout should look.
-* sapSystemSize: The size of the SAP System you want to deploy. It is used to determine the virtual machine instance type and size.
+* `sapSystemSize`: The size of the SAP system that you want to deploy. It's used to determine the type and size of the virtual machine instance.
-* adminUsername: Username for the virtual machine.
+* `adminUsername`: The username for the virtual machine.
-* adminPassword: Password for the virtual machine. You can also provide a public key for SSH.
+* `adminPassword`: The password for the virtual machine. You can also provide a public key for SSH.
-* sshKeyData: Public SSH key for the virtual machines. Only supported for Linux operating systems.
+* `sshKeyData`: The public SSH key for the virtual machine. It's supported only for Linux operating systems.
-* subnetId: The ID of the subnet you want to use.
+* `subnetId`: The ID of the subnet that you want to use.
-* deployEmptyTarget: You can deploy an empty target if you want to use the virtual machine as a target for an instance relocate or similar. In this case, no additional disks or IP configurations are attached.
+* `deployEmptyTarget`: An empty target that you can deploy if you want to use the virtual machine as a target for an instance relocation or something similar. In this case, no additional disks or IP configurations are attached.
-* sapcarLocation: The location for the sapcar application that matches the operating system you deploy. sapcar is used to extract the archives you provide in other parameters.
+* `sapcarLocation`: The location for the SAPCAR application that matches the operating system that you deploy. SAPCAR is used to extract the archives that you provide in other parameters.
-* sapHostAgentArchiveLocation: The location of the SAP Host Agent archive. SAP Host Agent is deployed as part of this template deployment.
+* `sapHostAgentArchiveLocation`: The location of the SAP Host Agent archive. The SAP Host Agent is deployed as part of this template deployment.
-* sapacExtLocation: The location of the SAP Adaptive Extensions. SAP Note [2343511] lists the minimum patch level required for Azure.
+* `sapacExtLocation`: The location of the SAP Adaptive Extensions. SAP Note [2343511] lists the minimum patch level required for Azure.
-* vcRedistLocation: The location of the VC Runtime that is required to install the SAP Adaptive Extensions. This parameter is only required for Windows.
+* `vcRedistLocation`: The location of the Visual C++ runtime that's required to install the SAP Adaptive Extensions. This parameter is required only for Windows.
-* odbcDriverLocation: The location of the ODBC driver you want to install. Only Microsoft ODBC driver for SQL Server is supported.
+* `odbcDriverLocation`: The location of the ODBC driver that you want to install. Only the Microsoft ODBC driver for SQL Server is supported.
-* sapadmPassword: The password for the sapadm user.
+* `sapadmPassword`: The password for the *sapadm* user.
-* sapadmId: The Linux User ID of the sapadm user. Not required for Windows.
+* `sapadmId`: The Linux user ID of the *sapadm* user. It's not required for Windows.
-* sapsysGid: The Linux group ID of the sapsys group. Not required for Windows.
+* `sapsysGid`: The Linux group ID of the *sapsys* group. It's not required for Windows.
-* _artifactsLocation: The base URI, where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription is used and this value is automatically generated. Only needed if you do not deploy the template from GitHub.
+* `_artifactsLocation`: The base URI, which contains artifacts that this template requires. When you deploy the template by using the accompanying scripts, a private location in the subscription is used and this value is automatically generated. You need this URI only if you don't deploy the template from GitHub.
-* _artifactsLocationSasToken: The sasToken required to access _artifactsLocation. When the template is deployed using the accompanying scripts, a sasToken is automatically generated. Only needed if you do not deploy the template from GitHub.
+* `_artifactsLocationSasToken`: The SAS token required to access `_artifactsLocation`. When you deploy the template by using the accompanying scripts, an SAS token is automatically generated. You need this token only if you don't deploy the template from GitHub.
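
For example, a deployment of the application server template straight from GitHub might look like the following sketch. The template URI, parameter values, and secrets are placeholders; check the template itself for the allowed values of parameters such as `osType`, `dbtype`, and `sapSystemSize`, and for any additional required parameters.

```powershell
# Hedged sketch: deploy the sap-lama-apps quickstart template with a parameter object
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/sap/sap-lama-apps/azuredeploy.json"

$parameters = @{
    sapSystemId   = "AH1"
    computerName  = "ah1-di-0"
    osType        = "<allowed OS value from the template>"
    dbtype        = "<allowed database value from the template>"
    sapSystemSize = "<allowed size value from the template>"
    adminUsername = "azureadmin"
    adminPassword = (Read-Host -AsSecureString -Prompt "VM admin password")
    subnetId      = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
    # Depending on the template version, archive locations such as sapcarLocation,
    # sapHostAgentArchiveLocation, and sapacExtLocation may also be required.
}

New-AzResourceGroupDeployment -ResourceGroupName "rg-sap-ah1" `
    -TemplateUri $templateUri -TemplateParameterObject $parameters
```
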
### SAP HANA
-In the following examples, we assume that you install SAP HANA with system ID HN1 and the SAP NetWeaver system with system ID AH1. The virtual hostnames are hn1-db for the HANA instance, ah1-db for the HANA tenant used by the SAP NetWeaver system, ah1-ascs for the SAP NetWeaver ASCS and ah1-di-0 for the first SAP NetWeaver application server.
+The following examples assume that you install the SAP HANA system with SID *HN1* and the SAP NetWeaver system with SID *AH1*. The virtual host names are:
+
+* *hn1-db* for the HANA instance
+* *ah1-db* for the HANA tenant that the SAP NetWeaver system uses
+* *ah1-ascs* for SAP NetWeaver ASCS
+* *ah1-di-0* for the first SAP NetWeaver application server
-#### Install SAP NetWeaver ASCS for SAP HANA using Azure Managed Disks
+#### Install SAP NetWeaver ASCS for SAP HANA by using Azure managed disks
-Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the ASCS. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of the virtual host name of ASCS. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
![Linux logo.][Logo_Linux] Linux
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount
```
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-ascs -n 255.255.255.128
```
-Run SWPM and use *ah1-ascs* for the *ASCS Instance Host Name*.
+Run SWPM. For **ASCS Instance Host Name**, use **ah1-ascs**.
![Linux logo.][Logo_Linux] Linux
-Add the following profile parameter to the SAP Host Agent profile, which is located at /usr/sap/hostctrl/exe/host_profile. For more information, see SAP Note [2628497].
-```
+
+Add the following profile parameter to the SAP Host Agent profile, which is located at */usr/sap/hostctrl/exe/host_profile*. For more information, see SAP Note [2628497].
+
+```bash
acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1
```
-#### Install SAP NetWeaver ASCS for SAP HANA on Azure NetAppFiles (ANF)
+#### Install SAP NetWeaver ASCS for SAP HANA on Azure NetApp Files
-ANF provides NFS for Azure. In the context of SAP LaMa this simplifies the creation of the ABAP Central Services (ASCS) instances and the subsequent installation of application servers. Previously the ASCS instance had to act as NFS server as well and the parameter acosprep/nfs_paths had to be added to the host_profile of the SAP Hostagent.
+Azure NetApp Files provides NFS for Azure. In the context of SAP LaMa, this simplifies the creation of the ASCS instances and the subsequent installation of application servers. Previously, the ASCS instance also had to act as an NFS server, and the parameter `acosprep/nfs_paths` had to be added to the host profile of the SAP Host Agent.
-#### Network Requirements
+#### Network requirements
-ANF requires a delegated subnet, which must be part of the same VNET as the SAP servers. Here's an example for such a configuration.
-This screen shows the creation of the VNET and the first subnet:
+Azure NetApp Files requires a delegated subnet, which must be part of the same virtual network as the SAP servers. Here's an example of such a configuration (a scripted sketch follows this procedure):
-![SAP LaMa create virtual network for Azure ANF ](media/lama/sap-lama-createvn-50.png)
+1. Create the virtual network and the first subnet.
-The next step creates the delegated subnet for Microsoft.NetApp/volumes.
+ ![Screenshot that shows selections for creating a virtual network for Azure NetApp Files.](media/lama/sap-lama-createvn-50.png)
-![SAP LaMa add delegated subnet ](media/lama/sap-lama-addsubnet-50.png)
+1. Create the delegated subnet for *Microsoft.NetApp/volumes*.
-![SAP LaMa list of subnets ](media/lama/sap-lama-subnets.png)
+ ![Screenshot that shows selections for adding a delegated subnet.](media/lama/sap-lama-addsubnet-50.png)
-Now a NetApp account needs to be created within the Azure portal:
+ ![Screenshot that shows a list of subnets.](media/lama/sap-lama-subnets.png)
-![SAP LaMa create NetApp account ](media/lama/sap-lama-create-netappaccount-50.png)
+1. Create a NetApp account in the Azure portal.
-![SAP LaMa NetApp account created ](media/lama/sap-lama-netappaccount.png)
+ ![Screenshot that shows selections for creating a NetApp account.](media/lama/sap-lama-create-netappaccount-50.png)
-Within the NetApp account, the capacity pool specifies the size and type of disks for each pool:
+ ![Screenshot that shows a created LaMa NetApp account.](media/lama/sap-lama-netappaccount.png)
-![SAP LaMa create NetApp capacity pool ](media/lama/sap-lama-capacitypool-50.png)
+ Within the NetApp account, the capacity pool specifies the size and type of disks for each pool.
-![SAP LaMa NetApp capacity pool created ](media/lama/sap-lama-capacitypool-list.png)
+ ![Screenshot that shows selections for creating a NetApp capacity pool.](media/lama/sap-lama-capacitypool-50.png)
-The NFS volumes can now be defined. Since there might be volumes for multiple systems in one pool, a self-explaining naming scheme should be chosen. Adding the SID helps to group related volumes together. For the ASCS and the AS instance, the following mounts are needed: */sapmnt/\<SID\>*, */usr/sap/\<SID\>*, and */home/\<sid\>adm*. Optionally, */usr/sap/trans* is needed for the central transport directory, which is at least used by all systems of one landscape.
+ ![Screenshot that shows a created NetApp capacity pool.](media/lama/sap-lama-capacitypool-list.png)
-![SAP LaMa create a volume 1 ](media/lama/sap-lama-createvolume-80.png)
+1. Define the NFS volumes.
-![SAP LaMa create a volume 2 ](media/lama/sap-lama-createvolume2-80.png)
+ Because one pool might contain volumes for multiple systems, choose a self-explaining naming scheme. Adding the SID helps to group related volumes together.
-![SAP LaMa create a volume 3 ](media/lama/sap-lama-createvolume3-80.png)
+ For the ASCS and AS instances, you need the following mounts: */sapmnt/\<SID\>*, */usr/sap/\<SID\>*, and */home/\<sid\>adm*. Optionally, you need */usr/sap/trans* for the central transport directory, which is at least used by all systems of one landscape.
-These steps need to be repeated for the other volumes as well.
+ ![Screenshot that shows basic details for creating a volume.](media/lama/sap-lama-createvolume-80.png)
-![SAP LaMa list of created volumes ](media/lama/sap-lama-volumes.png)
+ ![Screenshot that shows protocol details for creating a volume.](media/lama/sap-lama-createvolume2-80.png)
-Now these volumes need to be mounted to the systems where the initial installation with the SAP SWPM is performed.
+ ![Screenshot that shows the tab for reviewing details before creating a volume.](media/lama/sap-lama-createvolume3-80.png)
-First the mount points need to be created. In this case, the SID is AN1 so the following commands need to be executed:
+1. Repeat the preceding steps for the other volumes.
-```bash
-mkdir -p /home/an1adm
-mkdir -p /sapmnt/AN1
-mkdir -p /usr/sap/AN1
-mkdir -p /usr/sap/trans
-```
-Next the ANF volumes are mounted with the following commands:
+ ![Screenshot that shows a list of created volumes.](media/lama/sap-lama-volumes.png)
-```bash
-# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-home-sidadm /home/an1adm
-# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-sapmnt-sid /sapmnt/AN1
-# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-usr-sap-sid /usr/sap/AN1
-# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/global-usr-sap-trans /usr/sap/trans
-```
-The mount commands can also be looked up from the portal. The local mount points need to be adjusted.
+1. Mount the volumes to the systems where the initial installation with SAP SWPM is performed:
-Use the df -h command to verify.
+ 1. Create the mount points. In this case, the SID is *AN1*, so you run the following commands:
-![SAP LaMa mount points OS level ](media/lama/sap-lama-mounts.png)
+ ```bash
+ mkdir -p /home/an1adm
+ mkdir -p /sapmnt/AN1
+ mkdir -p /usr/sap/AN1
+ mkdir -p /usr/sap/trans
+ ```
-Now the installation with SWPM must be performed.
+ 1. Mount the Azure NetApp Files volumes by using the following commands:
-The same steps must be performed for at least one AS instance.
+ ```bash
+ # sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-home-sidadm /home/an1adm
+ # sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-sapmnt-sid /sapmnt/AN1
+ # sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-usr-sap-sid /usr/sap/AN1
+ # sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/global-usr-sap-trans /usr/sap/trans
+ ```
-After the successful installation, the system must be discovered within SAP LaMa.
+ You can also look up the mount commands in the Azure portal. Adjust the local mount points accordingly.
-The mount points should look like this for the ASCS and the AS instance:
+ 1. Run the `df -h` command. Check the output to verify that you mounted the volumes correctly. (A scripted version of this check appears after this procedure.)
-![SAP LaMa mount points in LaMa ](media/lama/sap-lama-ascs.png)
-(This is an example. The IP addresses and export path are different from the ones used before)
+ ![Screenshot of OS-level mount points in output.](media/lama/sap-lama-mounts.png)
+1. Perform the installation with SWPM. The same steps must be performed for at least one AS instance.
+
+ After the successful installation, the system must be discovered within SAP LaMa. The mount points should look like the following screenshot for the ASCS and AS instances.
+
+ ![Screenshot that shows SAP LaMa mount points.](media/lama/sap-lama-ascs.png)
+
+ > [!NOTE]
+ > This is an example. The IP addresses and export path are different from the ones that you used before.
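As an optional follow-up to the `df -h` check in the mounting procedure above, the following minimal sketch filters the output for the example volumes. The paths are the *AN1* example values used earlier; adjust them to your SID.

```bash
# Optional check (illustrative, based on the AN1 example above):
# confirm that all four Azure NetApp Files volumes are mounted
df -h | grep -E '/home/an1adm|/sapmnt/AN1|/usr/sap/AN1|/usr/sap/trans'
```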
#### Install SAP HANA
-If you install SAP HANA using the commandline tool hdblcm, use parameter --hostname to provide a virtual hostname. You need to add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+If you install SAP HANA by using the SAP HANA database lifecycle manager (HDBLCM) command-line tool, use the `--hostname` parameter to provide a virtual host name.
+
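For example, a call might look like the following sketch. Only the `--hostname` parameter comes from this article; the SID is an example value, and HDBLCM prompts interactively for any values that you don't pass.

```bash
# Illustrative only: run hdblcm from the extracted installation medium
# (the SID is an example; --hostname carries the virtual host name)
./hdblcm --sid=AH1 --hostname=ah1-db
```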
+Add the IP address of the virtual host name of the database to a network interface. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
-Add another virtual hostname and IP address for the name that is used by the application servers to connect to the HANA tenant.
+Add another virtual host name and IP address for the name that the application servers use to connect to the HANA tenant:
```bash # /usr/sap/hostctrl/exe/sapacext -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask>
Add another virtual hostname and IP address for the name that is used by the app
/usr/sap/hostctrl/exe/sapacext -a ifup -i eth0 -h ah1-db -n 255.255.255.128 ```
-Run the database instance installation of SWPM on the application server virtual machine, not on the HANA virtual machine. Use *ah1-db* for the *Database Host* in dialog *Database for SAP System*.
+Run the database instance installation of SWPM on the application server VM, not on the HANA VM. In the **Database for SAP System** dialog, for **Database Host**, use **ah1-db**.
#### Install SAP NetWeaver Application Server for SAP HANA
-Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+Before you start SWPM, you need to mount the IP address of the virtual host name of the application server. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
![Linux logo.][Logo_Linux] Linux
Before you start the SAP Software Provisioning Manager (SWPM), you need to mount
C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h ah1-di-0 -n 255.255.255.128 ```
-It is recommended to use SAP NetWeaver profile parameter dbs/hdb/hdb_use_ident to set the identity that is used to find the key in the HDB userstore. You can add this parameter manually after the database instance installation with SWPM or run SWPM with
+We recommend that you use the SAP NetWeaver profile parameter `dbs/hdb/hdb_use_ident` to set the identity that's used to find the key in the SAP HANA user store (*hdbuserstore*). You can add this parameter manually after the database instance installation with SWPM or run SWPM with the following code:
```bash # from https://blogs.sap.com/2015/04/14/sap-hana-client-software-different-ways-to-set-the-connectivity-data/ /sapdb/DVDs/IM_LINUX_X86_64/sapinst HDB_USE_IDENT=SYSTEM_COO ```
-If you set it manually, you also need to create new HDB userstore entries.
+If you set it manually, you also need to create new *hdbuserstore* entries:
```bash # run as <sapsid>adm
If you set it manually, you also need to create new HDB userstore entries.
/usr/sap/AH1/hdbclient/hdbuserstore SET DEFAULT ah1-db:35041@AH1 SAPABAP1 <password> ```
-Use *ah1-di-0* for the *PAS Instance Host Name* in dialog *Primary Application Server Instance*.
+In the **Primary Application Server Instance** dialog, for **PAS Instance Host Name**, use **ah1-di-0**.
-#### Post-Installation Steps for SAP HANA
+#### Post-installation steps for SAP HANA
-Make sure to back up the SYSTEMDB and all tenant databases before you try to do a tenant copy, tenant move or create a system replication.
+Back up SYSTEMDB and all tenant databases before you try to copy a tenant, move a tenant, or create a system replication.
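For example, a file-based backup before such an operation might look like the following sketch. The instance number, credentials, tenant name (*AH1*), and backup prefixes are example values.

```bash
# Illustrative only: run as <sid>adm; adjust instance number, credentials, and prefixes
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <password> "BACKUP DATA USING FILE ('SYSTEMDB_before_copy')"
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <password> "BACKUP DATA FOR AH1 USING FILE ('AH1_before_copy')"
```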
### Microsoft SQL Server
-In the following examples, we assume that you install the SAP NetWeaver system with system ID AS1. The virtual hostnames are as1-db for the SQL Server instance used by the SAP NetWeaver system, as1-ascs for the SAP NetWeaver ASCS and as1-di-0 for the first SAP NetWeaver application server.
+The following examples assume that you install the SAP NetWeaver system with SID *AS1*. The virtual host names are:
+
+* *as1-db* for the SQL Server instance that the SAP NetWeaver system uses
+* *as1-ascs* for SAP NetWeaver ASCS
+* *as1-di-0* for the first SAP NetWeaver application server
#### Install SAP NetWeaver ASCS for SQL Server
-Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the ASCS. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+Before you start SWPM, you need to mount the IP address of the virtual host name of ASCS. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
```bash # C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask> C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-ascs -n 255.255.255.128 ```
-Run SWPM and use *as1-ascs* for the *ASCS Instance Host Name*.
+Run SWPM. For **ASCS Instance Host Name**, use **as1-ascs**.
#### Install SQL Server
-You need to add the IP address of the virtual hostname of the database to a network interface. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+Before you start SWPM, you need to add the IP address of the virtual host name of the database to a network interface. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
```bash # C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask> C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-db -n 255.255.255.128 ```
-Run the database instance installation of SWPM on the SQL server virtual machine. Use SAPINST_USE_HOSTNAME=*as1-db* to override the hostname used to connect to SQL Server. If you deployed the virtual machine using the Azure Resource Manager template, make sure to set the directory used for the database data files to *C:\sql\data* and database log file to *C:\sql\log*.
+Run the database instance installation of SWPM on the SQL Server virtual machine. Use `SAPINST_USE_HOSTNAME=as1-db` to override the host name that's used to connect to SQL Server. If you deployed the virtual machine by using the Azure Resource Manager template, set the directory that's used for the database data files to *C:\sql\data*, and set the database log file to *C:\sql\log*.
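For example, you might start SWPM with the override on the command line, as in the following sketch. The SWPM path is an example; only the `SAPINST_USE_HOSTNAME` parameter comes from this article.

```bash
# Illustrative only: start SWPM with the virtual host name override (path is an example)
D:\SWPM\sapinst.exe SAPINST_USE_HOSTNAME=as1-db
```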
-Make sure that the user *NT AUTHORITY\SYSTEM* has access to the SQL Server and has the server role *sysadmin*. For more information, see SAP Note [1877727] and [2562184].
+Make sure that the user *NT AUTHORITY\SYSTEM* has access to the SQL Server instance and has the server role *sysadmin*. For more information, see SAP Notes [1877727] and [2562184].
-#### Install SAP NetWeaver Application Server
+#### Install the SAP NetWeaver application server
-Before you start the SAP Software Provisioning Manager (SWPM), you need to mount the IP address of virtual hostname of the application server. The recommended way is to use sapacext. If you mount the IP address using sapacext, make sure to remount the IP address after a reboot.
+Before you start SWPM, you need to mount the IP address of the virtual host name of the application server. The recommended way is to use SAPACEXT. If you mount the IP address by using SAPACEXT, be sure to remount the IP address after a reboot.
```bash # C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i <network interface> -h <virtual hostname or IP address> -n <subnet mask> C:\Program Files\SAP\hostctrl\exe\sapacext.exe -a ifup -i "Ethernet 3" -h as1-di-0 -n 255.255.255.128 ```
-Use *as1-di-0* for the *PAS Instance Host Name* in dialog *Primary Application Server Instance*.
+In the **Primary Application Server Instance** dialog, for **PAS Instance Host Name**, use **as1-di-0**.
## Troubleshooting
-### Errors and Warnings during Discover
-
-* The SELECT permission was denied
- * [Microsoft][ODBC SQL Server Driver][SQL Server]The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'. [SOAPFaultException]
- The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'.
- * Solution
- Make sure that *NT AUTHORITY\SYSTEM* can access the SQL Server. See SAP Note [2562184]
--
-### Errors and Warnings for Instance Validation
-
-* An exception was raised in validation of the HDB userstore
- * see Log Viewer
- com.sap.nw.lm.aci.monitor.api.validation.RuntimeValidationException: Exception in validator with ID 'RuntimeHDBConnectionValidator' (Validation: 'VALIDATION_HDB_USERSTORE'): Could not retrieve the hdbuserstore
- HANA userstore is not in the correct location
- * Solution
- Make sure that /usr/sap/AH1/hdbclient/install/installation.ini is correct
-
-### Errors and Warnings during a System Copy
-
-* An error occurred when validating the system provisioning step
- * Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException
- Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
- * Solution
- Take backup of all databases in source HANA system
-
-* System Copy Step *Start* of database instance
- * Host Agent Operation '000D3A282BC91EE8A1D76CF1F92E2944' failed (OperationException. FaultCode: '127', Message: 'Command execution failed. : [Microsoft][ODBC SQL Server Driver][SQL Server]User does not have permission to alter database 'AS2', the database does not exist, or the database is not in a state that allows access checks.')
- * Solution
- Make sure that *NT AUTHORITY\SYSTEM* can access the SQL Server. See SAP Note [2562184]
-
-### Errors and Warnings during a System Clone
-
-* Error occurred when trying to register instance agent in step *Forced Register and Start Instance Agent* of application server or ASCS
- * Error occurred when trying to register instance agent. (RemoteException: 'Failed to load instance data from profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': Cannot access profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': No such file or directory.')
- * Solution
- Make sure that the sapmnt share on the ASCS/SCS has Full Access for SAP_AS1_GlobalAdmin
-
-* Error in step *Enable Startup Protection for Clone*
- * Failed to open file '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0' Cause: No such file or directory
- * Solution
- The computer account of the application server needs write access to the profile
-
-### Errors and Warnings during Create System Replication
-
-* Exception when clicking on Create System Replication
- * Caused by: com.sap.nw.lm.aci.engine.base.api.util.exception.HAOperationException
- Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r
- * Solution
- Test if sapacext can be executed as `<hanasid`>adm
-
-* Error when full copy is not enabled in Storage Step
- * An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and field targetStorageSystemId
- * Solution
- Ignore Warnings in step and try again. This issue is fixed in a new support package/patch of SAP LaMa.
-
-### Errors and Warnings during Relocate
-
-* Path '/usr/sap/AH1' is not allowed for nfs reexports.
- * Check SAP Note [2628497] for details.
- * Solution
- Add ASCS exports to ASCS HostAgent Profile. See SAP Note [2628497]
-
-* Function not implemented when relocating ASCS
- * Command Output: exportfs: host:/usr/sap/AX1: Function not implemented
- * Solution
- Make sure that the NFS server service is enabled on the relocate target virtual machine
-
-### Errors and Warnings during Application Server Installation
-
-* Error executing SAPinst step: getProfileDir
- * ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir' reported an error: Node \\\as1-ascs\sapmnt\AS1\SYS\profile does not exist. Start SAPinst in interactive mode to solve this problem)
- * Solution
- Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application Server Installation wizard
-
-* Error executing SAPinst step: askUnicode
- * ERROR: (Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_getUnicode|ind|ind|ind|ind|unicode|0|askUnicode' reported an error: Start SAPinst in interactive mode to solve this problem)
- * Solution
- If you use a recent SAP kernel, SWPM cannot determine whether the system is a unicode system anymore using the message server of the ASCS. See SAP Note [2445033] for more details.
- This issue will be fixed in a new support package/patch of SAP LaMa.
- Set profile parameter OS_UNICODE=uc in the default profile of your SAP system to work around this issue.
-
-* Error executing SAPinst step: dCheckGivenServer
- * Error executing SAPinst step: dCheckGivenServer" version="1.0" ERROR: (Last error reported by the step: \<p> Installation was canceled by user. \</p>
- * Solution
- Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application Server Installation wizard
-
-* Error executing SAPinst step: checkClient
- * Error executing SAPinst step: checkClient" version="1.0" ERROR: (Last error reported by the step: \<p> Installation was canceled by user. \</p>)
- * Solution
- Make sure that the Microsoft ODBC driver for SQL Server is installed on the virtual machine on which you want to install the application server
-
-* Error executing SAPinst step: copyScripts
- * Last error reported by the step: System call failed. DETAILS: Error 13 (0x0000000d) (Permission denied) in execution of system call 'fopenU' with parameter (\\\as1-ascs/sapmnt/AS1/SYS/exe/uc/NTAMD64/strdbs.cmd, w), line (494) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp), stack trace:
- CThrThread.cpp: 85: CThrThread::threadFunction()
- CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
- CSiStepExecute.cpp: 913: CSiStepExecute::execute()
- EJSController.cpp: 179: EJSControllerImpl::executeScript()
- JSExtension.hpp: 1136: CallFunctionBase::call()
- iaxxcfile.cpp: 183: iastring CIaOsFileConnect::callMemberFunction(iastring const& name, args_t const& args)
- iaxxcfile.cpp: 1849: iastring CIaOsFileConnect::newFileStream(args_t const& _args)
- iaxxbfile.cpp: 773: CIaOsFile::newFileStream_impl(4)
- syxxcfile.cpp: 233: CSyFileImpl::openStream(ISyFile::eFileOpenMode)
- syxxcfstrm.cpp: 29: CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFileOpenMode)
- syxxcfstrm.cpp: 265: CSyFileStreamImpl::open()
- syxxcfstrm2.cpp: 58: CSyFileStream2Impl::CSyFileStream2Impl(const CSyPath & \\\aw1-ascs/sapmnt/AW1/SYS/exe/uc/NTAMD64/strdbs.cmd, 0x4)
- syxxcfstrm2.cpp: 456: CSyFileStream2Impl::open()
- * Solution
- Make sure that SWPM is running with a user that has access to the profile. This user can be configured in the Application Server Installation wizard
-
-* Error executing SAPinst step: askPasswords
- * Last error reported by the step: System call failed. DETAILS: Error 5 (0x00000005) (Access is denied.) in execution of system call 'NetValidatePasswordPolicy' with parameter (...), line (359) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/account/synxcaccmg.cpp), stack trace:
- CThrThread.cpp: 85: CThrThread::threadFunction()
- CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
- CSiStepExecute.cpp: 913: CSiStepExecute::execute()
- EJSController.cpp: 179: EJSControllerImpl::executeScript()
- JSExtension.hpp: 1136: CallFunctionBase::call()
- CSiStepExecute.cpp: 764: CSiStepExecute::invokeDialog()
- DarkModeGuiEngine.cpp: 56: DarkModeGuiEngine::showDialogCalledByJs()
- DarkModeDialog.cpp: 85: DarkModeDialog::submit()
- EJSController.cpp: 179: EJSControllerImpl::executeScript()
- JSExtension.hpp: 1136: CallFunctionBase::call()
- iaxxcaccount.cpp: 107: iastring CIaOsAccountConnect::callMemberFunction(iastring const& name, args_t const& args)
- iaxxcaccount.cpp: 1186: iastring CIaOsAccountConnect::validatePasswordPolicy(args_t const& _args)
- iaxxbaccount.cpp: 430: CIaOsAccount::validatePasswordPolicy_impl()
- synxcaccmg.cpp: 297: ISyAccountMgt::PasswordValidationMessage CSyAccountMgtImpl::validatePasswordPolicy(saponazure,*****) const )
- * Solution
- Make sure to add a Host rule in step *Isolation* to allow communication from the VM to the domain controller
+### Errors and warnings during discovery
+
+* The *SELECT* permission was denied.
+ * **Error**:
+
+ `[Microsoft][ODBC SQL Server Driver][SQL Server]The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'. [SOAPFaultException]`
+ `The SELECT permission was denied on the object 'log_shipping_primary_databases', database 'msdb', schema 'dbo'.`
+ * **Solution**: Make sure that *NT AUTHORITY\SYSTEM* can access the SQL Server instance. See SAP Note [2562184].
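As a quick check (illustrative; assumes `sqlcmd` is available on the SQL Server VM and uses the *as1-db* example host name), a result of `1` confirms the role assignment:

```bash
# Illustrative check: 1 means NT AUTHORITY\SYSTEM holds the sysadmin role on the instance
sqlcmd -S as1-db -E -Q "SELECT IS_SRVROLEMEMBER('sysadmin', 'NT AUTHORITY\SYSTEM')"
```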
+
+### Errors and warnings during instance validation
+
+* An exception was raised in the validation of *hdbuserstore*. See Log Viewer.
+ * **Caused by**: `com.sap.nw.lm.aci.monitor.api.validation`
+
+ * **Error**:
+
+ `RuntimeValidationException`
+
+ `Exception in validator with ID 'RuntimeHDBConnectionValidator' (Validation: 'VALIDATION_HDB_USERSTORE'): Could not retrieve the hdbuserstore`
+ `HANA userstore is not in the correct location`
+ * **Solution**: Make sure that */usr/sap/AH1/hdbclient/install/installation.ini* is correct.
+
+### Errors and warnings during a system copy
+
+* An error occurred in validating the system provisioning step.
+ * **Caused by**: `com.sap.nw.lm.aci.engine.base.api.util.exception`
+ * **Error**:
+
+ `HAOperationException`
+
+ `Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r`
+ * **Solution**: Back up all databases in the source HANA system.
+
+* An error occurred in the system copy **Start** step of the database instance.
+ * **Error**:
+
+ `Host Agent Operation '000D3A282BC91EE8A1D76CF1F92E2944' failed (OperationException. FaultCode: '127', Message: 'Command execution failed. : [Microsoft][ODBC SQL Server Driver][SQL Server]User does not have permission to alter database 'AS2', the database does not exist, or the database is not in a state that allows access checks.')`
+ * **Solution**: Make sure that *NT AUTHORITY\SYSTEM* can access the SQL Server instance. See SAP Note [2562184].
+
+### Errors and warnings during a system clone
+
+* An error occurred in trying to register an instance agent in the **Forced Register and Start Instance Agent** step of the application server or ASCS.
+ * **Error**:
+
+ `Error occurred when trying to register instance agent. (RemoteException: 'Failed to load instance data from profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': Cannot access profile '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0': No such file or directory.')`
+ * **Solution**: Make sure that the *sapmnt* share on ASCS/SCS has full access for *SAP_AS1_GlobalAdmin*.
+
+* An error occurred in the **Enable Startup Protection for Clone** step.
+ * **Error**:
+
+ `Failed to open file '\\as1-ascs\sapmnt\AS1\SYS\profile\AS1_D00_as1-di-0' Cause: No such file or directory`
+ * **Solution**: The computer account of the application server needs write access to the profile.
+
+### Errors and warnings during creation of system replication
+
+* An exception was raised in selecting **Create System Replication**.
+ * **Caused by**: `com.sap.nw.lm.aci.engine.base.api.util.exception`
+ * **Error**:
+
+ `HAOperationException`
+
+ `Calling '/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r' | /usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r`
+ * **Solution**: Test if SAPACEXT can be executed as *\<hanasid\>adm*. (See the sketch after this list.)
+
+* An error occurred when full copy was not enabled in the storage step.
+ * **Error**:
+
+ `An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and field targetStorageSystemId`
+ * **Solution**: Ignore warnings in the step and try again. This issue was fixed in a support package/patch of SAP LaMa.
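For the SAPACEXT test referenced in the first solution in this list, a minimal sketch (reusing the *HN1* example values from the error message) looks like the following:

```bash
# Illustrative: run the same call as <hanasid>adm (here hn1adm) to confirm that sapacext works
su - hn1adm -c "/usr/sap/hostctrl/exe/sapacext -a ShowHanaBackups -m HN1 -f 50 -h hn1-db -o level=0\;status=5\;port=35013 pf=/usr/sap/hostctrl/exe/host_profile -R -T dev_lvminfo -u SYSTEM -p hook -r"
```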
+
+### Errors and warnings during relocation
+
+* The path */usr/sap/AH1* is not allowed for NFS re-exports.
+ * **Solution**: Add ASCS exports to the ASCS Host Agent profile. See SAP Note [2628497].
+
+* A `Function not implemented` error occurs when relocating ASCS.
+ * **Command output**:
+
+ `exportfs: host:/usr/sap/AX1: Function not implemented`
+ * **Solution**: Make sure that the NFS server service is enabled on the target virtual machine for relocation.
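On a systemd-based distribution (for example, SLES or RHEL), you might enable and verify the service as in the following sketch. The unit name `nfs-server` is the common default and is an assumption here.

```bash
# Illustrative: enable the NFS server on the relocation target and confirm that it's running
sudo systemctl enable --now nfs-server
systemctl status nfs-server
```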
+
+### Errors and warnings during application server installation
+
+* An error occurred in executing the SAPinst `getProfileDir` step.
+ * **Error**:
+
+ `Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_readProfileDir|ind|ind|ind|ind|readProfile|0|getProfileDir' reported an error: Node \\\as1-ascs\sapmnt\AS1\SYS\profile does not exist. Start SAPinst in interactive mode to solve this problem`
+ * **Solution**: Make sure that SWPM is running with a user who has access to the profile. You can configure this user in the Application Server Installation wizard.
+
+* An error occurred in executing the SAPinst `askUnicode` step.
+ * **Error**:
+
+ `Last error reported by the step: Caught ESAPinstException in module call: Validator of step '|NW_DI|ind|ind|ind|ind|0|0|NW_GetSidFromProfiles|ind|ind|ind|ind|getSid|0|NW_getUnicode|ind|ind|ind|ind|unicode|0|askUnicode' reported an error: Start SAPinst in interactive mode to solve this problem`
+ * **Solution**: If you use a recent SAP kernel, SWPM can't determine whether the system is a Unicode system anymore by using the message server of ASCS. See SAP Note [2445033].
+
+ Until this issue is fixed in a new support package/patch of SAP LaMa, work around it by setting the profile parameter `OS_UNICODE=uc` in the default profile of your SAP system.
+
+* An error occurred in executing the SAPinst `dCheckGivenServer` step.
+ * **Error**:
+
+ `Last error reported by the step: Installation was canceled by user.`
+ * **Solution**: Make sure that SWPM is running with a user who has access to the profile. You can configure this user in the Application Server Installation wizard.
+
+* An error occurred in executing the SAPinst `checkClient` step.
+ * **Error**:
+
+ `Last error reported by the step: Installation was canceled by user.`
+ * **Solution**: Make sure that the Microsoft ODBC driver for SQL Server is installed on the virtual machine on which you want to install the application server.
+
+* An error occurred in executing the SAPinst `copyScripts` step.
+ * **Error**:
+
+ `Last error reported by the step: System call failed. DETAILS: Error 13 (0x0000000d) (Permission denied) in execution of system call 'fopenU' with parameter (\\\as1-ascs/sapmnt/AS1/SYS/exe/uc/NTAMD64/strdbs.cmd, w), line (494) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/filesystem/syxxcfstrm2.cpp), stack trace:
+ CThrThread.cpp: 85: CThrThread::threadFunction()
+ CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
+ CSiStepExecute.cpp: 913: CSiStepExecute::execute()
+ EJSController.cpp: 179: EJSControllerImpl::executeScript()
+ JSExtension.hpp: 1136: CallFunctionBase::call()
+ iaxxcfile.cpp: 183: iastring CIaOsFileConnect::callMemberFunction(iastring const& name, args_t const& args)
+ iaxxcfile.cpp: 1849: iastring CIaOsFileConnect::newFileStream(args_t const& _args)
+ iaxxbfile.cpp: 773: CIaOsFile::newFileStream_impl(4)
+ syxxcfile.cpp: 233: CSyFileImpl::openStream(ISyFile::eFileOpenMode)
+ syxxcfstrm.cpp: 29: CSyFileStreamImpl::CSyFileStreamImpl(CSyFileStream*,iastring,ISyFile::eFileOpenMode)
+ syxxcfstrm.cpp: 265: CSyFileStreamImpl::open()
+ syxxcfstrm2.cpp: 58: CSyFileStream2Impl::CSyFileStream2Impl(const CSyPath & \\\aw1-ascs/sapmnt/AW1/SYS/exe/uc/NTAMD64/strdbs.cmd, 0x4)
+ syxxcfstrm2.cpp: 456: CSyFileStream2Impl::open()`
+ * **Solution**: Make sure that SWPM is running with a user who has access to the profile. You can configure this user in the Application Server Installation wizard.
+
+* An error occurred in executing the SAPinst `askPasswords` step.
+ * **Error**:
+
+ `Last error reported by the step: System call failed. DETAILS: Error 5 (0x00000005) (Access is denied.) in execution of system call 'NetValidatePasswordPolicy' with parameter (...), line (359) in file (\bas/bas/749_REL/bc_749_REL/src/ins/SAPINST/impl/src/syslib/account/synxcaccmg.cpp), stack trace:
+ CThrThread.cpp: 85: CThrThread::threadFunction()
+ CSiServiceSet.cpp: 63: CSiServiceSet::executeService()
+ CSiStepExecute.cpp: 913: CSiStepExecute::execute()
+ EJSController.cpp: 179: EJSControllerImpl::executeScript()
+ JSExtension.hpp: 1136: CallFunctionBase::call()
+ CSiStepExecute.cpp: 764: CSiStepExecute::invokeDialog()
+ DarkModeGuiEngine.cpp: 56: DarkModeGuiEngine::showDialogCalledByJs()
+ DarkModeDialog.cpp: 85: DarkModeDialog::submit()
+ EJSController.cpp: 179: EJSControllerImpl::executeScript()
+ JSExtension.hpp: 1136: CallFunctionBase::call()
+ iaxxcaccount.cpp: 107: iastring CIaOsAccountConnect::callMemberFunction(iastring const& name, args_t const& args)
+ iaxxcaccount.cpp: 1186: iastring CIaOsAccountConnect::validatePasswordPolicy(args_t const& _args)
+ iaxxbaccount.cpp: 430: CIaOsAccount::validatePasswordPolicy_impl()
+ synxcaccmg.cpp: 297: ISyAccountMgt::PasswordValidationMessage CSyAccountMgtImpl::validatePasswordPolicy(saponazure,*****) const`
+ * **Solution**: Add a host rule in the isolation step to allow communication from the VM to the domain controller.
## Next steps
+
* [SAP HANA on Azure operations guide][hana-ops-guide]
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Previously updated : 10/14/2022 Last updated : 04/18/2023 # Configure relevance scoring
Configuration changes are scoped to individual indexes, which means you can adju
## Default scoring algorithm
-Depending on the age of your search service, Azure Cognitive Search supports two [similarity scoring algorithms](index-similarity-and-scoring.md) for assigning relevance to results in a full text search query:
+Depending on the age of your search service, Azure Cognitive Search supports two [similarity scoring algorithms](index-similarity-and-scoring.md) for a full text search query:
-+ An *Okapi BM25* algorithm, used in all search services created after July 15, 2020
-+ A *classic similarity* algorithm, used by all search services created before July 15, 2020
++ Okapi BM25 algorithm (after July 15, 2020)
++ Classic similarity algorithm (before July 15, 2020)

BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the only scoring algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Previously updated : 10/14/2022 Last updated : 04/18/2023 # Relevance and scoring in Azure Cognitive Search
If you want to break the tie among repeating scores, you can add an **$orderby**
## Scoring algorithms in Search
-Azure Cognitive Search provides the `BM25Similarity` ranking algorithm. On older search services, you might be using `ClassicSimilarity`.
+Azure Cognitive Search provides the following scoring algorithms:
+
+| Algorithm | Usage | Range |
+|--|-|-|
+| BM25Similarity | Fixed algorithm on all search services created after July 2020. You can configure this algorithm, but you can't switch to an older one (classic). | Unbounded. |
+| ClassicSimilarity | Present on older search services. You can [opt in to BM25](index-ranking-similarity.md) and choose an algorithm on a per-index basis. | 0 up to (but not including) 1.00 |
Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
The following video segment fast-forwards to an explanation of the generally ava
> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=322&end=643]
+## Score variation
+
+Search scores convey a general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur.
+
+| Cause | Description |
+|--|-|
+| Data volatility | Index content varies as you add, modify, or delete documents. Term frequencies will change as index updates are processed over time, affecting the search scores of matching documents. |
+| Multiple replicas | For services using multiple replicas, queries are issued against each replica in parallel. The index statistics used to calculate a search score are calculated on a per-replica basis, with results merged and ordered in the query response. Replicas are mostly mirrors of each other, but statistics can differ due to small differences in state. For example, one replica might have deleted documents contributing to their statistics, which were merged out of other replicas. Typically, differences in per-replica statistics are more noticeable in smaller indexes. For more information about this condition, see [Concepts: search units, replicas, partitions, shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards) in the capacity planning documentation. |
+| Identical scores | If multiple documents have the same score, any one of them might appear first. |
+
<a name="scoring-statistics"></a>

## Scoring statistics and sticky sessions
search Search Indexer How To Access Private Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-how-to-access-private-sql.md
Previously updated : 04/12/2023 Last updated : 04/18/2023 # Create a shared private link for a SQL Managed Instance from Azure Cognitive Search
-This article explains how to configure an outbound indexer connection in Azure Cognitive Search to a SQL Managed Instance over a private endpoint.
+This article explains how to configure an indexer in Azure Cognitive Search for a private connection to a SQL Managed Instance that runs within a virtual network.
On a private connection to a SQL Managed Instance, the fully qualified domain name (FQDN) of the instance must include the [DNS Zone](/azure/azure-sql/managed-instance/connectivity-architecture-overview#virtual-cluster-connectivity-architecture). Currently, only the Azure Cognitive Search Management REST API provides a `resourceRegion` parameter for accepting the DNS zone specification.
-Although you can call the Management REST API directly, it's easier to use the Azure CLI `az rest` module to send Management REST API calls from a command line.
+Although you can call the Management REST API directly, it's easier to use the Azure CLI `az rest` module to send Management REST API calls from a command line. This article uses the Azure CLI with REST to set up the private link.
> [!NOTE]
-> This article relies on Azure portal for obtaining properties and confirming steps. However, when creating the shared private link for SQL Managed Instance, be sure to use the REST API. Although the Networking tab lists `Microsoft.Sql/managedInstances` as an option, the portal doesn't currently support the extended URL format used by SQL Managed Instance.
+> This article refers to Azure portal for obtaining properties and confirming steps. However, when creating the shared private link for SQL Managed Instance, make sure you're using the REST API. Although the Networking tab lists `Microsoft.Sql/managedInstances` as an option, the portal doesn't currently support the extended URL format used by SQL Managed Instance.
## Prerequisites
Although you can call the Management REST API directly, it's easier to use the A
+ Azure Cognitive Search, Basic or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, use Standard 2 (S2) or higher. See [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits) for details.
-+ Azure SQL Managed Instance, configured to run in a virtual network, with a private endpoint created through Azure Private Link.
++ Azure SQL Managed Instance, configured to run in a virtual network.

+ You should have a minimum of Contributor permissions on both Azure Cognitive Search and SQL Managed Instance.
-## 1 - Private endpoint verification
-
-Check whether the managed instance has a private endpoint.
-
-1. [Sign in to Azure portal](https://portal.azure.com/).
-
-1. Type "private link" in the top search bar, and then select **Private Link** to open the Private Link Center.
-
-1. Select **Private endpoints** to view existing endpoints. You should see your SQL Managed Instance in this list.
+> [!NOTE]
+> Azure Private Link is used internally, at no charge, to set up the shared private link.
-## 2 - Retrieve connection information
+## 1 - Retrieve connection information
Retrieve the FQDN of the managed instance, including the DNS zone. The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the DNS zone is `a1b22c333d44`.
Retrieve the FQDN of the managed instance, including the DNS zone. The DNS zone
For more information about connection properties, see [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart?view=azuresql#retrieve-connection-details-to-sql-managed-instance&preserve-view=true).
-## 3 - Create the body of the request
+## 2 - Create the body of the request
1. Using a text editor, create the JSON for the shared private link.
For more information about connection properties, see [Create an Azure SQL Manag
1. In the Azure CLI, type `dir` to note the current location of the file.
-## 4 - Create a shared private link
+## 3 - Create a shared private link
1. From the command line, sign into Azure using `az login`.
For more information about connection properties, see [Create an Azure SQL Manag
1. Call the `az rest` command to use the [Management REST API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update) of Azure Cognitive Search.
- Because shared private link support for SQL managed instances is still in preview, you need a preview version of the REST API. You can use either `2021-04-01-preview` or `2020-08-01-preview`.
+ Because shared private link support for SQL managed instances is still in preview, you need a preview version of the REST API. Use `2021-04-01-preview` for this step.
```azurecli az rest --method put --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version=2021-04-01-preview --body @create-pe.json
For more information about connection properties, see [Create an Azure SQL Manag
When you complete these steps, you should have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
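If you prefer the command line over the portal for watching the state, the following sketch lists the shared private links on the search service through the same Management REST API. The placeholder names follow the ones used earlier in this article; the list operation and API version are assumptions based on the PUT call above.

```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search-service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources?api-version=2021-04-01-preview
```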
-## 5 - Approve the private endpoint connection
+## 4 - Approve the private endpoint connection
On the SQL Managed Instance side, the resource owner must approve the private connection request you created.
On the SQL Managed Instance side, the resource owner must approve the private co
After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
-## 6 - Check shared private link status
+## 5 - Check shared private link status
On the Azure Cognitive Search side, you can confirm request approval by revisiting the Shared Private Access tab of the search service **Networking** page. Connection state should be approved. ![Screenshot of the Azure portal, showing an "Approved" shared private link resource.](media\search-indexer-howto-secure-access\new-shared-private-link-resource-approved.png)
-## 7 - Configure the indexer to run in the private environment
+## 6 - Configure the indexer to run in the private environment
You can now configure an indexer and its data source to use an outbound private connection to your managed instance.
This article assumes Postman or equivalent tool, and uses the REST APIs to make
``` > [!NOTE]
- > If you're familiar with data source definitions in Cognitive Search, you'll notice that data source properties don't vary when using a shared private link. That's because the private connection is detected and handled internally.
+ > If you're familiar with data source definitions in Cognitive Search, you'll notice that data source properties don't vary when using a shared private link. That's because Search will always use a shared private link on the connection if one exists.
1. [Create the indexer definition](search-howto-create-indexers.md), setting the indexer execution environment to "private".
You can monitor the status of the indexer in Azure portal or by using the [Index
You can use [**Search explorer**](search-explorer.md) in Azure portal to check the contents of the index.
-## 8 - Test the shared private link
+## 7 - Test the shared private link
If you ran the indexer in the previous step and successfully indexed content from your managed instance, then the test was successful. However, if the indexer fails or there's no content in the index, you can modify your objects and repeat testing by choosing any client that can invoke an outbound request from an indexer.
An easy choice is [running an indexer](search-howto-run-reset-indexers.md) in Az
Here are some reminders for testing:
-+ If you use Postman or another web testing tool, use the [Management REST API](/rest/api/searchmanagement/) and a [preview API version](/rest/api/searchmanagement/management-api-versions) to create the shared private link. Use the [Search REST API](/rest/api/searchservice/) and a [stable API version](/rest/api/searchservice/search-service-api-versions) to create and invoke indexers and data sources.
++ If you use Postman or another web testing tool, use the [Management REST API](/rest/api/searchmanagement/) and the [2021-04-01-Preview API version](/rest/api/searchmanagement/management-api-versions) to create the shared private link. Use the [Search REST API](/rest/api/searchservice/) and a [stable API version](/rest/api/searchservice/search-service-api-versions) to create and invoke indexers and data sources.

+ You can use the Import data wizard to create an indexer, data source, and index. However, the generated indexer won't have the correct execution environment setting.
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Previously updated : 02/22/2023 Last updated : 04/18/2023
-# Make outbound connections through a private endpoint
+# Make outbound connections through a private link
-If you have an Azure PaaS resource that has a private connection enabled through [Azure Private Link](../private-link/private-link-overview.md), you'll need to create a *shared private link* to reach those resources from Azure Cognitive Search. This article walks you through the steps for creating, testing, and managing a private link.
+This article explains how to configure private, outbound calls from Azure Cognitive Search to Azure PaaS resources that run within a virtual network.
-If you're setting up a private connection to a SQL Managed Instance, see [this article](search-indexer-how-to-access-private-sql.md) instead.
+Setting up a private connection allows Azure Cognitive Search to connect to Azure PaaS through a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary.
+
+> [!NOTE]
+> If you're setting up a private indexer connection to a SQL Managed Instance, see [this article](search-indexer-how-to-access-private-sql.md) instead.
## When to use a shared private link
Cognitive Search makes outbound calls to other Azure PaaS resources in the follo
+ Encryption key requests to Azure Key Vault + Custom skill requests to Azure Functions or similar resource
-For those service-to-service communication scenarios, Search typically sends a request over a public internet connection. However, if your data, key vault, or function is accessed through a [private endpoint](../private-link/private-endpoint-overview.md), then your search service needs a way to reach that endpoint. The mechanism by which a search service connects to a private endpoint is called a *shared private link*.
+In service-to-service communications, Search typically sends a request over a public internet connection. However, if your data, key vault, or function should be accessed through a [private endpoint](../private-link/private-endpoint-overview.md), you can create a *shared private link*.
A shared private link is:
When evaluating shared private links for your scenario, remember these constrain
+ An Azure Cognitive Search at the Basic tier or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, the tier must be Standard 2 (S2) or higher. See [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits) for details.
-+ An Azure PaaS resource from the following list of supported resource types, configured to run in a virtual network, with a private endpoint created through Azure Private Link.
++ An Azure PaaS resource from the following list of supported resource types, configured to run in a virtual network.

+ You should have a minimum of Contributor permissions on both Azure Cognitive Search and the Azure PaaS resource for which you're creating the shared private link.
+> [!NOTE]
+> Azure Private Link is used internally, at no charge, to set up the shared private link.
+ <a name="group-ids"></a> ### Supported resource types
You can create a shared private link for the following resources.
<sup>4</sup> See [Create a shared private link for a SQL Managed Instance](search-indexer-how-to-access-private-sql.md) for instructions.
-### Private endpoint verification
-
-1. Sign in to [Azure portal](https://portal.azure.com/).
-
-1. Type "private link" in the top search bar, and then select **Private Link** to open the Private Link Center.
-
-1. Select **Private endpoints** to view existing endpoints. The Azure PaaS resource for which you're creating a shared private link must have a private endpoint in this list. See [Manage private endpoint connections](../private-link/manage-private-endpoint.md?tabs=manage-private-link-powershell#manage-private-endpoint-connections-on-azure-paas-resources) for details.
-
-These Private Link tutorials provide steps for creating a private endpoint for Azure PaaS:
-
-+ [Tutorial: Connect to a storage account using an Azure Private Endpoint](../private-link/tutorial-private-endpoint-storage-portal.md)
-
-+ [Tutorial: Connect to an Azure Cosmos DB account using an Azure Private Endpoint](../private-link/tutorial-private-endpoint-cosmosdb-portal.md)
-
-+ [Tutorial: Connect to a web app using an Azure Private Endpoint](../private-link/tutorial-private-endpoint-webapp-portal.md)
- ## 1 - Create a shared private link Use the Azure portal, Management REST API, the Azure CLI, or Azure PowerShell to create a shared private link.
Here are a few tips:
+ Give the private link a meaningful name. In the Azure PaaS resource, a shared private link appears alongside other private endpoints. A name like "shared-private-link-for-search" can remind you how it's used.
-+ Don't skip the [private link verification](#private-endpoint-verification) step. It's possible to create a shared private link for an Azure PaaS resource that doesn't have a private endpoint. The link won't work if the resource isn't registered.
-
-When you complete these steps, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
+When you complete the steps in this section, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
### [**Azure portal**](#tab/portal-create)
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Previously updated : 11/02/2022 Last updated : 04/18/2023 # How to work with search results in Azure Cognitive Search
Parameters on the query determine:
Results are tabular, composed of fields of either all "retrievable" fields, or limited to just those fields specified in the **`$select`** parameters. Rows are the matching documents.
-While a search document might consist of a large number of fields, typically only a few are needed to represent each document in the result set. On a query request, append `$select=<field list>` to specify which fields include in the response. A field must be attributed as "retrievable" in the index to be included in a result.
+You can choose which fields are in search results. While a search document might have a large number of fields, typically only a few are needed to represent each document in results. On a query request, append `$select=<field list>` to specify which "retrievable" fields should appear in the response.
-Fields that work best include those that contrast and differentiate among documents, providing sufficient information to invite a click-through response on the part of the user. On an e-commerce site, it might be a product name, description, brand, color, size, price, and rating. For the built-in hotels-sample index, it might be the "select" fields in the following example:
+Pick fields that offer contrast and differentiation among documents, providing sufficient information to invite a click-through response on the part of the user. On an e-commerce site, it might be a product name, description, brand, color, size, price, and rating. For the built-in hotels-sample index, it might be the "select" fields in the following example:
```http POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
``` > [!NOTE]
-> If want to include image files in a result, such as a product photo or logo, store them outside of Azure Cognitive Search, but include a field in your index to reference the image URL in the search document. Sample indexes that support images in the results include the **realestate-sample-us** demo (a built-in sample dataset that you can build easily in the Import Data wizard), and the [New York City Jobs demo app](https://aka.ms/azjobsdemo).
+> For images in results, such as a product photo or logo, store them outside of Azure Cognitive Search, but add a field in your index to reference the image URL in the search document. Sample indexes that demonstrate images in the results include the **realestate-sample-us** demo (a built-in sample dataset that you can build easily in the Import Data wizard), and the [New York City Jobs demo app](https://aka.ms/azjobsdemo).
### Tips for unexpected results
Count won't be affected by routine maintenance or other workloads on the search
## Paging results
-By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search. Otherwise, the top 50 are an arbitrary order for exact match queries (where "@searchScore=1.0").
+By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search. Otherwise, the top 50 are returned in an arbitrary order for exact match queries (where a uniform "@search.score=1.0" indicates arbitrary ranking).
To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the query request. The following list explains the logic.
Notice that document 2 is fetched twice. This is because the new document 5 has
## Ordering results
-In a full text search query, results can be ranked by a search score, a semantic reranker score (if using [semantic search](semantic-search-overview.md)), or by an **`$orderby`** expression in the query request that specifies an explicit sort order.
+In a full text search query, results can be ranked by:
-Sorting methodologies aren't designed to be used together. For example, if you're sorting with **`$orderby`** for primary sorting, you can't apply a secondary sort based on search score (because the search score will be uniform).
++ a search score
++ a semantic reranker score
++ a sort order on a "sortable" field
-### Ordering by search score
+You can also boost any matches found in specific fields by adding a scoring profile.
-For full text search queries, results are automatically ranked by a search score, calculated based on term frequency and proximity in a document (derived from [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), with higher scores going to documents having more or stronger matches on a search term.
+### Order by search score
-The "@search.score" range is 0 up to (but not including) 1.00. A "@search.score" equal to 1.00 indicates an unscored or unranked result set, where the 1.0 score is uniform across all results. Unscored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over unscored results, an **`$orderby`** expression will help you achieve that objective.
+For full text search queries, results are automatically [ranked by a search score](index-similarity-and-scoring.md), calculated based on term frequency and proximity in a document (derived from [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), with higher scores going to documents having more or stronger matches on a search term.
-Search scores convey general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur.
+The "@search.score" range is either unbounded, or 0 up to (but not including) 1.00 on older services.
-| Cause | Description |
-|--|-|
-| Data volatility | Index content varies as you add, modify, or delete documents. Term frequencies will change as index updates are processed over time, affecting the search scores of matching documents. |
-| Multiple replicas | For services using multiple replicas, queries are issued against each replica in parallel. The index statistics used to calculate a search score are calculated on a per-replica basis, with results merged and ordered in the query response. Replicas are mostly mirrors of each other, but statistics can differ due to small differences in state. For example, one replica might have deleted documents contributing to their statistics, which were merged out of other replicas. Typically, differences in per-replica statistics are more noticeable in smaller indexes. For more information about this condition, see [Concepts: search units, replicas, partitions, shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards) in the capacity planning documentation. |
-| Identical scores | If multiple documents have the same score, any one of them might appear first. |
+For either algorithm, a "@search.score" equal to 1.00 indicates an unscored or unranked result set, where the 1.0 score is uniform across all results. Unscored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over unscored results, consider an **`$orderby`** expression to achieve that objective.
-### Ordering by the semantic reranker
+### Order by the semantic reranker
If you're using [semantic search](semantic-search-overview.md), the "@search.rerankerScore" determines the sort order of your results. The "@search.rerankerScore" range is 1 to 4.00, where a higher score indicates a stronger semantic match.
-### Ordering with $orderby
+### Order with $orderby
-If consistent ordering is an application requirement, you can explicitly define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as "sortable" can be used to order results.
+If consistent ordering is an application requirement, you can define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as "sortable" can be used to order results.
Fields commonly used in an **`$orderby`** include rating, date, and location. Ordering by location requires that the expression calls the [**`geo.distance()` function**](search-query-odata-geo-spatial-functions.md?#order-by-examples), in addition to the field name.
String fields (Edm.String, Edm.ComplexType subfields) are sorted in either [ASCI
+ Strings that lead with diacritics appear last (Äpfel, Öffnen, Üben)
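To make the **`$orderby`** usage above concrete, here's a minimal sketch that sorts by a hypothetical sortable `Rating` field and then by distance from a reference point; the field names, coordinates, and service details are assumptions, not values from this article:

```bash
# Sketch only: Rating (sortable) and Location (Edm.GeographyPoint) are hypothetical index fields.
curl -X POST "https://<your-service>.search.windows.net/indexes/<your-index>/docs/search?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-query-key>" \
  -d "{
        \"search\": \"*\",
        \"orderby\": \"Rating desc, geo.distance(Location, geography'POINT(-122.131577 47.678581)') asc\"
      }"
```

Because the search text is `*` (an unscored query), the explicit sort order fully determines how results are returned.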
-### Use a scoring profile to influence relevance
+### Boost relevance using a scoring profile
Another approach that promotes order consistency is using a [custom scoring profile](index-add-scoring-profiles.md). Scoring profiles give you more control over the ranking of items in search results, with the ability to boost matches found in specific fields. The extra scoring logic can help override minor differences among replicas because the search scores for each document are farther apart. We recommend the [ranking algorithm](index-ranking-similarity.md) for this approach.
search Search Query Odata Search Score Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-search-score-function.md
Previously updated : 09/16/2021 Last updated : 04/18/2023 translation.priority.mt: - "de-de" - "es-es"
translation.priority.mt:
# OData `search.score` function in Azure Cognitive Search
-When you send a query to Azure Cognitive Search without the [**$orderby** parameter](search-query-odata-orderby.md), the results that come back will be sorted in descending order by relevance score. Even when you do use **$orderby**, the relevance score will be used to break ties by default. However, sometimes it is useful to use the relevance score as an initial sort criteria, and some other criteria as the tie-breaker. The `search.score` function allows you to do this.
+When you send a query to Azure Cognitive Search without the [**$orderby** parameter](search-query-odata-orderby.md), the results that come back are sorted in descending order by relevance score. Even when you do use **$orderby**, the relevance score is used to break ties by default. However, sometimes it's useful to use the relevance score as the initial sort criterion and some other criterion as the tie-breaker. The example in this article demonstrates using the `search.score` function for sorting.
+
+> [!NOTE]
+> The relevance score is computed by the similarity ranking algorithm, and the range varies depending on which algorithm you use. For more information, see [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md).
## Syntax
-The syntax for `search.score` in **$orderby** is `search.score()`. The function `search.score` does not take any parameters. It can be used with the `asc` or `desc` sort-order specifier, just like any other clause in the **$orderby** parameter. It can appear anywhere in the list of sort criteria.
+The syntax for `search.score` in **$orderby** is `search.score()`. The function `search.score` doesn't take any parameters. It can be used with the `asc` or `desc` sort-order specifier, just like any other clause in the **$orderby** parameter. It can appear anywhere in the list of sort criteria.
## Example
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| Azure SQL Database for MariaDB | Yes | - | - |
| Azure SQL Database for MySQL | Yes | Yes | - |
| Azure SQL Database for PostgreSQL | Yes | Yes | - |
-| Azure Synapse Analytics | Yes | Yes, RSA 3072-bit, including Managed HSM | - |
+| Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW) only) | Yes | Yes, RSA 3072-bit, including Managed HSM | - |
| SQL Server Stretch Database | Yes | Yes, RSA 3072-bit | Yes |
| Table Storage | Yes | Yes | Yes |
| Azure Cosmos DB | Yes ([learn more](../../cosmos-db/database-security.md?tabs=sql-api)) | Yes, including Managed HSM ([learn more](../../cosmos-db/how-to-setup-cmk.md) and [learn more](../../cosmos-db/how-to-setup-customer-managed-keys-mhsm.md)) | - |
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
na Previously updated : 01/16/2023 Last updated : 04/18/2023
The best practices are based on a consensus of opinion, and they work with curre
## Define and deploy strong operational security practices Azure operational security refers to the services, controls, and features available to users for protecting their data, applications, and other assets in Azure. Azure operational security is built on a framework that incorporates the knowledge gained through capabilities that are unique to Microsoft, including the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl), the [Microsoft Security Response Center](https://www.microsoft.com/msrc?rtc=1) program, and deep awareness of the cybersecurity threat landscape.
+## Enforce multi-factor verification for users
+
+We recommend that you require two-step verification for all of your users. This includes administrators and others in your organization who can have a significant impact if their account is compromised (for example, financial officers).
+
+There are multiple options for requiring two-step verification. The best option for you depends on your goals, the Azure AD edition you're running, and your licensing program. See [How to require two-step verification for a user](../../active-directory/authentication/howto-mfa-userstates.md) to determine the best option for you. See the [Azure AD](https://azure.microsoft.com/pricing/details/active-directory/) and [Azure AD Multi-Factor Authentication](https://azure.microsoft.com/pricing/details/multi-factor-authentication/) pricing pages for more information about licenses and pricing.
+
+Following are options and benefits for enabling two-step verification:
+
+**Option 1**: Enable MFA for all users and login methods with Azure AD Security Defaults
+**Benefit**: This option enables you to easily and quickly enforce MFA for all users in your environment with a stringent policy to:
+
+* Challenge administrative accounts and administrative logon mechanisms
+* Require MFA challenge via Microsoft Authenticator for all users
+* Restrict legacy authentication protocols.
+
+This method is available to all licensing tiers but can't be combined with existing Conditional Access policies. You can find more information in [Azure AD Security Defaults](../../active-directory/fundamentals/concept-fundamentals-security-defaults.md).
+
+**Option 2**: [Enable Multi-Factor Authentication by changing user state](../../active-directory/authentication/howto-mfa-userstates.md).
+**Benefit**: This is the traditional method for requiring two-step verification. It works with both [Azure AD Multi-Factor Authentication in the cloud and Azure AD Multi-Factor Authentication Server](../../active-directory/authentication/concept-mfa-howitworks.md). Using this method requires users to perform two-step verification every time they sign in and overrides Conditional Access policies.
+
+To determine where Multi-Factor Authentication needs to be enabled, see [Which version of Azure AD MFA is right for my organization?](../../active-directory/authentication/concept-mfa-howitworks.md).
+
+**Option 3**: [Enable Multi-Factor Authentication with Conditional Access policy](../../active-directory/authentication/howto-mfa-getstarted.md).
+**Benefit**: This option allows you to prompt for two-step verification under specific conditions by using [Conditional Access](../../active-directory/conditional-access/concept-conditional-access-policy-common.md). Specific conditions can be user sign-in from different locations, untrusted devices, or applications that you consider risky. Defining specific conditions where you require two-step verification enables you to avoid constant prompting for your users, which can be an unpleasant user experience.
+
+This is the most flexible way to enable two-step verification for your users. Enabling a Conditional Access policy works only for Azure AD Multi-Factor Authentication in the cloud and is a premium feature of Azure AD. You can find more information on this method in [Deploy cloud-based Azure AD Multi-Factor Authentication](../../active-directory/authentication/howto-mfa-getstarted.md).
+
+**Option 4**: Enable Multi-Factor Authentication with Conditional Access policies by evaluating [Risk-based Conditional Access policies](../../active-directory/conditional-access/howto-conditional-access-policy-risk.md).
+**Benefit**: This option enables you to:
+
+* Detect potential vulnerabilities that affect your organization's identities.
+* Configure automated responses to detected suspicious actions that are related to your organization's identities.
+* Investigate suspicious incidents and take appropriate action to resolve them.
+
+This method uses the Azure AD Identity Protection risk evaluation to determine if two-step verification is required based on user and sign-in risk for all cloud applications. This method requires Azure Active Directory P2 licensing. You can find more information on this method in [Azure Active Directory Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md).
+
+> [!Note]
+> Option 2, enabling Multi-Factor Authentication by changing the user state, overrides Conditional Access policies. Because options 3 and 4 use Conditional Access policies, you cannot use option 2 with them.
+
+Organizations that don't add extra layers of identity protection, such as two-step verification, are more susceptible to credential theft attacks. A credential theft attack can lead to data compromise.
+ ## Manage and monitor user passwords The following table lists some best practices related to managing user passwords:
sentinel Add Advanced Conditions To Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-advanced-conditions-to-automation-rules.md
# Add advanced conditions to Microsoft Sentinel automation rules
-> [!IMPORTANT]
->
-> The advanced conditions capability for automation rules is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- This article explains how to add advanced "Or" conditions to automation rules in Microsoft Sentinel, for more effective triage of incidents. Add "Or" conditions in the form of *condition groups* in the Conditions section of your automation rule.
Let's create a rule that will change the severity of an incoming incident from w
In this first example, we'll create a simple condition group: If either condition A **or** condition B is true, the rule will run and the incident's severity will be set to *High*.
-1. Select the **+ Add** expander and choose **Condition group (Or) (Preview)** from the drop-down list.
+1. Select the **+ Add** expander and choose **Condition group (Or)** from the drop-down list.
:::image type="content" source="media/add-advanced-conditions-to-automation-rules/add-condition-group.png" alt-text="Screenshot of adding a condition group to an automation rule's condition set.":::
sentinel Deploy Sap Btp Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md
+
+ Title: Deploy Microsoft Sentinel Solution for SAP® BTP
+description: This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP® BTP.
+++ Last updated : 03/30/2023++
+# Deploy Microsoft Sentinel Solution for SAP® BTP
+
+This article describes how to deploy the Microsoft Sentinel Solution for SAP® BTP. The Microsoft Sentinel Solution for SAP® BTP monitors and protects your SAP Business Technology Platform (BTP) system: It collects audits and activity logs from the BTP infrastructure and BTP-based apps, and detects threats, suspicious activities, illegitimate activities, and more. [Read more about the solution](sap-btp-solution-overview.md).
+
+> [!IMPORTANT]
+> The Microsoft Sentinel Solution for SAP® BTP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+### Fill in the sign-up form
+
+To get started, **first [complete the sign-up form](https://forms.microsoft.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR5CavmNiVgxCqhcQeRrOyvxUM0Q3NVdHU1hWNjMzQkM3RVNNNldTR1hYOS4u)** so that we can provision your subscription with access to the preview. We'll send a confirmation email once your subscription is active.
+
+### Additional prerequisites
+
+Before you begin, verify that:
+
+- The Microsoft Sentinel solution is enabled.
+- You have a defined Microsoft Sentinel workspace and have read and write permissions to the workspace.
+- Your organization uses SAP BTP (in a Cloud Foundry environment) to streamline interactions with SAP applications and other business applications.
+- You have an SAP BTP account (which supports BTP accounts in the Cloud Foundry environment). You can also use an [SAP BTP trial account](https://cockpit.hanatrial.ondemand.com/).
+- You have the SAP BTP auditlog-management service and service key (see [Set up the BTP account and solution](#set-up-the-btp-account-and-solution)).
+- You can create an [Azure Function App](../../azure-functions/functions-overview.md) with the `Microsoft.Web/Sites`, `Microsoft.Web/ServerFarms`, `Microsoft.Insights/Components`, and `Microsoft.Storage/StorageAccounts` permissions.
+- You can create [Data Collection Rules/Endpoints](../../azure-monitor/essentials/data-collection-rule-overview.md) with the permissions:
+ - `Microsoft.Insights/DataCollectionEndpoints`, and `Microsoft.Insights/DataCollectionRules`.
+ - Assign the Monitoring Metrics Publisher role to the Azure Function (a CLI sketch follows this list).
+- You have an [Azure Key Vault](../../key-vault/general/overview.md) to hold the SAP BTP client secret.
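The role assignment called out in the prerequisites can be scripted. The following Azure CLI sketch assumes a hypothetical Function App managed identity object ID and data collection rule resource ID; substitute your own values:

```azurecli
# Sketch only: the assignee object ID and the DCR resource ID are placeholders.
az role assignment create \
    --assignee "<function-app-managed-identity-object-id>" \
    --role "Monitoring Metrics Publisher" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
```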
+
+## Set up the BTP account and solution
+
+1. After you log into your BTP account (see the [prerequisites](#prerequisites)), follow these [audit log retrieval steps](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment) on the SAP BTP system.
+1. In the SAP BTP Cockpit, select the **Audit Log Management Service**.
+
+ :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png" alt-text="Screenshot of selecting the BTP Audit Log Management Service." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png":::
+
+1. Create an instance of the Audit Log Management Service in the sub account.
+
+ :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png" alt-text="Screenshot of creating an instance of the BTP subaccount." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png":::
+
+1. Create a service key and record the `url`, `uaa.clientid`, `uaa.clientsecret`, and `uaa.url` values. These values are required to deploy the data connector.
+
+ Here's an example of these field values.
+
+ - **url**: `https://auditlog-management.cfapps.us10.hana.ondemand.com`
+ - **uaa.clientid**: `sb-ac79fee5-8ad0-4f88-be71-d3f9c566e73a!b136532|auditlog-management!b1237`
+ - **uaa.clientsecret**: `682323d2-42a0-45db-a939-74639efde986$gR3x3ohHTB8iyYSKHW0SNIWG4G0tQkkMdBwO7lKhwcQ=`
+ - **uaa.url**: `https://915a0312trial.authentication.us10.hana.ondemand.com`
+
+1. Log into the Azure portal with the [solution preview feature flag](https://portal.azure.com/?feature.loadTemplateSolutions=true).
+1. Navigate to the **Microsoft Sentinel** service.
+1. Select **Content hub**, and in the search bar, search for *BTP*.
+1. Select **Sentinel Solution for SAP BTP**.
+1. Select **Install**.
+
+ For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](../sentinel-solutions-deploy.md).
+
+1. Select **Create**.
+
+ :::image type="content" source="./media/deploy-sap-btp-solution/sap-btp-create-solution.png" alt-text="Screenshot of how to create the Microsoft Sentinel Solution® for SAP BTP." lightbox="./media/deploy-sap-btp-solution/sap-btp-create-solution.png":::
+
+1. Select the resource group and the Sentinel workspace in which you want to deploy the solution.
+1. Select **Next** until you pass validation and select **Create**.
+1. Once the solution deployment is complete, return to your Sentinel workspace and select **Data connectors**.
+1. In the search bar, type *BTP*, and select **SAP BTP (using Azure Function)**.
+1. Select **Open connector page**.
+1. In the connector page, make sure that you meet the required prerequisites and follow the configuration steps. In step 2 of the data connector configuration, specify the parameters you defined in step 4 of this procedure.
+
+ > [!NOTE]
+ > Retrieving audits for the global account doesn't automatically retrieve audits for the subaccount. Follow the connector configuration steps for each of the subaccounts you want to monitor, and also follow these steps for the global account. Review these [account auditing configuration considerations](#account-auditing-configuration-considerations).
+
+1. Complete all configuration steps, including the Function App deployment and the Key Vault access policy configuration.
+1. Make sure that BTP logs are flowing into the Microsoft Sentinel workspace:
+ 1. Log in to your BTP subaccount and run a few activities that generate logs, such as logins, adding users, changing permissions, changing settings, and so on.
+ 1. Allow 20-30 minutes for the logs to start flowing.
+ 1. In the **SAP BTP** connector page, confirm that Microsoft Sentinel receives the BTP data, or query the `SAPBTPAuditLog_CL` table directly (see the query sketch after these steps).
+
+1. Enable the [workbook](sap-btp-security-content.md#sap-btp-workbook) and the [analytics rules](sap-btp-security-content.md#built-in-analytics-rules) provided as part of the solution by following [these guidelines](../sentinel-solutions-deploy.md#analytics-rule).
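If you prefer the command line over the portal for the verification step above, the following sketch queries the `SAPBTPAuditLog_CL` table with the Azure CLI; the workspace GUID is a placeholder for your Log Analytics workspace ID:

```azurecli
# Sketch only: <workspace-guid> is the workspace (customer) ID of your Microsoft Sentinel workspace.
# If the command isn't found, install the extension: az extension add --name log-analytics
az monitor log-analytics query \
    --workspace "<workspace-guid>" \
    --analytics-query "SAPBTPAuditLog_CL | take 10"
```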
+
+## Account auditing configuration considerations
+
+### Global account auditing configuration
+
+When you enable audit log retrieval in the BTP cockpit for the global account, keep the following in mind: if the subaccount for which you want to entitle the Audit Log Management Service is under a directory, you must first entitle the service at the directory level, and only then can you entitle it at the subaccount level.
+
+### Subaccount auditing configuration
+
+To enable auditing for a subaccount, follow the steps in the [SAP subaccounts audit retrieval API documentation](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment).
+
+While that guide explains how to enable audit log retrieval by using the Cloud Foundry CLI, you can also complete the setup through the UI:
+
+1. In your subaccount Service Marketplace, create an instance of the **Audit Log Management Service**.
+1. Create a service key in the new **Audit Log Management Service** instance.
+1. View the Service key and retrieve the required parameters mentioned in step 2 of the configuration instructions in the data connector UI (**url**, **uaa.url**, **uaa.clientid**, **uaa.clientsecret**).
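If you'd rather stay with the Cloud Foundry CLI route that the SAP documentation describes, the following sketch mirrors the three UI steps above. The `default` plan and the instance and key names are assumptions; confirm them against your subaccount's entitlements:

```bash
# Sketch only: plan, instance, and key names are illustrative.
cf create-service auditlog-management default btp-audit-logs-instance
cf create-service-key btp-audit-logs-instance btp-audit-logs-key
# Prints the url, uaa.clientid, uaa.clientsecret, and uaa.url values used by the data connector.
cf service-key btp-audit-logs-instance btp-audit-logs-key
```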
+
+## Next steps
+
+In this article, you learned how to deploy the Microsoft Sentinel Solution for SAP® BTP.
+>
+> - [Learn how to enable the security content](../sentinel-solutions-deploy.md#analytics-rule)
+> - [Review the solution's security content](sap-btp-security-content.md)
sentinel Sap Btp Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-security-content.md
+
+ Title: Microsoft Sentinel Solution for SAP® BTP - security content reference
+description: Learn about the built-in security content provided by the Microsoft Sentinel Solution for SAP® BTP.
+++ Last updated : 03/30/2023++
+# Microsoft Sentinel Solution for SAP® BTP: security content reference
+
+This article details the security content available for the Microsoft Sentinel Solution for SAP® BTP.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel Solution for SAP® BTP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Available security content currently includes a built-in workbook and analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
+
+[Learn more about the solution](sap-btp-solution-overview.md).
+
+## SAP BTP workbook
+
+The BTP Activity Workbook provides a dashboard overview of BTP activity.
++
+The **Overview** tab shows:
+
+- An overview of BTP subaccounts, helping analysts identify the most active accounts and the type of ingested data.
+- Subaccount sign-in activity, helping analysts identify spikes and trends that may be associated with sign-in failures in SAP Business Application Studio (BAS).
+- Timeline of BTP activity and number of BTP security alerts, helping analysts search for any correlation between the two.
+
+The **Identity Management** tab shows a grid of identity management events, such as user and security role changes, in a human-readable format. The search bar lets you quickly find specific changes.
++
+For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Microsoft Sentinel Solution for SAP® BTP](deploy-sap-btp-solution.md).
+
+## Built-in analytics rules
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+| **BTP - Failed access attempts across multiple BAS subaccounts** | Identifies failed Business Application Studio (BAS) access attempts over a predefined number of subaccounts.<br>Default threshold: 3 | | |
+| **BTP - Malware detected in BAS dev space** | Identifies instances of malware detected by the SAP internal malware agent within BAS developer spaces. | | |
+| **BTP - User added to sensitive privileged role collection** | Identifies identity management actions where a user is added to a set of monitored privileged role collections. | | |
+| **BTP - Trust and authorization Identity Provider monitor** | Identifies create, read, update, and delete (CRUD) operations on Identity Provider settings within a subaccount. | | |
+| **BTP - Mass user deletion in a sub account** | Identifies user account deletion activity where the number of deleted users exceeds a predefined threshold.<br>Default threshold: 10 | | |
+
+## Next steps
+
+In this article, you learned about the security content provided with the Microsoft Sentinel Solution for SAP® BTP.
+
+- [Deploy Microsoft Sentinel solution for SAP® BTP](deploy-sap-btp-solution.md)
+- [Microsoft Sentinel Solution for SAP® BTP overview](sap-btp-solution-overview.md)
sentinel Sap Btp Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-solution-overview.md
+
+ Title: Microsoft Sentinel Solution for SAP® BTP overview
+description: This article introduces the Microsoft Sentinel Solution for SAP® BTP.
+++ Last updated : 03/22/2023++
+# Microsoft Sentinel Solution for SAP® BTP overview
+
+This article introduces the Microsoft Sentinel Solution for SAP® BTP. The solution monitors and protects your SAP Business Technology Platform (BTP) system: It collects audits and activity logs from the BTP infrastructure and BTP based apps, and detects threats, suspicious activities, illegitimate activities, and more.
+
+SAP BTP is a cloud-based solution that provides a wide range of tools and services for developers to build, run, and manage applications. One of the key features of SAP BTP is its low-code development capabilities. Low-code development allows developers to create applications quickly and efficiently by using visual drag-and-drop interfaces and prebuilt components, rather than writing code from scratch.
+
+> [!IMPORTANT]
+> The Microsoft Sentinel Solution for SAP® BTP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Why it's important to monitor BTP activity
+
+While low-code development platforms have become increasingly popular among businesses looking to accelerate their application development processes, there are also security risks that organizations must consider. One key concern is the risk of security vulnerabilities introduced by citizen developers, some of whom may lack the security awareness of the traditional professional developer community. To counter these vulnerabilities, it's crucial for organizations to quickly detect and respond to threats on BTP applications.
+
+Beyond the low-code aspect, BTP applications:
+
+- Access sensitive business data, such as customers, opportunities, orders, financial data, and manufacturing processes.
+- Access and integrate with multiple different business applications and data stores.
+- Enable key business processes.
+- Are created by citizen developers who may not be security savvy or aware of cyber threats.
+- Are used by a wide range of users, internal and external.
+
+Therefore, it's important to protect your BTP system against these risks.
+
+## How the solution addresses BTP security risks
+
+With the Microsoft Sentinel Solution for SAP® BTP, you can:
+
+- Gain visibility to activities **on** BTP applications, including creation, modification, permissions change, execution, and more.
+- Gain visibility to activities **in** BTP applications, including who uses the application, which business applications the BTP application accesses, business data Create, Read, Update, Delete (CRUD) activities, and more.
+- Detect suspicious or illegitimate activities. These activities include suspicious logins, illegitimate changes of application settings and user permissions, data exfiltration, bypassing of SOD policies, and more.
+- Investigate and respond to threats originating from the BTP application: Find an application owner, understand relationships between applications, suspend applications or users, and more.
+- Monitor on-premises and SaaS SAP environments.
+
+The solution includes:
+
+- The **SAP BTP** connector, which allows you to connect your BTP subaccounts and global account to Microsoft Sentinel via the [Audit Log service for SAP BTP API](https://help.sap.com/docs/btp/sap-business-technology-platform/security-events-logged-by-cf-services). Learn how to [install the solution and data connector](deploy-sap-btp-solution.md).
+- **[Built-in analytics rules](sap-btp-security-content.md#built-in-analytics-rules)** for identity management and low-code application development scenarios using the Trust and Authorization Provider and Business Application Studio (BAS) event sources in BTP.
+- The **[BTP activity workbook](sap-btp-security-content.md#sap-btp-workbook)**, which provides a dashboard overview of subaccounts and a grid of identity management events.
+
+## Next steps
+
+In this article, you learned about the Microsoft Sentinel solution for SAP® BTP.
+
+> [!div class="nextstepaction"]
+> [Deploy the Microsoft Sentinel Solution for SAP® BTP](deploy-sap-btp-solution.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
| **SAP - (PREVIEW) HANA DB -Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
| **SAP - (PREVIEW) HANA DB -Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
-| **SAP - RFC Execution of a Sensitive Function Module** | Sensitive function models to be used in relevant detections. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
+| **SAP - Unauthorized Remote Execution of a Sensitive Function Module** | Detects unauthorized executions of sensitive FMs by comparing the activity with the user's authorization profile while disregarding recently changed authorizations. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
| **SAP - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
| **SAP - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#persistency) | Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
| **SAP - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
## March 2023 -- [Work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces (Preview)](#work-with-the-microsoft-sentinel-solution-for-sap-applications-across-multiple-workspaces-preview)
+- [Microsoft Sentinel Solution for SAP® BTP (Preview)](#microsoft-sentinel-solution-for-sap-btp-preview)
+- [Work with the Microsoft Sentinel Solution for SAP® applications across multiple workspaces (Preview)](#work-with-the-microsoft-sentinel-solution-for-sap-applications-across-multiple-workspaces-preview)
- [Monitoring the configuration of static SAP security parameters](#monitoring-the-configuration-of-static-sap-security-parameters-preview) - [Stream log data from the Google Cloud Platform into Microsoft Sentinel (Preview)](#stream-log-data-from-the-google-cloud-platform-into-microsoft-sentinel-preview) - [Microsoft Defender Threat Intelligence data connector (Preview)](#microsoft-defender-threat-intelligence-data-connector-preview) - [Microsoft Defender Threat Intelligence solution (Preview)](#microsoft-defender-threat-intelligence-solution-preview) - [Automatically update the SAP data connector agent](#automatically-update-the-sap-data-connector-agent)
+### Microsoft Sentinel Solution for SAP® BTP (Preview)
+
+The Microsoft Sentinel Solution for SAP BTP monitors and protects your SAP Business Technology Platform (BTP) system, by collecting audits and activity logs from the BTP infrastructure and BTP based apps, and detecting threats, suspicious activities, illegitimate activities, and more.
+
+The solution includes the **SAP BTP** connector, [built-in analytics rules](sap/sap-btp-security-content.md#built-in-analytics-rules) for identity management and low-code application development scenarios, and the [BTP activity workbook](sap/sap-btp-security-content.md#sap-btp-workbook), which provides a dashboard overview of subaccounts and a grid of identity management events.
+
+[Learn more about the solution](sap/sap-btp-solution-overview.md).
+ ### Work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces (Preview) You can now [work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces](sap/cross-workspace.md) in different scenarios. This feature allows improved flexibility for managed security service providers (MSSPs) or a global or federated SOC, data residency requirements, organizational hierarchy/IT design, and insufficient role-based access control (RBAC) in a single workspace. One common use case is the need for collaboration between the security operations center (SOC) and SAP teams in your organization. Read about [the scenarios that address this use case](sap/cross-workspace.md).
site-recovery Disaster Recovery For Edge Zone Via Vm Flow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/disaster-recovery-for-edge-zone-via-vm-flow-tutorial.md
description: Learn how to set up disaster recovery for Virtual machines on Azure
Previously updated : 12/14/2022 Last updated : 04/19/2023
To enable replication to a secondary location, follow the below steps:
:::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery.png" alt-text=" Screenshot of Select Disaster Recovery option."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery-expanded.png"::: 1. In **Basics**, select the **Target region** or an Azure Public MEC.
- - Option1: **Public MEC to Region**
+ - Option 1: **Public MEC to Region**
- :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edge-zone-to-region.png" alt-text="Screenshot of Option 1 Edge Zone to Region."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edge-zone-to-region-expanded.png":::
+ :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edge-zone-to-region.png" alt-text="Screenshot of Option 1 Edge Zone to Region."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edge-zone-to-region.png":::
- - Option2: **Public MEC to Public MEC**
+ - Option 2: **Public MEC to Public MEC**
- :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edgezone-to-edgezone.png" alt-text="Screenshot of Option 2 Edge Zone to Edge Zone."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edgezone-to-edgezone-expanded.png":::
+ :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edgezone-to-edgezone.png" alt-text="Screenshot of Option 2 Edge Zone to Edge Zone."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/edgezone-to-edgezone.png":::
>[!Note] >This flow proceeds with Option 1: Public MEC to Region replication.
To enable replication to a secondary location, follow the below steps:
1. In **Advanced settings**, select **Subscription**, **VM resource group**, **Virtual network**, **Availability** and **Proximity placement group** as required. 1. Under **Capacity Reservation Settings**, **Capacity Reservation Groups** is disabled. 1. Under **Storage settings** > **Cache storage account**, select the cache storage account associated with the vault from the dropdown.
- :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage.png" alt-text="Screenshot of cache storage field."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage-expanded.png":::
+ :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage.png" alt-text="Screenshot of cache storage field."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage.png":::
- :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage-2.png" alt-text="Screenshot of cache storage field step 2."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage-2-expanded.png":::
+ :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage-2.png" alt-text="Screenshot of cache storage field step 2."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/cache-storage-2.png":::
1. Select **Next : Review + Start replication**. :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/review.png" alt-text="Screenshot of Review settings tab."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/review-expanded.png":::
site-recovery Disaster Recovery For Edge Zone Vm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/disaster-recovery-for-edge-zone-vm-tutorial.md
description: Learn how to set up disaster recovery for virtual machines on Azure
Previously updated : 12/14/2022 Last updated : 04/18/2023
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
Previously updated : 02/23/2023 Last updated : 04/04/2023 # Migrate from a Run As account to Managed Identities
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
Title: Support for disaster recovery of Hyper-V VMs to Azure with Azure Site Rec
description: Summarizes the supported components and requirements for Hyper-V VM disaster recovery to Azure with Azure Site Recovery Previously updated : 7/14/2020 Last updated : 04/04/2023
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
Title: VMware VM disaster recovery architecture in Azure Site Recovery - Moderni
description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery - Modernized Previously updated : 09/21/2022 Last updated : 04/04/2023
spring-apps Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/access-app-virtual-network.md
Title: "Azure Spring Apps access app in virtual network"
-description: Access app in Azure Spring Apps in a virtual network.
+ Title: Access your application in a private network
+description: Access an app in Azure Spring Apps in a virtual network.
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article explains how to access an endpoint for your application in a private network.
When **Assign Endpoint** on applications in an Azure Spring Apps service instanc
2. In the **Connected devices** search box, enter *kubernetes-internal*.
-3. In the filtered result, find the **Device** connected to the service runtime **Subnet** of the service instance, and copy its **IP Address**. In this sample, the IP Address is *10.1.0.7*.
+3. In the filtered result, find the **Device** connected to the **Service Runtime Subnet** of the service instance, and copy its **IP Address**. In this sample, the IP Address is *10.1.0.7*.
+
+ > [!WARNING]
+ > Be sure that the IP Address belongs to the **Service Runtime subnet** instead of the **Spring Boot microservice apps subnet**. Subnet specifications are provided when you deploy an Azure Spring Apps instance. For more information, see the [Deploy an Azure Spring Apps instance](./how-to-deploy-in-azure-virtual-network.md#deploy-an-azure-spring-apps-instance) section of [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
:::image type="content" source="media/spring-cloud-access-app-vnet/create-dns-record.png" alt-text="Screenshot of the Azure portal showing the Connected devices page for a virtual network, filtered for kubernetes-internal devices, with the IP Address for the service runtime subnet highlighted." lightbox="media/spring-cloud-access-app-vnet/create-dns-record.png":::
When **Assign Endpoint** on applications in an Azure Spring Apps service instanc
Find the IP Address for your Spring Cloud services. Customize the value of your Azure Spring Apps instance name based on your real environment.
- ```azurecli
- SPRING_CLOUD_NAME='spring-cloud-name'
- SERVICE_RUNTIME_RG=`az spring show \
- --resource-group $RESOURCE_GROUP \
- --name $SPRING_CLOUD_NAME \
- --query "properties.networkProfile.serviceRuntimeNetworkResourceGroup" \
- --output tsv`
- IP_ADDRESS=`az network lb frontend-ip list \
- --lb-name kubernetes-internal \
- --resource-group $SERVICE_RUNTIME_RG \
- --query "[0].privateIpAddress" \
- --output tsv`
- ```
+```azurecli
+SPRING_CLOUD_NAME='spring-cloud-name'
+SERVICE_RUNTIME_RG=`az spring show \
+ --resource-group $RESOURCE_GROUP \
+ --name $SPRING_CLOUD_NAME \
+ --query "properties.networkProfile.serviceRuntimeNetworkResourceGroup" \
+ --output tsv`
+IP_ADDRESS=`az network lb frontend-ip list \
+ --lb-name kubernetes-internal \
+ --resource-group $SERVICE_RUNTIME_RG \
+ --query "[0].privateIpAddress" \
+ --output tsv`
+```
Find the IP Address for your Spring Cloud services. Customize the value of your
If you have your own DNS solution for your virtual network, like Active Directory Domain Controller, Infoblox, or another, you need to point the domain `*.private.azuremicroservices.io` to the [IP address](#find-the-ip-for-your-application). Otherwise, you can follow the following instructions to create an **Azure Private DNS Zone** in your subscription to translate/resolve the private fully qualified domain name (FQDN) to its IP address. > [!NOTE]
-> If you are using Azure China, please replace `private.azuremicroservices.io` with `private.microservices.azure.cn` in this article. Learn more about [Check Endpoints in Azure](/azure/china/resources-developer-guide#check-endpoints-in-azure).
+> If you're using Azure China, be sure to replace `private.azuremicroservices.io` with `private.microservices.azure.cn` in this article. Learn more about [Check Endpoints in Azure](/azure/china/resources-developer-guide#check-endpoints-in-azure).
## Create a private DNS zone
To link the private DNS zone to the virtual network, you need to create a virtua
#### [Portal](#tab/azure-portal)
-1. Select the private DNS zone resource created above: *private.azuremicroservices.io*
+1. Select the private DNS zone resource you created previously: *private.azuremicroservices.io*
2. On the left pane, select **Virtual network links**, then select **Add**.
To link the private DNS zone to the virtual network, you need to create a virtua
Link the private DNS zone you created to the virtual network holding your Azure Spring Apps service.
- ```azurecli
- az network private-dns link vnet create \
- --resource-group $RESOURCE_GROUP \
- --name azure-spring-apps-dns-link \
- --zone-name private.azuremicroservices.io \
- --virtual-network $VIRTUAL_NETWORK_NAME \
- --registration-enabled false
- ```
+```azurecli
+az network private-dns link vnet create \
+ --resource-group $RESOURCE_GROUP \
+ --name azure-spring-apps-dns-link \
+ --zone-name private.azuremicroservices.io \
+ --virtual-network $VIRTUAL_NETWORK_NAME \
+ --registration-enabled false
+```
+ ## Create DNS record
To use the private DNS zone to translate/resolve DNS, you must create an "A" typ
#### [Portal](#tab/azure-portal)
-1. Select the private DNS zone resource created above: *private.azuremicroservices.io*.
+1. Select the private DNS zone resource you created previously: *private.azuremicroservices.io*.
1. Select **Record set**.
To use the private DNS zone to translate/resolve DNS, you must create an "A" typ
Use the [IP address](#find-the-ip-for-your-application) to create the A record in your DNS zone.
- ```azurecli
- az network private-dns record-set a add-record \
- --resource-group $RESOURCE_GROUP \
- --zone-name private.azuremicroservices.io \
- --record-set-name '*' \
- --ipv4-address $IP_ADDRESS
- ```
+```azurecli
+az network private-dns record-set a add-record \
+ --resource-group $RESOURCE_GROUP \
+ --zone-name private.azuremicroservices.io \
+ --record-set-name '*' \
+ --ipv4-address $IP_ADDRESS
+```
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
Title: Use Managed identity to connect Azure SQL to Azure Spring Apps app
-description: Set up managed identity to connect Azure SQL to an Azure Spring Apps app.
+ Title: Use Managed identity to connect Azure SQL Database to an app deployed to Azure Spring Apps
+description: Set up managed identity to connect Azure SQL to an app deployed to Azure Spring Apps.
Last updated 09/26/2022
-# Use a managed identity to connect Azure SQL Database to an Azure Spring Apps app
+# Use a managed identity to connect Azure SQL Database to an app deployed to Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-This article shows you how to create a managed identity for an Azure Spring Apps app and use it to access Azure SQL Database.
+This article shows you how to create a managed identity for an app deployed to Azure Spring Apps and use it to access Azure SQL Database.
[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is the intelligent, scalable, relational database service built for the cloud. It's always up to date, with AI-powered and automated features that optimize performance and durability. Serverless compute and Hyperscale storage options automatically scale resources on demand, so you can focus on building new applications without worrying about storage size or resource management. ## Prerequisites
-* Follow the [Spring Data JPA tutorial](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-sql-server) to provision an Azure SQL Database and get it work with a Java app locally
-* Follow the [Azure Spring Apps system-assigned managed identity tutorial](./how-to-enable-system-assigned-managed-identity.md) to provision an Azure Spring Apps app with MI enabled
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
+* Follow the [Spring Data JPA tutorial](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-sql-server) to provision an Azure SQL Database and get it working with a Java app locally.
+* Follow the [Azure Spring Apps system-assigned managed identity tutorial](./how-to-enable-system-assigned-managed-identity.md) to provision an app in Azure Spring Apps with managed identity enabled.
## Connect to Azure SQL Database with a managed identity
-You can connect your application deployed to Azure Spring Apps to an Azure SQL Database with a managed identity by following manual steps or using [Service Connector](../service-connector/overview.md).
+You can connect your application to an Azure SQL Database with a managed identity by following manual steps or using [Service Connector](../service-connector/overview.md).
### [Manual configuration](#tab/manual)
You can connect your application deployed to Azure Spring Apps to an Azure SQL D
Connect to your SQL server and run the following SQL query: ```sql
-CREATE USER [<MIName>] FROM EXTERNAL PROVIDER;
-ALTER ROLE db_datareader ADD MEMBER [<MIName>];
-ALTER ROLE db_datawriter ADD MEMBER [<MIName>];
-ALTER ROLE db_ddladmin ADD MEMBER [<MIName>];
+CREATE USER [<managed-identity-name>] FROM EXTERNAL PROVIDER;
+ALTER ROLE db_datareader ADD MEMBER [<managed-identity-name>];
+ALTER ROLE db_datawriter ADD MEMBER [<managed-identity-name>];
+ALTER ROLE db_ddladmin ADD MEMBER [<managed-identity-name>];
GO ```
-The value of the `<MIName>` placeholder follows the rule `<service-instance-name>/apps/<app-name>`; for example: `myspringcloud/apps/sqldemo`. You can also query the MIName with Azure CLI:
+The value of the `<managed-identity-name>` placeholder follows the rule `<service-instance-name>/apps/<app-name>`; for example: `myspringcloud/apps/sqldemo`. You can also use the following command to query the managed identity name with Azure CLI:
```azurecli az ad sp show --id <identity-object-ID> --query displayName
spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:14
#### [Service Connector](#tab/service-connector)
-Configure your app deployed to Azure Spring to connect to an SQL Database with a system-assigned managed identity using the `az spring connection create` command, as shown in the following example.
+Configure your app deployed to Azure Spring Apps to connect to an Azure SQL Database with a system-assigned managed identity using the `az spring connection create` command, as shown in the following example.
-> [!NOTE]
-> These commands require [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
-
-1. Install the Service Connector passwordless extension for the Azure CLI.
+1. Use the following command to install the Service Connector passwordless extension for the Azure CLI:
```azurecli az extension add --name serviceconnector-passwordless --upgrade ```
-1. Run the `az spring connection create` command, as shown in the following example.
+1. Use the following command to connect to the database:
```azurecli az spring connection create sql \
Configure your app deployed to Azure Spring to connect to an SQL Database with a
--system-identity ```
+1. Use the following command to check the creation result:
+
+ ```azurecli
+ CONNECTION_NAME=$(az spring connection list \
+ --resource-group $SPRING_APP_RESOURCE_GROUP \
+ --service $SPRING_APP_SERVICE_NAME \
+ --app $APP_NAME \
+ --query '[0].name' \
+ --output tsv)
+
+ az spring connection list-configuration \
+ --resource-group $SPRING_APP_RESOURCE_GROUP \
+ --service $SPRING_APP_SERVICE_NAME \
+ --app $APP_NAME \
+ --connection $CONNECTION_NAME
+ ```
+ ## Build and deploy the app to Azure Spring Apps
-Rebuild the app and deploy it to the Azure Spring Apps provisioned in the second bullet point under Prerequisites. Now you have a Spring Boot application, authenticated by a managed identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Apps.
+Rebuild the app and deploy it to the Azure Spring Apps instance you provisioned by following the managed identity tutorial in the prerequisites. You now have a Spring Boot application authenticated by a managed identity that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Apps.
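As a hedged sketch of that build-and-deploy step (the resource group, service instance, app name, and JAR path are placeholders for your own environment):

```azurecli
# Sketch only: build the project, then deploy the resulting JAR to the existing app.
# Requires the Azure CLI spring extension (az extension add --name spring).
mvn clean package -DskipTests
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --artifact-path target/<artifact-name>.jar
```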
## Next steps
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
**This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
Instead of manually configuring your Spring Boot applications, you can automatically bind select Azure services to your applications by using Azure Spring Apps. This article demonstrates how to bind your application to an Azure Cosmos DB database.
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
### [Service Connector](#tab/Service-Connector)
-1. Use the Azure CLI to configure your Spring app to connect to a Cosmos SQL Database with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
+#### Use the Azure CLI
- > [!NOTE]
- > Updating Azure Cosmos DB database settings can take a few minutes to complete.
-
- ```azurecli
- az spring connection create cosmos-sql \
- --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
- --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
- --app $APP_NAME \
- --deployment $DEPLOYMENT_NAME \
- --target-resource-group $COSMOSDB_RESOURCE_GROUP \
- --account $COSMOSDB_ACCOUNT_NAME \
- --database $DATABASE_NAME \
- --system-assigned-identity
- ```
+Use the following command to configure your Spring app to connect to a Cosmos SQL Database with a system-assigned managed identity:
- > [!NOTE]
- > If you're using [Service Connector](../service-connector/overview.md) for the first time, start by running the command `az provider register --namespace Microsoft.ServiceLinker` to register the Service Connector resource provider.
- >
- > If you're using Cosmos Cassandra, use a `--key_space` instead of `--database`.
+> [!NOTE]
+> Updating Azure Cosmos DB database settings can take a few minutes to complete.
+
+```azurecli
+az spring connection create cosmos-sql \
+ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
+ --service $AZURE_SPRING_APPS_SERVICE_INSTANCE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $COSMOSDB_RESOURCE_GROUP \
+ --account $COSMOSDB_ACCOUNT_NAME \
+ --database $DATABASE_NAME \
+ --system-assigned-identity
+```
+
+> [!NOTE]
+> If you're using [Service Connector](../service-connector/overview.md) for the first time, start by running the command `az provider register --namespace Microsoft.ServiceLinker` to register the Service Connector resource provider.
+>
+> If you're using Cosmos Cassandra, use a `--key_space` instead of `--database`.
+
+> [!TIP]
+> Run the command `az spring connection list-support-types --output table` to get a list of supported target services and authentication methods for Azure Spring Apps. If the `az spring` command isn't recognized by the system, check that you have installed the required extension by running `az extension add --name spring`.
- > [!TIP]
- > Run the command `az spring connection list-support-types --output table` to get a list of supported target services and authentication methods for Azure Spring Apps. If the `az spring` command isn't recognized by the system, check that you have installed the required extension by running `az extension add --name spring`.
+#### Use the Azure portal
-1. Alternately, you can use the Azure portal to configure this connection by completing the following steps. The Azure portal provides the same capabilities as the Azure CLI and provides an interactive experience.
+Alternately, you can use the Azure portal to configure this connection by completing the following steps. The Azure portal provides the same capabilities as the Azure CLI and provides an interactive experience.
- 1. Select your Azure Spring Apps instance in the Azure portal and select **Apps** from the navigation menu. Choose the app you want to connect and select **Service Connector** on the navigation menu.
+1. Select your Azure Spring Apps instance in the Azure portal and select **Apps** from the navigation menu. Choose the app you want to connect and select **Service Connector** on the navigation menu.
- 1. Select **Create**.
+1. Select **Create**.
- 1. On the **Basics** tab, for service type, select Cosmos DB, then choose a subscription. For API type, select Core (SQL), choose a Cosmos DB account, and a database. For client type, select Java, then select **Next: Authentication**. If you haven't created your database yet, see [Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal](../cosmos-db/nosql/quickstart-portal.md).
+1. On the **Basics** tab, for service type, select Cosmos DB, then choose a subscription. For API type, select Core (SQL), choose a Cosmos DB account, and a database. For client type, select Java, then select **Next: Authentication**. If you haven't created your database yet, see [Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal](../cosmos-db/nosql/quickstart-portal.md).
- 1. On the **Authentication** tab, choose **Connection string**. Service Connector automatically retrieves the access key from your Cosmos DB account. Select **Next: Networking**.
+1. On the **Authentication** tab, choose **Connection string**. Service Connector automatically retrieves the access key from your Cosmos DB account. Select **Next: Networking**.
- 1. On the **Networking** tab, select **Configure firewall rules to enable access to target service**, then select **Next: Review + Create**.
+1. On the **Networking** tab, select **Configure firewall rules to enable access to target service**, then select **Next: Review + Create**.
- 1. On the **Review + Create** tab, wait for the validation to pass and then select **Create**. The creation can take a few minutes to complete.
+1. On the **Review + Create** tab, wait for the validation to pass and then select **Create**. The creation can take a few minutes to complete.
- 1. Once the connection between your Spring apps and your Cosmos DB database has been generated, you can see it in the Service Connector page and select the unfold button to view the configured connection variables.
+1. Once the connection between your Spring apps and your Cosmos DB database has been generated, you can see it in the Service Connector page and select the unfold button to view the configured connection variables.
### [Service Binding](#tab/Service-Binding)
Azure Cosmos DB has five different API types that support binding. The following
### [Terraform](#tab/Terraform)
-The following Terraform script shows how to set up an Azure Spring Apps app with an Azure Cosmos DB account.
+The following Terraform script shows how to set up an app deployed to Azure Spring Apps with an Azure Cosmos DB account.
```terraform
provider "azurerm" {
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This tutorial explains how to deploy an Azure Spring Apps instance in your virtual network. This deployment is sometimes called VNet injection.
The following video describes how to secure Spring Boot applications using manag
> [!VIDEO https://www.youtube.com/embed/LbHD0jd8DTQ?list=PLPeZXlCR7ew8LlhnSH63KcM0XhMKxT1k_]

> [!Note]
-> You can select your Azure virtual network only when you create a new Azure Spring Apps service instance. You cannot change to use another virtual network after Azure Spring Apps has been created.
+> You can select your Azure virtual network only when you create a new Azure Spring Apps service instance. You can't change to use another virtual network after Azure Spring Apps has been created.
## Prerequisites
-Register the Azure Spring Apps resource provider **Microsoft.AppPlatform** and **Microsoft.ContainerService** according to the instructions in [Register resource provider on Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) or by running the following Azure CLI command:
+Register the Azure Spring Apps resource provider `Microsoft.AppPlatform` and `Microsoft.ContainerService` according to the instructions in [Register resource provider on Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) or by running the following Azure CLI command:
```azurecli
az provider register --namespace Microsoft.AppPlatform
az provider register --namespace Microsoft.ContainerService
The virtual network to which you deploy your Azure Spring Apps instance must meet the following requirements:
-* **Location**: The virtual network must reside in the same location as the Azure Spring Apps instance.
-* **Subscription**: The virtual network must be in the same subscription as the Azure Spring Apps instance.
-* **Subnets**: The virtual network must include two subnets dedicated to an Azure Spring Apps instance:
+* Location: The virtual network must reside in the same location as the Azure Spring Apps instance.
+* Subscription: The virtual network must be in the same subscription as the Azure Spring Apps instance.
+* Subnets: The virtual network must include two subnets dedicated to an Azure Spring Apps instance:
  * One for the service runtime.
  * One for your Spring applications.
  * There's a one-to-one relationship between these subnets and an Azure Spring Apps instance. Use a new subnet for each service instance you deploy. Each subnet can only include a single service instance.
-* **Address space**: CIDR blocks up to */28* for both the service runtime subnet and the Spring applications subnet.
-* **Route table**: By default the subnets do not need existing route tables associated. You can [bring your own route table](#bring-your-own-route-table).
+* Address space: CIDR blocks up to */28* for both the service runtime subnet and the Spring applications subnet.
+* Route table: By default the subnets don't need existing route tables associated. You can [bring your own route table](#bring-your-own-route-table).
-The following procedures describe setup of the virtual network to contain the instance of Azure Spring Apps.
+The following procedures describe how to set up the virtual network to contain the Azure Spring Apps instance.
## Create a virtual network
-#### [Portal](#tab/azure-portal)
+#### [Azure portal](#tab/azure-portal)
If you already have a virtual network to host an Azure Spring Apps instance, skip steps 1, 2, and 3. You can start from step 4 to prepare subnets for the virtual network.
If you already have a virtual network to host an Azure Spring Apps instance, ski
1. In the **Create virtual network** dialog box, enter or select the following information:
- | Setting | Value |
- |--|--|
- | Subscription | Select your subscription. |
- | Resource group | Select your resource group, or create a new one. |
- | Name | Enter **azure-spring-apps-vnet**. |
- | Location | Select **East US**. |
+ | Setting | Value |
+ |--|--|
+ | Subscription | Select your subscription. |
+ | Resource group | Select your resource group, or create a new one. |
+ | Name | Enter *azure-spring-apps-vnet*. |
+ | Location | Select **East US**. |
1. Select **Next: IP Addresses**.
-1. For the IPv4 address space, enter **10.1.0.0/16**.
+1. For the IPv4 address space, enter *10.1.0.0/16*.
-1. Select **Add subnet**. Then enter **service-runtime-subnet** for **Subnet name** and enter **10.1.0.0/24** for **Subnet address range**. Then select **Add**.
+1. Select **Add subnet**. Then enter *service-runtime-subnet* for **Subnet name** and enter *10.1.0.0/24* for **Subnet address range**. Then select **Add**.
-1. Select **Add subnet** again, and then enter **Subnet name** and **Subnet address range**. For example, enter **apps-subnet** and **10.1.1.0/24**. Then select **Add**.
+1. Select **Add subnet** again, and then enter the subnet name and subnet address range. For example, enter *apps-subnet* and *10.1.1.0/24*. Then select **Add**.
1. Select **Review + create**. Leave the rest as defaults, and select **Create**.
-#### [CLI](#tab/azure-CLI)
+#### [Azure CLI](#tab/azure-CLI)
If you already have a virtual network to host an Azure Spring Apps instance, skip steps 1, 2, 3 and 4. You can start from step 5 to prepare subnets for the virtual network.
-1. Define variables for your subscription, resource group, and Azure Spring Apps instance. Customize the values based on your real environment.
+1. Use the following command to define variables for your subscription, resource group, and Azure Spring Apps instance. Customize the values based on your real environment.
```azurecli
- SUBSCRIPTION='subscription-id'
- RESOURCE_GROUP='my-resource-group'
+ SUBSCRIPTION='<subscription-id>'
+ RESOURCE_GROUP='<resource-group-name>'
LOCATION='eastus'
- AZURE_SPRING_APPS_INSTANCE_NAME='Azure-Spring-Apps-Instance-name'
+ AZURE_SPRING_APPS_INSTANCE_NAME='<Azure-Spring-Apps-Instance-name>'
    VIRTUAL_NETWORK_NAME='azure-spring-apps-vnet'
    ```
-1. Sign in to the Azure CLI and choose your active subscription.
+1. Use the following command to sign in to the Azure CLI and choose your active subscription.
    ```azurecli
    az login
    az account set --subscription ${SUBSCRIPTION}
    ```
-1. Create a resource group for your resources.
+1. Use the following command to create a resource group for your resources:
    ```azurecli
    az group create --name $RESOURCE_GROUP --location $LOCATION
    ```
-1. Create the virtual network.
+1. Use the following command to create the virtual network:
```azurecli
- az network vnet create --resource-group $RESOURCE_GROUP \
+ az network vnet create \
+ --resource-group $RESOURCE_GROUP \
        --name $VIRTUAL_NETWORK_NAME \
        --location $LOCATION \
        --address-prefix 10.1.0.0/16
    ```
-1. Create 2 subnets in this virtual network.
+1. Use the following command to create two subnets in this virtual network:
```azurecli
- az network vnet subnet create --resource-group $RESOURCE_GROUP \
+ az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
        --vnet-name $VIRTUAL_NETWORK_NAME \
        --address-prefixes 10.1.0.0/24 \
        --name service-runtime-subnet
- az network vnet subnet create --resource-group $RESOURCE_GROUP \
+ az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
        --vnet-name $VIRTUAL_NETWORK_NAME \
        --address-prefixes 10.1.1.0/24 \
        --name apps-subnet
If you already have a virtual network to host an Azure Spring Apps instance, ski
## Grant service permission to the virtual network
-Azure Spring Apps requires **Owner** permission to your virtual network, in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance.
+The following procedures describe how to grant Azure Spring Apps the [Owner](../role-based-access-control/built-in-roles.md#owner) permission on your virtual network. This permission allows the service to grant a dedicated and dynamic service principal access to the virtual network for further deployment and maintenance.
-#### [Portal](#tab/azure-portal)
+> [!NOTE]
+> The minimal required permissions are [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) and [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor). If you can't grant the `Owner` permission, assign both of these roles instead.
+
+#### [Azure portal](#tab/azure-portal)
-Select the virtual network **azure-spring-apps-vnet** you previously created.
+Select the virtual network `azure-spring-apps-vnet` you previously created.
1. Select **Access control (IAM)**, and then select **Add** > **Add role assignment**. :::image type="content" source="media/spring-cloud-v-net-injection/access-control.png" alt-text="Screenshot of the Azure portal Access Control (IAM) page showing the Check access tab with the Add role assignment button highlighted." lightbox="media/spring-cloud-v-net-injection/access-control.png":::
-1. Assign the *Owner* role to the Azure Spring Apps Resource Provider. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign the `Owner` role to the Azure Spring Apps Resource Provider. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
> [!NOTE]
- > If you don't find the "Azure Spring Apps Resource Provider", search for "Azure Spring Cloud Resource Provider".
+ > If you don't find Azure Spring Apps Resource Provider, search for *Azure Spring Cloud Resource Provider*.
:::image type="content" source="./media/spring-cloud-v-net-injection/assign-owner-resource-provider.png" alt-text="Screenshot of the Azure portal showing the Access Control (IAM) page, with the Add Role Assignment pane open and search results displaying the Azure Spring Apps Resource Provider." lightbox="./media/spring-cloud-v-net-injection/assign-owner-resource-provider.png"::: - You can also do this step by running the following Azure CLI command: ```azurecli VIRTUAL_NETWORK_RESOURCE_ID=`az network vnet show \
- --name ${NAME_OF_VIRTUAL_NETWORK} \
--resource-group ${RESOURCE_GROUP_OF_VIRTUAL_NETWORK} \
+ --name ${NAME_OF_VIRTUAL_NETWORK} \
--query "id" \ --output tsv`
Select the virtual network **azure-spring-apps-vnet** you previously created.
--assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2 ```
-#### [CLI](#tab/azure-CLI)
+#### [Azure CLI](#tab/azure-CLI)
```azurecli
VIRTUAL_NETWORK_RESOURCE_ID=`az network vnet show \
- --name $VIRTUAL_NETWORK_NAME \
--resource-group $RESOURCE_GROUP \
+ --name $VIRTUAL_NETWORK_NAME \
    --query "id" \
    --output tsv`
az role assignment create \
## Deploy an Azure Spring Apps instance
-#### [Portal](#tab/azure-portal)
+#### [Azure portal](#tab/azure-portal)
To deploy an Azure Spring Apps instance in the virtual network:
To deploy an Azure Spring Apps instance in the virtual network:
1. Select the **Networking** tab, and select the following values:
- | Setting | Value |
- ||-|
- | Deploy in your own virtual network | Select **Yes**. |
- | Virtual network | Select **azure-spring-apps-vnet**. |
- | Service runtime subnet | Select **service-runtime-subnet**. |
- | Spring apps subnet | Select **apps-subnet**. |
+ | Setting | Value |
+ |--|-|
+ | Deploy in your own virtual network | Select **Yes**. |
+ | Virtual network | Select **azure-spring-apps-vnet**. |
+ | Service runtime subnet | Select **service-runtime-subnet**. |
+ | Spring Boot microservice apps subnet | Select **apps-subnet**. |
:::image type="content" source="./media/spring-cloud-v-net-injection/creation-blade-networking-tab.png" alt-text="Screenshot of the Azure portal Azure Spring Apps Create page showing the Networking tab.":::
To deploy an Azure Spring Apps instance in the virtual network:
:::image type="content" source="./media/spring-cloud-v-net-injection/verify-specifications.png" alt-text="Screenshot of the Azure portal Azure Spring Apps Create page showing the Networking section of the Review and create tab.":::
-#### [CLI](#tab/azure-CLI)
+#### [Azure CLI](#tab/azure-CLI)
To deploy an Azure Spring Apps instance in the virtual network:
-Create your Azure Spring Apps instance by specifying the virtual network and subnets you just created,
+Use the following command to create your Azure Spring Apps instance, specifying the virtual network and subnets you created previously:
- ```azurecli
- az spring create \
- --resource-group "$RESOURCE_GROUP" \
- --name "$AZURE_SPRING_APPS_INSTANCE_NAME" \
- --vnet $VIRTUAL_NETWORK_NAME \
- --service-runtime-subnet service-runtime-subnet \
- --app-subnet apps-subnet \
- --enable-java-agent \
- --sku standard \
- --location $LOCATION
- ```
+```azurecli
+az spring create \
+ --resource-group "$RESOURCE_GROUP" \
+ --name "$AZURE_SPRING_APPS_INSTANCE_NAME" \
+ --vnet $VIRTUAL_NETWORK_NAME \
+ --service-runtime-subnet service-runtime-subnet \
+ --app-subnet apps-subnet \
+ --enable-java-agent \
+ --sku standard \
+ --location $LOCATION
+```
-After the deployment, two additional resource groups will be created in your subscription to host the network resources for the Azure Spring Apps instance. Go to **Home**, and then select **Resource groups** from the top menu items to find the following new resource groups.
+After the deployment, two more resource groups are created in your subscription to host the network resources for the Azure Spring Apps instance. Go to **Home**, and then select **Resource groups** from the top menu items to find the following new resource groups.
-The resource group named as **ap-svc-rt_{service instance name}_{service instance region}** contains network resources for the service runtime of the service instance.
+The resource group named as `ap-svc-rt_{service instance name}_{service instance region}` contains network resources for the service runtime of the service instance.
- ![Screenshot that shows the service runtime.](./media/spring-cloud-v-net-injection/service-runtime-resource-group.png)
+![Screenshot that shows the service runtime.](./media/spring-cloud-v-net-injection/service-runtime-resource-group.png)
-The resource group named as **ap-app_{service instance name}_{service instance region}** contains network resources for your Spring applications of the service instance.
+The resource group named as `ap-app_{service instance name}_{service instance region}` contains network resources for your Spring applications of the service instance.
- ![Screenshot that shows apps resource group.](./media/spring-cloud-v-net-injection/apps-resource-group.png)
+![Screenshot that shows apps resource group.](./media/spring-cloud-v-net-injection/apps-resource-group.png)
Those network resources are connected to the virtual network that you created in the preceding steps.
- :::image type="content" source="./media/spring-cloud-v-net-injection/vnet-with-connected-device.png" alt-text="Screenshot of the Azure portal showing the Connected devices page for a virtual network." lightbox="./media/spring-cloud-v-net-injection/vnet-with-connected-device.png":::
- > [!Important]
- > The resource groups are fully managed by the Azure Spring Apps service. Do *not* manually delete or modify any resource inside.
+> [!IMPORTANT]
+> The resource groups are fully managed by the Azure Spring Apps service. Do *not* manually delete or modify any resource inside.
## Using smaller subnet ranges
This table shows the maximum number of app instances Azure Spring Apps supports
| /25 | 128 | 120 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 480<br/> App with four cores: 360</p> |
| /24 | 256 | 248 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 500<br/> App with four cores: 500</p> |
-For subnets, five IP addresses are reserved by Azure, and at least three IP addresses are required by Azure Spring Apps. At least eight IP addresses are required, so /29 and /30 are nonoperational.
+For subnets, Azure reserves five IP addresses, and Azure Spring Apps requires at least three IP addresses. At least eight IP addresses are required, so /29 and /30 are nonoperational.
For a service runtime subnet, the minimum size is /28.
For a service runtime subnet, the minimum size is /28.
Azure Spring Apps supports using existing subnets and route tables.
-If your custom subnets do not contain route tables, Azure Spring Apps creates them for each of the subnets and adds rules to them throughout the instance lifecycle. If your custom subnets contain route tables, Azure Spring Apps acknowledges the existing route tables during instance operations and adds/updates and/or rules accordingly for operations.
+If your custom subnets don't contain route tables, Azure Spring Apps creates them for each of the subnets and adds rules to them throughout the instance lifecycle. If your custom subnets contain route tables, Azure Spring Apps acknowledges the existing route tables during instance operations and adds or updates rules in them accordingly.
-> [!Warning]
+> [!WARNING]
> Custom rules can be added to the custom route tables and updated. However, rules added by Azure Spring Apps must not be updated or removed. Rules such as 0.0.0.0/0 must always exist on a given route table and map to the target of your internet gateway, such as an NVA or other egress gateway. Use caution when updating rules, and modify only your custom rules.

### Route table requirements

The route tables to which your custom vnet is associated must meet the following requirements:
-* You can associate your Azure route tables with your vnet only when you create a new Azure Spring Apps service instance. You cannot change to use another route table after Azure Spring Apps has been created.
+* You can associate your Azure route tables with your vnet only when you create a new Azure Spring Apps service instance. You can't change to use another route table after Azure Spring Apps has been created.
* Both the Spring application subnet and the service runtime subnet must associate with different route tables or neither of them.
-* Permissions must be assigned before instance creation. Be sure to grant **Azure Spring Apps Resource Provider** the *Owner* permission to your route tables.
-* The associated route table resource cannot be updated after cluster creation. While the route table resource cannot be updated, custom rules can be modified on the route table.
-* You cannot reuse a route table with multiple instances due to potential conflicting routing rules.
+* Permissions must be assigned before instance creation. Be sure to grant Azure Spring Apps Resource Provider the `Owner` permission (or `User Access Administrator` and `Network Contributor` permissions) on your route tables.
+* You can't update the associated route table resource after cluster creation. While you can't update the route table resource, you can modify custom rules on the route table.
+* You can't reuse a route table with multiple instances due to potential conflicting routing rules.
## Next steps
spring-apps How To Remote Debugging App Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-remote-debugging-app-instance.md
Previously updated : 12/01/2022 Last updated : 4/18/2023
az spring app get-remote-debugging-config \
+## Assign an Azure role
+
+To remotely debug an app instance, you must be granted the role `Azure Spring Apps Remote Debugging Role`, which includes the `Microsoft.AppPlatform/Spring/apps/deployments/remotedebugging/action` data action permission.
+
+You can assign an Azure role using the Azure portal or Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to assign an Azure role using the Azure portal.
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Open your Azure Spring Apps service instance.
+1. In the navigation pane, select **Access Control (IAM)**.
+1. On the **Access Control (IAM)** page, select **Add**, and then select **Add role assignment**.
+
+ :::image type="content" source="media/how-to-remote-debugging-app-instance/add-role-assignment.png" alt-text="Screenshot of the Azure portal showing the Access Control (IAM) page for an Azure Spring Apps instance with the Add role assignment option highlighted." lightbox="media/how-to-remote-debugging-app-instance/add-role-assignment.png":::
+
+1. On the **Add role assignment** page, in the **Name** list, search for and select *Azure Spring Apps Remote Debugging Role*, and then select **Next**.
+
+ :::image type="content" source="media/how-to-remote-debugging-app-instance/remote-debugging-role.png" alt-text="Screenshot of the Azure portal showing the Add role assignment page for an Azure Spring Apps instance with the Azure Spring Apps Remote Debugging Role name highlighted." lightbox="media/how-to-remote-debugging-app-instance/remote-debugging-role.png":::
+
+1. Select **Members**, and then search for and select your username.
+
+1. Select **Review + assign**.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to assign the Azure Spring Apps Remote Debugging Role to your identity:
+
+ ```azurecli
+ az role assignment create \
+ --role "Azure Spring Apps Remote Debugging Role" \
+ --scope "<service-instance-resource-id>" \
+ --assignee "<your-identity>"
+ ```
+++

## Debug an app instance remotely

You can debug an app instance remotely using the Azure Toolkit for IntelliJ or the Azure Spring Apps for VS Code extension.
Use the following steps to enable or disable remote debugging:
Use the following steps to attach debugger.
-1. Use the following Azure CLI command to obtain the **Azure Spring Apps Remote Debugging Role** role, which includes the `Microsoft.AppPlatform/Spring/apps/deployments/remotedebugging/action` data action permission.
-
- ```azurecli
- az role assignment create \
- --role "Azure Spring Apps Remote Debugging Role" \
- --scope "<service-instance-resource-id>" \
- --assignee "<your-identity>"
- ```
- 1. Select an app instance, and then select **Attach Debugger**. IntelliJ connects to the app instance and starts remote debugging. :::image type="content" source="media/how-to-remote-debugging-app-instance/intellij-remote-debugging-instance.png" alt-text="Screenshot showing the Attach Debugger option." lightbox="media/how-to-remote-debugging-app-instance/intellij-remote-debugging-instance.png":::
Use the following steps to enable or disable remote debugging:
Use the following steps to attach debugger.
-1. Use the following Azure CLI command to obtain the **Azure Spring Apps Remote Debugging Role** role, which includes the `Microsoft.AppPlatform/Spring/apps/deployments/remotedebugging/action` data action permission.
-
- ```azurecli
- az role assignment create \
- --role "Azure Spring Apps Remote Debugging Role" \
- --scope "<service-instance-resource-id>" \
- --assignee "<your-identity>"
- ```
- 1. Select an app instance, and then select **Attach Debugger**. VS Code connects to the app instance and starts remote debugging. :::image type="content" source="media/how-to-remote-debugging-app-instance/visual-studio-code-remote-debugging-instance.png" alt-text="Screenshot showing the Attach Debugger option." lightbox="media/how-to-remote-debugging-app-instance/visual-studio-code-remote-debugging-instance.png":::
spring-apps Tools To Troubleshoot Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tools-to-troubleshoot-memory-issues.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article describes various tools that are useful for troubleshooting Java memory issues. You can use these tools in many scenarios not limited to memory issues, but this article focuses only on the topic of memory.
App memory usage is a percentage equal to the app memory used divided by the app
For JVM memory, there are three metrics: `jvm.memory.used`, `jvm.memory.committed`, and `jvm.memory.max`, which are described in the following list.
-"JVM memory" isn't a clearly defined concept. Here, `jvm.memory` is the sum of [heap memory](concepts-for-java-memory-management.md#heap-memory) and former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory). JVM memory doesn't include direct memory or other memory like the thread stack. These three metrics are gathered by Spring Boot Actuator, and the scope of `jvm.memory` is also determined by Spring Boot Actuator.
+"JVM memory" isn't a clearly defined concept. Here, `jvm.memory` is the sum of [heap memory](concepts-for-java-memory-management.md#heap-memory) and former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory). JVM memory doesn't include direct memory or other memory like the thread stack. Spring Boot Actuator gathers these three metrics and determines the scope of `jvm.memory`.
- `jvm.memory.used` is the amount of used JVM memory, including used heap memory and used former permGen in non-heap memory.
For JVM memory, there are three metrics: `jvm.memory.used`, `jvm.memory.committe
- `jvm.memory.max` is the maximum amount of JVM memory, not to be confused with the real available amount.
- The value of `jvm.memory.max` can sometimes be confusing because it can be much higher than the available app memory. To clarify, `jvm.memory.max` is the sum of all maximum sizes of heap memory and the former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory), regardless of the real available memory. For example, if an app is set with 1 GB memory in the Azure Spring Apps portal, then the default heap memory size will be 0.5 GB. For more information, see the [Default maximum heap size](concepts-for-java-memory-management.md#default-maximum-heap-size) section of [Java memory management](concepts-for-java-memory-management.md).
+ The value of `jvm.memory.max` can sometimes be confusing because it can be much higher than the available app memory. To clarify, `jvm.memory.max` is the sum of all maximum sizes of heap memory and the former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory), regardless of the real available memory. For example, if an app is set with 1 GB of memory in the Azure Spring Apps portal, then the default heap memory size is 0.5 GB. For more information, see the [Default maximum heap size](concepts-for-java-memory-management.md#default-maximum-heap-size) section of [Java memory management](concepts-for-java-memory-management.md).
- If the default *compressed class space* size is 1 GB, then the value of `jvm.memory.max` will be larger than 1.5 GB regardless of whether the app memory size 1 GB. For more information, see [Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide: Other Considerations](https://docs.oracle.com/javase/9/gctuning/other-considerations.htm) in the Oracle documentation.
+  If the default *compressed class space* size is 1 GB, then the value of `jvm.memory.max` is larger than 1.5 GB regardless of whether the app memory size is 1 GB. For more information, see [Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide: Other Considerations](https://docs.oracle.com/javase/9/gctuning/other-considerations.htm) in the Oracle documentation.
#### jvm.gc.memory.allocated/promoted
You can find this feature on the Azure portal, as shown in the following screens
For further debugging, you can manually capture heap dumps and thread dumps, and use Java Flight Recorder (JFR). For more information, see [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps](how-to-capture-dumps.md).
-Heap dump records the state of the Java heap memory. Thread dump records the stacks of all live threads. These tools are available through the Azure CLI and on the app page of the Azure portal, as shown in the following screenshot.
+Heap dumps record the state of the Java heap memory. Thread dumps record the stacks of all live threads. These tools are available through the Azure CLI and on the app page of the Azure portal, as shown in the following screenshot.
:::image type="content" source="media/tools-to-troubleshoot-memory-issues/capture-dump-location.png" alt-text="Screenshot of Azure portal showing app overview page with Troubleshooting button highlighted." lightbox="media/tools-to-troubleshoot-memory-issues/capture-dump-location.png":::
-You can also use third party tools like [Memory Analyzer](https://www.eclipse.org/mat/) to analyze heap dumps.
+For more information, see [Capture heap dump and thread dump manually and use Java Flight Recorder in Azure Spring Apps](how-to-capture-dumps.md). You can also use third party tools like [Memory Analyzer](https://www.eclipse.org/mat/) to analyze heap dumps.
## Modify configurations to fix problems

Some issues you might identify include [container OOM](how-to-fix-app-restart-issues-caused-by-out-of-memory.md#fix-app-restart-issues-due-to-oom), heap memory that's too large, and abnormal garbage collection. If you identify any of these issues, you may need to configure the maximum memory size in the JVM options. For more information, see the [Important JVM options](concepts-for-java-memory-management.md#important-jvm-options) section of [Java memory management](concepts-for-java-memory-management.md).
-This feature is available on Azure CLI and on the Azure portal, as shown in the following screenshot:
+You can modify the JVM options by using the Azure portal or the Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+In the Azure portal, navigate to your app, then select **Configuration** from the **Settings** section of the navigation menu. On the **General Settings** tab, update the **JVM options** field, as shown in the following screenshot:
:::image type="content" source="media/tools-to-troubleshoot-memory-issues/maxdirectmemorysize-location.png" alt-text="Screenshot of Azure portal showing app configuration page with JVM options highlighted." lightbox="media/tools-to-troubleshoot-memory-issues/maxdirectmemorysize-location.png":::
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to update the JVM options for your app. Be sure to replace the placeholders with your actual values. For example, you can replace the *`<jvm-options>`* placeholder with a value such as `-Xms1024m -Xmx1536m`.
+
+```azurecli
+az spring app update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --app <app-name> \
+ --deployment <deployment-name> \
+    --jvm-options '<jvm-options>'
+```
+++

## See also

- [Java memory management](concepts-for-java-memory-management.md)
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
Title: "Tutorial: Managed identity to connect Key Vault"
-description: Set up managed identity to connect Key Vault to an Azure Spring Apps app
+ Title: "Tutorial: Connect Azure Spring Apps to Key Vault using managed identities"
+description: Set up managed identity to connect Key Vault to an app deployed to Azure Spring Apps
Last updated 04/15/2022
-# Tutorial: Use a managed identity to connect Key Vault to an Azure Spring Apps app
+# Tutorial: Connect Azure Spring Apps to Key Vault using managed identities
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-This article shows you how to create a managed identity for an Azure Spring Apps app and use it to access Azure Key Vault.
+This article shows you how to create a managed identity for an app deployed to Azure Spring Apps and use it to access Azure Key Vault.
Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets for your app. You can create a managed identity in Azure Active Directory (Azure AD), and authenticate to any service that supports Azure AD authentication, including Key Vault, without having to display credentials in your code.
This app has access to get secrets from Azure Key Vault. Use the Azure Key Vault
vim src/main/resources/application.properties ```
-1. To use managed identity for Azure Spring Apps apps, add properties with the following content to the *src/main/resources/application.properties* file.
+1. To use managed identity for an app deployed to Azure Spring Apps, add properties with the following content to the *src/main/resources/application.properties* file.
### [System-assigned managed identity](#tab/system-assigned-managed-identity)
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
curl https://myspringcloud-springapp.azuremicroservices.io/get ```
- You're shown the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+ You're shown the message `jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
## Next steps
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
+
+ Title: Copy a blob with asynchronous scheduling using Java
+
+description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the Java client library.
+++ Last updated : 04/18/2023+++
+ms.devlang: java
+++
+# Copy a blob with asynchronous scheduling using Java
+
+This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation.
+
+The client library methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with Java](storage-blob-copy-url-java.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Copy Blob](/rest/api/storageservices/copy-blob#authorization)
+ - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization)
+- Packages installed to your project directory. These examples use **azure-storage-blob**. If you're using `DefaultAzureCredential` for authorization, you also need **azure-identity**. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md#set-up-your-project). To see the necessary `import` directives, see [Code samples](#code-samples).
+
+## About copying a blob with asynchronous scheduling
+
+The `Copy Blob` operation can finish asynchronously and is performed on a best-effort basis, which means that the operation isn't guaranteed to start immediately or complete within a specified time frame. The copy operation is scheduled in the background and performed as the server has available resources. The operation can complete synchronously if the copy occurs within the same storage account.
+
+A `Copy Blob` operation can perform any of the following actions:
+
+- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or it can be a new blob created by the copy operation.
+- Copy a source blob to a destination blob with the same name, which replaces the destination blob. This type of copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.
+- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.
+- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.
+- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+
+The source blob for a copy operation may be one of the following types: block blob, append blob, page blob, blob snapshot, or blob version. The copy operation always copies the entire source blob or file. Copying a range of bytes or set of blocks isn't supported.
+
+If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob is overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation.
+
+To learn more about the `Copy Blob` operation, including information about properties, index tags, metadata, and billing, see [Copy Blob remarks](/rest/api/storageservices/copy-blob#remarks).
+
+## Copy a blob with asynchronous scheduling
+
+This section gives an overview of methods provided by the Azure Storage client library for Java to perform a copy operation with asynchronous scheduling.
+
+The following method wraps the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begins an asynchronous copy of data from the source blob:
+
+- [beginCopy](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)
+
+The `beginCopy` method returns a [SyncPoller](/java/api/com.azure.core.util.polling.syncpoller) to poll the progress of the copy operation. The poll response type is [BlobCopyInfo](/java/api/com.azure.storage.blob.models.blobcopyinfo). The `beginCopy` method is used when you want asynchronous scheduling for a copy operation.
+
+## Copy a blob within the same storage account
+
+If you're copying a blob within the same storage account, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key. The operation can complete synchronously if the copy occurs within the same storage account.
+
+The following example shows a scenario for copying a source blob within the same storage account. This example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails.
++
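+
+The complete sample for this scenario is in the GitHub repository linked under [Code samples](#code-samples). As a rough sketch only, not the published sample, the following code assumes an authorized `BlobServiceClient` and uses hypothetical container and blob names. It leases the source blob, starts the copy with `beginCopy`, waits for completion, and then releases the lease:
+
+```java
+import com.azure.core.util.polling.SyncPoller;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.models.BlobCopyInfo;
+import com.azure.storage.blob.specialized.BlobLeaseClient;
+import com.azure.storage.blob.specialized.BlobLeaseClientBuilder;
+
+import java.time.Duration;
+
+public class CopyWithinAccountSketch {
+    public static void copyWithinAccount(BlobServiceClient blobServiceClient) {
+        // Hypothetical source and destination names, used only for illustration
+        BlobClient sourceBlob = blobServiceClient
+                .getBlobContainerClient("source-container")
+                .getBlobClient("sample-blob.txt");
+        BlobClient destinationBlob = blobServiceClient
+                .getBlobContainerClient("destination-container")
+                .getBlobClient("sample-blob-copy.txt");
+
+        // Acquire an infinite lease so another client can't modify the source during the copy
+        BlobLeaseClient leaseClient = new BlobLeaseClientBuilder()
+                .blobClient(sourceBlob)
+                .buildClient();
+        leaseClient.acquireLease(-1);
+
+        try {
+            // Begin the copy and poll until the service reports a terminal status
+            SyncPoller<BlobCopyInfo, Void> poller =
+                    destinationBlob.beginCopy(sourceBlob.getBlobUrl(), Duration.ofSeconds(2));
+            BlobCopyInfo copyInfo = poller.waitForCompletion().getValue();
+            System.out.println("Copy status: " + copyInfo.getCopyStatus());
+        } finally {
+            // Release the lease so the source blob can be modified again
+            leaseClient.releaseLease();
+        }
+    }
+}
+```
+
+A lease blocks writes and deletes on the source blob but not reads, so the copy can still read the source while the lease is held.
+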
+## Copy a blob from another storage account
+
+If the source is a blob in another storage account, the source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md).
+
+The following example shows a scenario for copying a blob from another storage account. In this example, we create a source blob URL with an appended user delegation SAS token. The example shows how to generate the SAS token using the client library, but you can also provide your own.
++
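+
+The published sample for this scenario is in the GitHub repository linked under [Code samples](#code-samples). The following sketch is an approximation only: it assumes both `BlobServiceClient` objects are authorized with Azure AD, which is required to request a user delegation key, and it uses hypothetical container and blob names:
+
+```java
+import com.azure.core.util.polling.SyncPoller;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.models.BlobCopyInfo;
+import com.azure.storage.blob.models.UserDelegationKey;
+import com.azure.storage.blob.sas.BlobSasPermission;
+import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
+
+import java.time.Duration;
+import java.time.OffsetDateTime;
+
+public class CopyAcrossAccountsSketch {
+    public static void copyAcrossAccounts(BlobServiceClient sourceServiceClient,
+                                          BlobServiceClient destinationServiceClient) {
+        // Hypothetical source blob in the other storage account
+        BlobClient sourceBlob = sourceServiceClient
+                .getBlobContainerClient("source-container")
+                .getBlobClient("sample-blob.txt");
+
+        // Request a user delegation key, then build a read-only SAS for the source blob
+        OffsetDateTime expiry = OffsetDateTime.now().plusHours(1);
+        UserDelegationKey delegationKey =
+                sourceServiceClient.getUserDelegationKey(OffsetDateTime.now(), expiry);
+        BlobServiceSasSignatureValues sasValues = new BlobServiceSasSignatureValues(
+                expiry, new BlobSasPermission().setReadPermission(true));
+        String sasToken = sourceBlob.generateUserDelegationSas(sasValues, delegationKey);
+
+        // Begin the copy using the source blob URL with the SAS token appended
+        BlobClient destinationBlob = destinationServiceClient
+                .getBlobContainerClient("destination-container")
+                .getBlobClient("sample-blob.txt");
+        SyncPoller<BlobCopyInfo, Void> poller = destinationBlob.beginCopy(
+                sourceBlob.getBlobUrl() + "?" + sasToken, Duration.ofSeconds(2));
+        System.out.println("Copy status: " + poller.waitForCompletion().getValue().getCopyStatus());
+    }
+}
+```
+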
+## Copy a blob from a source outside of Azure
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
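+
+The published sample is in the GitHub repository linked under [Code samples](#code-samples). As a minimal sketch, assuming an authorized `BlobContainerClient` for the destination and a hypothetical, publicly readable source URL:
+
+```java
+import com.azure.core.util.polling.SyncPoller;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.models.BlobCopyInfo;
+
+import java.time.Duration;
+
+public class CopyFromExternalSourceSketch {
+    public static void copyFromExternalSource(BlobContainerClient containerClient) {
+        // Hypothetical, publicly accessible source URL, used only for illustration
+        String externalSourceUrl = "https://www.example.com/data/sample-file.txt";
+
+        BlobClient destinationBlob = containerClient.getBlobClient("sample-file.txt");
+
+        // Begin the copy; the service reads the source object over HTTP GET
+        SyncPoller<BlobCopyInfo, Void> poller =
+                destinationBlob.beginCopy(externalSourceUrl, Duration.ofSeconds(2));
+        System.out.println("Copy status: " + poller.waitForCompletion().getValue().getCopyStatus());
+    }
+}
+```
+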
+## Check the status of a copy operation
+
+To check the status of a `Copy Blob` operation, you can call [getCopyStatus](/java/api/com.azure.storage.blob.models.blobcopyinfo#method-details) on the [BlobCopyInfo](/java/api/com.azure.storage.blob.models.blobcopyinfo) object returned by `SyncPoller`.
+
+The following code example shows how to check the status of a copy operation:
++
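+
+As a rough sketch rather than the published sample, the following code assumes a destination `BlobClient` and an accessible source URL. It polls once to read the current status and then blocks until the copy reaches a terminal state:
+
+```java
+import com.azure.core.util.polling.PollResponse;
+import com.azure.core.util.polling.SyncPoller;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.models.BlobCopyInfo;
+
+import java.time.Duration;
+
+public class CheckCopyStatusSketch {
+    public static void checkCopyStatus(BlobClient destinationBlob, String sourceUrl) {
+        SyncPoller<BlobCopyInfo, Void> poller =
+                destinationBlob.beginCopy(sourceUrl, Duration.ofSeconds(2));
+
+        // Poll once and inspect the current status of the copy operation
+        PollResponse<BlobCopyInfo> response = poller.poll();
+        System.out.println("Copy status: " + response.getValue().getCopyStatus());
+
+        // Optionally block until the copy finishes, then print the final status
+        BlobCopyInfo finalInfo = poller.waitForCompletion().getValue();
+        System.out.println("Final copy status: " + finalInfo.getCopyStatus());
+    }
+}
+```
+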
+## Abort a copy operation
+
+Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods.
+
+To abort a pending copy operation, call the following method:
+- [abortCopyFromUrl](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-details)
+
+This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation:
++
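+
+As an approximate sketch, assuming a destination `BlobClient` and an accessible source URL, the following code checks the first poll response and aborts the copy only while it's still pending:
+
+```java
+import com.azure.core.util.polling.SyncPoller;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.models.BlobCopyInfo;
+import com.azure.storage.blob.models.CopyStatusType;
+
+import java.time.Duration;
+
+public class AbortCopySketch {
+    public static void abortCopyIfPending(BlobClient destinationBlob, String sourceUrl) {
+        SyncPoller<BlobCopyInfo, Void> poller =
+                destinationBlob.beginCopy(sourceUrl, Duration.ofSeconds(2));
+        BlobCopyInfo copyInfo = poller.poll().getValue();
+
+        // Abort only if the copy is still pending; a completed copy can't be aborted
+        if (copyInfo.getCopyStatus() == CopyStatusType.PENDING) {
+            destinationBlob.abortCopyFromUrl(copyInfo.getCopyId());
+            System.out.println("Aborted copy with ID: " + copyInfo.getCopyId());
+        }
+    }
+}
+```
+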
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources.
+
+### REST API operations
+
+The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
+- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java)
+
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Previously updated : 11/16/2022 Last updated : 04/18/2023
# Copy a blob with Java
-This article shows how to copy a blob in a storage account using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). It also shows how to abort a pending copy operation.
+This article provides an overview of copy operations using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
-## About copying blobs
+## About copy operations
-A copy operation can perform any of the following actions:
+Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
-- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or can be a new blob created by the copy operation.-- Copy a source blob to a destination blob with the same name, effectively replacing the destination blob. Such a copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.
+| REST API operation | When to use | Client library methods | Guidance |
+| | | | |
+| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [uploadFromUrl](/jav) |
+| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [stageBlockFromUrl](/jav) |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [beginCopy](/jav) |
-The source blob for a copy operation may be one of the following types:
-- Block blob-- Append blob-- Page blob-- Blob snapshot-- Blob version
+For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation:
-If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+- [appendBlockFromUrl](/java/api/com.azure.storage.blob.specialized.appendblobclient#method-details)
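+
+As a brief sketch, not part of the article's own samples, the following code assumes a `BlobClient` that points to an append blob, a source URL that the service can read (for example, one carrying a SAS token), and the number of bytes to append:
+
+```java
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.models.BlobRange;
+import com.azure.storage.blob.specialized.AppendBlobClient;
+
+public class AppendBlockFromUrlSketch {
+    public static void appendFromUrl(BlobClient blobClient, String sourceUrl, long byteCount) {
+        AppendBlobClient appendBlobClient = blobClient.getAppendBlobClient();
+
+        // Create the append blob if it doesn't exist yet
+        if (!appendBlobClient.exists()) {
+            appendBlobClient.create();
+        }
+
+        // Commit a new block at the end of the append blob, read from the source URL
+        appendBlobClient.appendBlockFromUrl(sourceUrl, new BlobRange(0, byteCount));
+    }
+}
+```
+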
-The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. One way to enforce this requirement is to use a blob lease, as shown in the code example.
+For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. The following client library method wraps this operation:
-The entire source blob or file is always copied. Copying a range of bytes or set of blocks isn't supported. When a blob is copied, its system properties are copied to the destination blob with the same values.
+- [uploadPagesFromUrl](/java/api/com.azure.storage.blob.specialized.pageblobclient#method-details)
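+
+Similarly, as a hedged sketch that assumes the source URL is readable by the service and that a single 512-byte page is enough for illustration, writing a page from a URL might look like this:
+
+```java
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.models.PageRange;
+import com.azure.storage.blob.specialized.PageBlobClient;
+
+public class PutPageFromUrlSketch {
+    public static void writePageFromUrl(BlobClient blobClient, String sourceUrl) {
+        PageBlobClient pageBlobClient = blobClient.getPageBlobClient();
+
+        // Create a 512-byte page blob if it doesn't exist; pages are written in 512-byte units
+        if (!pageBlobClient.exists()) {
+            pageBlobClient.create(512);
+        }
+
+        // Write the first 512-byte page, reading the bytes from offset 0 of the source URL
+        PageRange pageRange = new PageRange().setStart(0).setEnd(511);
+        pageBlobClient.uploadPagesFromUrl(pageRange, sourceUrl, 0L);
+    }
+}
+```
+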
-## Copy a blob
+## Client library resources
-To copy a blob, use the following method:
--- [copyFromUrl](/java/api/com.azure.storage.blob.specialized.blobclientbase)-
-This method synchronously copies the data at the source URL to a blob and waits for the copy to complete before returning a response. The source must be a block blob no larger than 256 MB. The source URL must include a SAS token that provides permissions to read the source blob. To learn more about the underlying operation, see [REST API operations](#rest-api-operations).
-
-The following code example gets a `BlobClient` object representing an existing blob and copies it to a new blob in a different container. This example also gets a lease on the source blob before copying so that no other client can modify the blob until the copy is complete and the lease is broken.
--
-Sample output is similar to:
-
-```console
-Source blob lease state: leased
-Copy status: success
-Copy progress: 5/5
-Copy completion time: 2022-11-14T16:53:54Z
-Total bytes copied: 5
-Source blob lease state: broken
-```
-
-You can also copy a blob using the following method:
--- [beginCopy](/java/api/com.azure.storage.blob.specialized.blobclientbase)-
-This method triggers a long-running, asynchronous operation. The source may be another blob or an Azure File resource. If the source is in another storage account, the source must either be public or authorized with a SAS token. To learn more about the underlying operation, see [REST API operations](#rest-api-operations).
--
-You can also specify extended options for the copy operation by passing in a [BlobBeginCopyOptions](/java/api/com.azure.storage.blob.options.blobbegincopyoptions) object to the `beginCopy` method. The following example shows how to create a `BlobBeginCopyOptions` object and configure options to pass with the copy request:
--
-## Abort a copy operation
-
-If you have a pending copy operation and need to cancel it, you can abort the operation. Aborting a copy operation results in a destination blob of zero length and full metadata. To learn more about the underlying operation, see [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob).
-
-The metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods. The final blob will be committed when the copy completes.
-
-To abort a copy operation, use the following method:
--- [BlobClient.abortCopyFromUrl](/java/api/com.azure.storage.blob.specialized.blobclientbase)-
-The following example stops a pending copy and leaves a destination blob with zero length and full metadata:
--
-## Resources
-
-To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources.
-
-### REST API operations
-
-The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for copying blobs use the following REST API operations:
--- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)-- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)-
-### Code samples
--- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java)-
+- [Client library reference documentation](/java/api/overview/azure/storage-blob-readme)
+- [Client library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob)
+- [Package (Maven)](https://mvnrepository.com/artifact/com.azure/azure-storage-blob)
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
+
+ Title: Copy a blob from a source object URL with Java
+
+description: Learn how to copy a blob from a source object URL in Azure Storage by using the Java client library.
+++ Last updated : 04/18/2023+++
+ms.devlang: java
+++
+# Copy a blob from a source object URL with Java
+
+This article shows how to copy a blob from a source object URL using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL.
+
+The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using Java](storage-blob-copy-async-java.md).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a copy operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization)
+ - [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization)
+- Packages installed to your project directory. These examples use **azure-storage-blob**. If you're using `DefaultAzureCredential` for authorization, you also need **azure-identity**. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md#set-up-your-project). To see the necessary `import` directives, see [Code samples](#code-samples). A minimal client setup sketch follows this list.
+
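+The following sketch shows one way to satisfy the first prerequisite by building an authorized `BlobServiceClient` with `DefaultAzureCredential`. The endpoint value is a placeholder; substitute your own storage account name.
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.BlobServiceClientBuilder;
+
+public class ClientSetup {
+    public static BlobServiceClient createClient() {
+        // <storage-account-name> is a placeholder; replace it with your account name
+        return new BlobServiceClientBuilder()
+                .endpoint("https://<storage-account-name>.blob.core.windows.net/")
+                .credential(new DefaultAzureCredentialBuilder().build())
+                .buildClient();
+    }
+}
+```
+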
+## About copying a blob from a source object URL
+
+The `Put Blob From URL` operation creates a new block blob where the contents of the blob are read from a given URL. The operation completes synchronously.
+
+The source can be any object retrievable via a standard HTTP GET request on the given URL. This includes block blobs, append blobs, page blobs, blob snapshots, blob versions, and any other accessible object inside or outside Azure.
+
+When the source object is a block blob, all committed blob content is copied. The content of the destination blob is identical to the content of the source, but the committed block list isn't preserved and uncommitted blocks aren't copied.
+
+The destination is always a block blob, either an existing block blob, or a new block blob created by the operation. The contents of an existing blob are overwritten with the contents of the new blob.
+
+The `Put Blob From URL` operation always copies the entire source blob. Copying a range of bytes or set of blocks isn't supported. To perform partial updates to a block blob's contents by using a source URL, use the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API along with [Put Block List](/rest/api/storageservices/put-block-list).
+
+To learn more about the `Put Blob From URL` operation, including blob size limitations and billing considerations, see [Put Blob From URL remarks](/rest/api/storageservices/put-blob-from-url#remarks).
+
+## Copy a blob from a source object URL
+
+This section gives an overview of methods provided by the Azure Storage client library for Java to perform a copy operation from a source object URL.
+
+The following methods wrap the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and create a new block blob where the contents of the blob are read from a given URL:
+
+- [uploadFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details)
+- [uploadFromUrlWithResponse](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details)
+
+These methods are preferred for scenarios where you want to move data into a storage account and have a URL for the source object.
+
+For large objects, you can work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL:
+
+- [stageBlockFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details)
+
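+The following sketch is a rough illustration of the block-based approach: it stages fixed-size blocks from a source URL and then commits them to create the destination blob. The container client, destination blob name, block size, and source URL handling are assumptions made for this example rather than values from this article.
+
+```java
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.models.BlobRange;
+import com.azure.storage.blob.specialized.BlockBlobClient;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Base64;
+import java.util.List;
+import java.util.UUID;
+
+public class CopyLargeObjectFromUrl {
+    // Minimal sketch: stage 4-MiB blocks from a source URL, then commit them
+    // to create (or overwrite) the destination block blob
+    public static void copyInBlocks(BlobContainerClient containerClient, String sourceBlobSasUrl, long sourceLength) {
+        BlockBlobClient destinationBlob = containerClient
+                .getBlobClient("large-destination-blob")   // hypothetical destination blob name
+                .getBlockBlobClient();
+
+        long blockSize = 4L * 1024 * 1024;                 // illustrative block size
+        List<String> blockIds = new ArrayList<>();
+
+        for (long offset = 0; offset < sourceLength; offset += blockSize) {
+            long count = Math.min(blockSize, sourceLength - offset);
+            String blockId = Base64.getEncoder()
+                    .encodeToString(UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8));
+
+            // Stage one block by reading a byte range from the source URL
+            destinationBlob.stageBlockFromUrl(blockId, sourceBlobSasUrl, new BlobRange(offset, count));
+            blockIds.add(blockId);
+        }
+
+        // Commit the staged blocks; pass true to overwrite an existing destination blob
+        destinationBlob.commitBlockList(blockIds, true);
+    }
+}
+```
+
+In a real workload you would typically parallelize the staging calls and choose the block size based on the source object's size; the sequential loop here is kept simple for clarity.
+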
+## Copy a blob from a source within Azure
+
+If you're copying a blob from a source within Azure, access to the source blob can be authorized via Azure Active Directory (Azure AD), a shared access signature (SAS), or an account key.
+
+The following example shows a scenario for copying a source blob within Azure. The [uploadFromUrl](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can optionally accept a Boolean parameter to indicate whether an existing blob should be overwritten, as shown in the example.
++
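+The following sketch is a minimal example of this scenario. It assumes the source blob URL already carries any authorization it needs (for example, an appended SAS token), and it uses a hypothetical container and blob name for the destination.
+
+```java
+import com.azure.storage.blob.BlobServiceClient;
+import com.azure.storage.blob.specialized.BlockBlobClient;
+
+public class CopyFromUrlWithinAzure {
+    // Minimal sketch: copies a source blob in Azure to a destination block blob,
+    // overwriting the destination if it already exists
+    public static void copyFromSourceInAzure(BlobServiceClient blobServiceClient, String sourceBlobSasUrl) {
+        BlockBlobClient destinationBlob = blobServiceClient
+                .getBlobContainerClient("sample-container")   // hypothetical container name
+                .getBlobClient("sample-blob.txt")             // hypothetical destination blob name
+                .getBlockBlobClient();
+
+        // The second parameter indicates whether an existing destination blob should be overwritten
+        destinationBlob.uploadFromUrl(sourceBlobSasUrl, true);
+    }
+}
+```
+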
+The [uploadFromUrlWithResponse](/java/api/com.azure.storage.blob.specialized.blockblobclient#method-details) method can also accept a [BlobUploadFromUrlOptions](/java/api/com.azure.storage.blob.options.blobuploadfromurloptions) parameter to specify further options for the operation.
+
+## Copy a blob from an external source
+
+You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following example shows a scenario for copying a blob from an accessible source object URL.
++
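+The following sketch illustrates this scenario with `uploadFromUrlWithResponse` and a `BlobUploadFromUrlOptions` object. The external source URL, destination blob name, and timeout value are assumptions made for this example.
+
+```java
+import com.azure.core.util.Context;
+import com.azure.storage.blob.BlobContainerClient;
+import com.azure.storage.blob.models.BlockBlobItem;
+import com.azure.storage.blob.options.BlobUploadFromUrlOptions;
+import com.azure.storage.blob.specialized.BlockBlobClient;
+
+import java.time.Duration;
+
+public class CopyFromExternalSource {
+    // Minimal sketch: copies an object from an accessible external URL to a block blob
+    public static void copyFromExternalSource(BlobContainerClient containerClient) {
+        // Hypothetical, publicly accessible source object
+        String externalSourceUrl = "https://www.example.com/sample-file.txt";
+
+        BlockBlobClient destinationBlob = containerClient
+                .getBlobClient("sample-file.txt")   // hypothetical destination blob name
+                .getBlockBlobClient();
+
+        BlobUploadFromUrlOptions options = new BlobUploadFromUrlOptions(externalSourceUrl);
+
+        // The response carries details about the new destination blob, such as its ETag
+        BlockBlobItem result = destinationBlob
+                .uploadFromUrlWithResponse(options, Duration.ofSeconds(30), Context.NONE)
+                .getValue();
+        System.out.println("Copy completed. Destination ETag: " + result.getETag());
+    }
+}
+```
+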
+## Resources
+
+To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources.
+
+### REST API operations
+
+The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods covered in this article use the following REST API operations:
+
+- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java)
+
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
This article provides an overview of copy operations using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
+## About copy operations
+
Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).

| REST API operation | When to use | Client library methods | Guidance |
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
The following table lists the properties of the [RetryOptions](/dotnet/api/azure
| --- | --- | --- | --- |
| [Delay](/dotnet/api/azure.core.retryoptions.delay) | [TimeSpan](/dotnet/api/system.timespan) | The delay between retry attempts for a fixed approach or the delay on which to base calculations for a backoff-based approach. If the service provides a Retry-After response header, the next retry will be delayed by the duration specified by the header value. | 0.8 second |
| [MaxDelay](/dotnet/api/azure.core.retryoptions.maxdelay) | [TimeSpan](/dotnet/api/system.timespan) | The maximum permissible delay between retry attempts when the service doesn't provide a Retry-After response header. If the service provides a Retry-After response header, the next retry will be delayed by the duration specified by the header value. | 1 minute |
-| [MaxRetries](/dotnet/api/azure.core.retryoptions.maxretries) | int | The maximum number of retry attempts before giving up. | 3 |
+| [MaxRetries](/dotnet/api/azure.core.retryoptions.maxretries) | int | The maximum number of retry attempts before giving up. | 5 |
| [Mode](/dotnet/api/azure.core.retryoptions.mode) | [RetryMode](/dotnet/api/azure.core.retrymode) | The approach to use for calculating retry delays. | Exponential |
| [NetworkTimeout](/dotnet/api/azure.core.retryoptions.networktimeout) | [TimeSpan](/dotnet/api/system.timespan) | The timeout applied to an individual network operation. | 100 seconds |
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 1/18/2022 Last updated : 4/18/2023
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status |
|-|-|--|-|
| V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported |
-| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported |
-| V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported |
-| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported |
-| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported - Agent version will expire on March 20, 2023 |
-| V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported - Agent version will expire on March 20, 2023 |
+| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported - Agent version will expire on October 2, 2023 |
+| V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported - Agent version will expire on October 2, 2023 |
+| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported - Agent version will expire on October 2, 2023 |
+| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported - Agent version will expire on August 1, 2023 |
+| V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported - Agent version will expire on August 1, 2023 |
## Unsupported versions

The following Azure File Sync agent versions have expired and are no longer supported:
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
description: Troubleshoot common issues with monitoring sync health and resolvin
Previously updated : 4/12/2022 Last updated : 04/19/2023
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file can't be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. | | 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory can't be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. | | 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server isn't accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
+| 0x80C80065 | -2134376347 | ECS_E_DATA_TRANSFER_BLOCKED | The file has been identified as producing persistent errors during sync, so it's blocked from sync until the retry interval is reached. The file will be retried later. | No action required. The file will be retried after 24 hours. If the error persists for several days, create a support request. |
+| 0x80C80203 | -2134375933 | ECS_E_SYNC_INVALID_STAGED_FILE | File transfer error. Service will retry later. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80c8027f | -2134375809 | ECS_E_SYNC_CONSTRAINT_CONFLICT_CYCLIC_DEPENDENCY | Sync session timeout error. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80070035 | -2147024843 | ERROR_BAD_NETPATH | The network path was not found. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80071779 | -2147018887 | ERROR_FILE_READ_ONLY | The specified file is read only. | If the error persists for more than a day, create a support request. |
+| 0x6 | N/A | ERROR_INVALID_HANDLE | An internal error occurred. | If the error persists for more than a day, create a support request. |
+| 0x12f | N/A | ERROR_DELETE_PENDING | The file cannot be opened because it is in the process of being deleted. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. |
+| 0x80041007 | -2147217401 | SYNC_E_ITEM_MUST_EXIST | An internal error occurred. | If the error persists for more than a day, create a support request. |
+
### Handling unsupported characters

If the **FileSyncErrorsReport.ps1** PowerShell script shows per-item sync errors due to unsupported characters (error code 0x8007007b or 0x80c80255), you should remove or rename the characters at fault from the respective file names. PowerShell will likely print these characters as question marks or empty rectangles since most of these characters have no standard visual encoding.
The table below contains all of the unicode characters Azure File Sync does not
Sync sessions might fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it's safe to ignore this error unless it persists over a period of several hours.
-<a id="-2147012889"></a>**A connection with the service could not be established.**
+<a id="-2134375780"></a>**The file sync session was cancelled by the volume snapshot sync session that runs once a day to sync files with open handles.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8029c |
+| **HRESULT (decimal)** | -2134375780 |
+| **Error string** | ECS_E_SYNC_CANCELLED_BY_VSS |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for more than a day, create a support request.
+
+<a id="-2147012889"></a>**A connection with the service could not be established.**
| Error | Code |
|-|-|
Sync sessions might fail for various reasons including the server being restarte
| **Error string** | WININET_E_NAME_NOT_RESOLVED |
| **Remediation required** | Yes |
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83081 |
+| **HRESULT (decimal)** | -2134364031 |
+| **Error string** | ECS_E_HTTP_CLIENT_CONNECTION_ERROR |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8309a |
+| **HRESULT (decimal)** | -2134364006 |
+| **Error string** | ECS_E_AZURE_STORAGE_REMOTE_NAME_NOT_RESOLVED |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0xc00000c4 |
+| **HRESULT (decimal)** | -1073741628 |
+| **Error string** | UNEXPECTED_NETWORK_ERROR |
+| **Remediation required** | Yes |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80072ee2 |
+| **HRESULT (decimal)** | -2147012894 |
+| **Error string** | WININET_E_TIMEOUT |
+| **Remediation required** | Yes |
[!INCLUDE [storage-sync-files-bad-connection](../../../includes/storage-sync-files-bad-connection.md)]

> [!Note]
No action is required; the server will try again. If this error persists for sev
No action is required. If this error persists for several hours, create a support request.
+<a id="-2134364019"></a>**The operation was cancelled.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8308d |
+| **HRESULT (decimal)** | -2134364019 |
+| **Error string** | ECS_E_REQUEST_CANCELLED_EXTERNALLY |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8013153b |
+| **HRESULT (decimal)** | -2146233029 |
+| **Error string** | COR_E_OPERATIONCANCELED |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+ <a id="-2134364043"></a>**Sync is blocked until change detection completes post restore** | Error | Code |
No action is required. If this error persists for several hours, create a suppor
No action is required. When a file or file share (cloud endpoint) is restored using Azure Backup, sync is blocked until change detection completes on the Azure file share. Change detection runs immediately once the restore is complete and the duration is based on the number of files in the file share.
+<a id="-2134364072"></a>**Sync is blocked on the folder due to a pause initiated as part of restore on sync folder.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83058 |
+| **HRESULT (decimal)** | -2134364072 |
+| **Error string** | ECS_E_SYNC_BLOCKED_ON_RESTORE |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+ <a id="-2147216747"></a>**Sync failed because the sync database was unloaded.** | Error | Code |
These errors usually resolve themselves and can occur if there are:
If this error persists for longer than a few hours, create a support request and we will contact you to help you resolve this issue.
+<a id="-2134375905"></a>**The sync database has encountered a storage busy IO error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8021f |
+| **HRESULT (decimal)** | -2134375905 |
+| **Error string** | ECS_E_SYNC_METADATA_IO_BUSY |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2134375906"></a>**The sync database has encountered an IO timeout.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8021e |
+| **HRESULT (decimal)** | -2134375906 |
+| **Error string** | ECS_E_SYNC_METADATA_IO_TIMEOUT |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2134375904"></a>**The sync database has encountered an IO error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80220 |
+| **HRESULT (decimal)** | -2134375904 |
+| **Error string** | ECS_E_SYNC_METADATA_IO_ERROR |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+ <a id="-2146762487"></a>**The server failed to establish a secure connection. The cloud service received an unexpected certificate.** | Error | Code |
This error can happen if your organization is using a TLS terminating proxy or i
By setting this registry value, the Azure File Sync agent will accept any locally trusted TLS/SSL certificate when transferring data between the server and the cloud service.
-<a id="-2147012894"></a>**A connection with the service could not be established.**
-
-| Error | Code |
-|-|-|
-| **HRESULT** | 0x80072ee2 |
-| **HRESULT (decimal)** | -2147012894 |
-| **Error string** | WININET_E_TIMEOUT |
-| **Remediation required** | Yes |
--
-> [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service, or make a change to a file or directory within the server endpoint location.
- <a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service** | Error | Code |
This error occurs because the server endpoint deletion failed and the endpoint i
| **Error string** | ECS_E_NOT_ENOUGH_LOCAL_STORAGE |
| **Remediation required** | Yes |
-Sync sessions fail with one of these errors because either the volume has insufficient disk space or disk quota limit is reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
+Sync sessions fail with one of these errors because either the volume has insufficient disk space or disk quota limit is reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Check the available disk space on the server. You can free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
+
+If cloud tiering is enabled for the server endpoint, verify the files are syncing to the Azure file share to avoid running out of disk space.
<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service isn't yet ready to sync with this server endpoint.**
No action is required. This error occurs because sync detected the replica has b
This error occurs because Azure File Sync doesn't support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device.
+<a id="-2134364086"></a>**Sync session timeout error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8304a |
+| **HRESULT (decimal)** | -2134364086 |
+| **Error string** | ECS_E_WORK_FRAMEWORK_TIMEOUT |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83049 |
+| **HRESULT (decimal)** | -2134364087 |
+| **Error string** | ECS_E_WORK_FRAMEWORK_RESULT_NOT_FOUND |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83093 |
+| **HRESULT (decimal)** | -2134364013 |
+| **Error string** | ECS_E_WORK_RESULT_EXPIRED |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2146233083"></a>**Operation time out.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131505 |
+| **HRESULT (decimal)** | -2146233083 |
+| **Error string** | COR_E_TIMEOUT |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2134351859"></a>**Time out error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8600d |
+| **HRESULT (decimal)** | -2134351859 |
+| **Error string** | ECS_E_AZURE_OPERATION_TIME_OUT |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+ <a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.** | Error | Code |
This provisioning error protects you from deleting all content that might be ava
1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md).
+
+<a id="-2134364025"></a>**The subscription owning the storage account is disabled.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83087 |
+| **HRESULT (decimal)** | -2134364025 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_SUBSCRIPTION_DISABLED |
+| **Remediation required** | Yes |
+
+Check that the subscription where your storage account resides is enabled.
+
+<a id="64"></a>**The specified network name is no longer available.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x40 |
+| **HRESULT (decimal)** | 64 |
+| **Error string** | ERROR_NETNAME_DELETED |
+| **Remediation required** | Yes |
+
+Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints).
+
+<a id="-2134364147"></a>**Sync session error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8300d |
+| **HRESULT (decimal)** | -2134364147 |
+| **Error string** | ECS_E_CANNOT_CREATE_ACTIVE_SESSION_PLACEHOLDER_BLOB |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8300e |
+| **HRESULT (decimal)** | -2134364146 |
+| **Error string** | ECS_E_CANNOT_UPDATE_REPLICA_WATERMARK |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8024a |
+| **HRESULT (decimal)** | -2134375862 |
+| **Error string** | ECS_E_SYNC_DEFERRAL_QUEUE_RESTART_SESSION |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83098 |
+| **HRESULT (decimal)** | -2134364008 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_MGMT_OPERATION_THROTTLED |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83082 |
+| **HRESULT (decimal)** | -2134364030 |
+| **Error string** | ECS_E_ASYNC_WORK_ACTION_UNABLE_TO_RETRY |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83006 |
+| **HRESULT (decimal)** | -2134364154 |
+| **Error string** | ECS_E_ECS_BATCH_ERROR |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2134363999"></a>**Sync session error.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c830a1 |
+| **HRESULT (decimal)** | -2134363999 |
+| **Error string** | ECS_TOO_MANY_ETAGVERIFICATION_FAILURES |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8023c |
+| **HRESULT (decimal)** | -2134375876 |
+| **Error string** | ECS_E_SYNC_CLOUD_METADATA_CORRUPT |
+| **Remediation required** | Maybe |
+
+If the error persists for more than a day, create a support request.
+
+<a id="-2147024809"></a>**An internal error occurred.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80070057 |
+| **HRESULT (decimal)** | -2147024809 |
+| **Error string** | ERROR_INVALID_PARAMETER |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80302 |
+| **HRESULT (decimal)** | -2134375678 |
+| **Error string** | ECS_E_UNKNOWN_HTTP_SERVER_ERROR |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8004100c |
+| **HRESULT (decimal)** | -2147217396 |
+| **Error string** | SYNC_E_DESERIALIZATION |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8022d |
+| **HRESULT (decimal)** | -2134375891 |
+| **Error string** | ECS_E_SYNC_METADATA_UNCOMMITTED_TX_LIMIT_REACHED |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83097 |
+| **HRESULT (decimal)** | -2134364009 |
+| **Error string** | ECS_E_QUEUE_CLIENT_EXCEPTION |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80245 |
+| **HRESULT (decimal)** | -2134375867 |
+| **Error string** | ECS_E_EPOCH_CHANGE_DETECTED |
+| **Remediation required** | No |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80072ef3 |
+| **HRESULT (decimal)** | -2147012877 |
+| **Error string** | WININET_E_INCORRECT_HANDLE_STATE |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2146233079"></a>**An internal error occurred.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131509 |
+| **HRESULT (decimal)** | -2146233079 |
+| **Error string** | COR_E_INVALIDOPERATION |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x718 |
+| **HRESULT (decimal)** | N/A |
+| **Error string** | ERROR_NOT_ENOUGH_QUOTA |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131622 |
+| **HRESULT (decimal)** | -2146232798 |
+| **Error string** | COR_E_OBJECTDISPOSED |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80004002 |
+| **HRESULT (decimal)** | -2147467262 |
+| **Error string** | E_NOINTERFACE |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x800700a1 |
+| **HRESULT (decimal)** | -2147024735 |
+| **Error string** | ERROR_BAD_PATHNAME |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8007054f |
+| **HRESULT (decimal)** | -2147023537 |
+| **Error string** | ERROR_INTERNAL_ERROR |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131501 |
+| **HRESULT (decimal)** | -2146233087 |
+| **Error string** | COR_E_SYSTEM |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131620 |
+| **HRESULT (decimal)** | -2146232800 |
+| **Error string** | COR_E_IO |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80070026 |
+| **HRESULT (decimal)** | -2147024858 |
+| **Error string** | COR_E_ENDOFSTREAM |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80070554 |
+| **HRESULT (decimal)** | -2147023532 |
+| **Error string** | ERROR_NO_SUCH_PACKAGE |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80131537 |
+| **HRESULT (decimal)** | -2146233033 |
+| **Error string** | COR_E_FORMAT |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x1f |
+| **HRESULT (decimal)** | 31 |
+| **Error string** | ERROR_GEN_FAILURE |
+| **Remediation required** | Maybe |
+
+If the error persists for more than a day, create a support request.
+
+<a id="-2147467261"></a>**An internal error occurred.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80004003 |
+| **HRESULT (decimal)** | -2147467261 |
+| **Error string** | E_POINTER |
+| **Remediation required** | Yes |
+
+Upgrade to the latest Azure File Sync agent version. If the error persists after upgrading the agent, create a support request.
+
+<a id="-2147023570"></a>**Operation failed due to an authentication failure.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8007052e |
+| **HRESULT (decimal)** | -2147023570 |
+| **Error string** | ERROR_LOGON_FAILURE |
+| **Remediation required** | Maybe |
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x8007051f |
+| **HRESULT (decimal)** | -2147023585 |
+| **Error string** | ERROR_NO_LOGON_SERVERS |
+| **Remediation required** | Maybe |
+
+If the error persists for more than a day, create a support request.
+
+<a id="-2134351869"></a>**The specified Azure account is disabled.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c86003 |
+| **HRESULT (decimal)** | -2134351869 |
+| **Error string** | ECS_E_AZURE_ACCOUNT_IS_DISABLED |
+| **Remediation required** | Yes |
+
+Check that the subscription where your storage account resides is enabled.
+
+<a id="-2134364036"></a>**Storage account key based authentication blocked.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8307c |
+| **HRESULT (decimal)** | -2134364036 |
+| **Error string** | ECS_E_STORAGE_ACCOUNT_KEY_BASED_AUTHENTICATION_BLOCKED |
+| **Remediation required** | Yes |
+
+Enable 'Allow storage account key access' on the storage account. [Learn more](file-sync-deployment-guide.md#prerequisites).
+
+<a id="-2134364020"></a>**The specified seeded share does not exist.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8308c |
+| **HRESULT (decimal)** | -2134364020 |
+| **Error string** | ECS_E_SEEDED_SHARE_NOT_FOUND |
+| **Remediation required** | Yes |
+
+Check if the Azure file share exists in the storage account.
+
+<a id="-2134376385"></a>**Sync needs to update the database on the server.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c8003f |
+| **HRESULT (decimal)** | -2134376385 |
+| **Error string** | ECS_E_SYNC_EPOCH_MISMATCH |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="-2134347516"></a>**The volume is offline. Either it is removed, not ready or not connected.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c87104 |
+| **HRESULT (decimal)** | -2134347516 |
+| **Error string** | ECS_E_VOLUME_OFFLINE |
+| **Remediation required** | Yes |
+
+Verify that the volume where the server endpoint is located is attached to the server.
+
+<a id="-2134364007"></a>**Private endpoint configuration access blocked.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c83099 |
+| **HRESULT (decimal)** | -2134364007 |
+| **Error string** | ECS_E_PRIVATE_ENDPOINT_ACCESS_BLOCKED |
+| **Remediation required** | Yes |
+
+Check the private endpoint configuration and allow access to the file sync service. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints).
+
+<a id="-2134375864"></a>**Sync needs to reconcile the server and Azure file share data before files can be uploaded.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80248 |
+| **HRESULT (decimal)** | -2134375864 |
+| **Error string** | ECS_E_REPLICA_RECONCILIATION_NEEDED |
+| **Remediation required** | No |
+
+No action required. This error should automatically resolve. If the error persists for several days, create a support request.
+
+<a id="0x4c3"></a>**Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x4c3 |
+| **HRESULT (decimal)** | N/A |
+| **Error string** | ERROR_SESSION_CREDENTIAL_CONFLICT |
+| **Remediation required** | Yes |
+
+Disconnect all previous connections to the server or shared resource and try again.
+
+<a id="-2134376368"></a>**The server's SSL certificate is invalid or expired.**
+
+| Error | Code |
+|-|-|
+| **HRESULT** | 0x80c80050 |
+| **HRESULT (decimal)** | -2134376368 |
+| **Error string** | ECS_E_SERVER_INVALID_OR_EXPIRED_CERTIFICATE |
+| **Remediation required** | Yes |
+
+Run the following PowerShell command on the server to reset the certificate: `Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>`
+
### Common troubleshooting steps

<a id="troubleshoot-storage-account"></a>**Verify the storage account exists.**

# [Portal](#tab/azure-portal)
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 03/21/2023 Last updated : 04/19/2023
Azure Files is updated regularly to offer new features and enhancements. This ar
## What's new in 2023
+### 2023 quarter 2 (April, May, June)
+#### AD Kerberos authentication for Linux clients (SMB)
+
+Azure Files customers can now use identity-based Kerberos authentication for Linux clients over SMB using either on-premises Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS). For more information, see [Enable Active Directory authentication over SMB for Linux clients accessing Azure Files](storage-files-identity-auth-linux-kerberos-enable.md).
+
### 2023 quarter 1 (January, February, March)

#### Nconnect for NFS Azure file shares

Nconnect is a client-side Linux mount option that increases performance at scale by allowing you to use more TCP connections between the Linux client and the Azure Premium Files service for NFSv4.1. With nconnect, you can increase performance at scale using fewer client machines to reduce total cost of ownership. For more information, see [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).

## What's new in 2022

### 2022 quarter 4 (October, November, December)
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Previously updated : 12/07/2022 Last updated : 04/19/2023
It's helpful to understand some key terms relating to identity-based authenticat
## Supported authentication scenarios
-Azure Files supports identity-based authentication for Windows file shares over SMB through the following three methods. You can only use one method per storage account.
+Azure Files supports identity-based authentication over SMB through the following methods. You can only use one method per storage account.
- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synced to Azure AD over SMB. Your client must have line of sight to your AD DS. If you already have AD DS set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication.
- **Azure AD DS authentication:** Cloud-based, Azure AD DS-joined Windows VMs can access Azure file shares with Azure AD credentials. In this solution, Azure AD runs a traditional Windows Server AD domain on behalf of the customer, which is a child of the customer's Azure AD tenant.
- **Azure AD Kerberos for hybrid identities:** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. Cloud-only identities aren't currently supported.
+- **AD Kerberos authentication for Linux clients:** Linux clients can use Kerberos authentication over SMB for Azure Files using on-premises AD DS or Azure AD DS.
## Restrictions
For more information about Azure Files and identity-based authentication over SM
- [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
- [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md)
- [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md)
+- [Enable AD Kerberos authentication for Linux clients](storage-files-identity-auth-linux-kerberos-enable.md)
- [FAQ](storage-files-faq.md)
storage Storage Files Identity Auth Linux Kerberos Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-linux-kerberos-enable.md
+
+ Title: Use on-premises Active Directory Domain Services or Azure Active Directory Domain Services to authorize access to Azure Files over SMB for Linux clients using Kerberos authentication
+description: Learn how to enable identity-based Kerberos authentication for Linux clients over Server Message Block (SMB) for Azure Files using on-premises Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS)
+++ Last updated : 04/18/2023++++
+# Enable Active Directory authentication over SMB for Linux clients accessing Azure Files
+
+For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md).
+
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) for Linux virtual machines (VMs) using the Kerberos authentication protocol through the following methods:
+
+- On-premises Windows Active Directory Domain Services (AD DS)
+- Azure Active Directory Domain Services (Azure AD DS)
+
+In order to use the first option (AD DS), you must sync your AD DS to Azure Active Directory (Azure AD) using Azure AD Connect.
+
+> [!Note]
+> This article uses Ubuntu for the example steps. Similar configurations will work for RHEL and SLES machines, allowing you to mount Azure file shares using Active Directory.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes, this article applies to standard SMB Azure file shares LRS/ZRS.](../media/icons/yes-icon.png) | ![No, this article doesn't apply to NFS Azure file shares.](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes, this article applies to standard SMB Azure file shares GRS/GZRS.](../media/icons/yes-icon.png) | ![No this article doesn't apply to NFS Azure file shares.](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes, this article applies to premium SMB Azure file shares.](../media/icons/yes-icon.png) | ![No, this article doesn't apply to premium NFS Azure file shares.](../media/icons/no-icon.png) |
+
+## Linux SMB client limitations
+
+You can't use identity-based authentication to mount Azure file shares on Linux clients at boot time using `fstab` entries because the client can't get the Kerberos ticket early enough to mount at boot time. However, you can use an `fstab` entry and specify the `noauto` option. This won't mount the share at boot time, but it allows a user to conveniently mount the file share after they log in, using a simple mount command without all the parameters. You can also use [`autofs`](storage-how-to-use-files-linux.md?tabs=smb311#dynamically-mount-with-autofs) to mount the share upon access.
+
+## Prerequisites
+
+Before you enable AD authentication over SMB for Azure file shares, make sure you've completed the following prerequisites.
+
+- A Linux VM (Ubuntu 18.04+ or an equivalent RHEL or SLES VM) running on Azure, with at least one network interface on the virtual network that contains Azure AD DS, or an on-premises Linux VM with AD DS synced to Azure AD.
+- Root credentials or a local user account with full sudo rights (in this guide, localadmin).
+- The Linux VM must not have joined any AD domain. If it's already a part of a domain, it needs to first leave that domain before it can join this domain.
+- An Azure AD tenant [fully configured](../../active-directory-domain-services/tutorial-create-instance.md), with domain user already set up.
+
+Installing the samba package isn't strictly necessary, but it gives you some useful tools and brings in other packages automatically, such as `samba-common` and `smbclient`. Run the following commands to install it. If you're asked for any input values during installation, leave them blank.
+
+```bash
+sudo apt update -y
+sudo apt install samba winbind libpam-winbind libnss-winbind krb5-config krb5-user keyutils cifs-utils
+```
+
+The `wbinfo` tool is part of the samba suite. It can be useful for authentication and debugging purposes, such as checking if the domain controller is reachable, checking what domain a machine is joined to, and finding information about users.
+
+Make sure that the Linux host keeps the time synchronized with the domain server. Refer to the documentation for your Linux distribution. For some distros, you can do this [using systemd-timesyncd](https://www.freedesktop.org/software/systemd/man/timesyncd.conf.html). Edit `/etc/systemd/timesyncd.conf` with your favorite text editor to include the following:
+
+```plaintext
+[Time]
+NTP=onpremaadint.com
+FallbackNTP=ntp.ubuntu.com
+```
+
+Then restart the service:
+
+```bash
+sudo systemctl restart systemd-timesyncd.service
+```
+
+## Enable AD Kerberos authentication
+
+Follow these steps to enable AD Kerberos authentication. [This Samba documentation](https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member) might be helpful as a reference.
+
+### Make sure the domain server is reachable and discoverable
+
+1. Make sure that the DNS servers supplied contain the domain server IP addresses.
+
+```bash
+systemd-resolve --status
+```
+
+```output
+Global
+ DNSSEC NTA: 10.in-addr.arpa
+ 16.172.in-addr.arpa
+ 168.192.in-addr.arpa
+ 17.172.in-addr.arpa
+ 18.172.in-addr.arpa
+ 19.172.in-addr.arpa
+ 20.172.in-addr.arpa
+ 21.172.in-addr.arpa
+ 22.172.in-addr.arpa
+ 23.172.in-addr.arpa
+ 24.172.in-addr.arpa
+ 25.172.in-addr.arpa
+ 26.172.in-addr.arpa
+ 27.172.in-addr.arpa
+ 28.172.in-addr.arpa
+ 29.172.in-addr.arpa
+ 30.172.in-addr.arpa
+ 31.172.in-addr.arpa
+ corp
+ d.f.ip6.arpa
+ home
+ internal
+ intranet
+ lan
+ local
+ private
+ test
+
+Link 2 (eth0)
+ Current Scopes: DNS
+ LLMNR setting: yes
+MulticastDNS setting: no
+ DNSSEC setting: no
+ DNSSEC supported: no
+ DNS Servers: 10.0.2.5
+ 10.0.2.4
+ 10.0.0.41
+ DNS Domain: domain1.contoso.com
+```
+
+2. If the command worked, skip the following steps and proceed to the next section.
+
+3. If it didn't work, make sure that the domain server IP addresses respond to ping.
+
+```bash
+ping 10.0.2.5
+```
+
+```output
+PING 10.0.2.5 (10.0.2.5) 56(84) bytes of data.
+64 bytes from 10.0.2.5: icmp_seq=1 ttl=128 time=0.898 ms
+64 bytes from 10.0.2.5: icmp_seq=2 ttl=128 time=0.946 ms
+
+^C
+
+ 10.0.2.5 ping statistics
+2 packets transmitted, 2 received, 0% packet loss, time 1002ms
+rtt min/avg/max/mdev = 0.898/0.922/0.946/0.024 ms
+```
+
+4. If the ping doesn't work, go back to [prerequisites](#prerequisites), and make sure that your VM is on a VNET that has access to the Azure AD tenant.
+
+5. If the IP addresses respond to ping but the DNS servers aren't automatically discovered, you can add the DNS servers manually. Edit `/etc/netplan/50-cloud-init.yaml` with your favorite text editor.
+
+```plaintext
+# This file is generated from information provided by the datasource. Changes
+# to it will not persist across an instance reboot. To disable cloud-init's
+# network configuration capabilities, write a file
+# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
+# network: {config: disabled}
+network:
+ ethernets:
+ eth0:
+ dhcp4: true
+ dhcp4-overrides:
+ route-metric: 100
+ dhcp6: false
+ match:
+ macaddress: 00:22:48:03:6b:c5
+ set-name: eth0
+ nameservers:
+ addresses: [10.0.2.5, 10.0.2.4]
+ version: 2
+```
+
+Then apply the changes:
+
+```bash
+sudo netplan --debug apply
+```
+
+6. Winbind assumes that the DHCP server keeps the domain DNS records up-to-date. However, this isn't true for Azure DHCP. In order to set up the client to make DDNS updates, use [this guide](../../virtual-network/virtual-networks-name-resolution-ddns.md#linux-clients) to create a network script. Here's a sample script that lives at `/etc/dhcp/dhclient-exit-hooks.d/ddns-update`.
+
+```plaintext
+#!/bin/sh
+
+# only execute on the primary nic
+if [ "$interface" != "eth0" ]
+then
+ return
+fi
+
+# When you have a new IP, perform nsupdate
+if [ "$reason" = BOUND ] || [ "$reason" = RENEW ] ||
+ [ "$reason" = REBIND ] || [ "$reason" = REBOOT ]
+then
+ host=`hostname -f`
+ nsupdatecmds=/var/tmp/nsupdatecmds
+ echo "update delete $host a" > $nsupdatecmds
+ echo "update add $host 3600 a $new_ip_address" >> $nsupdatecmds
+ echo "send" >> $nsupdatecmds
+
+ nsupdate $nsupdatecmds
+fi
+```
+
+### Connect to Azure AD DS and make sure the services are discoverable
+
+1. Make sure that you're able to ping the domain server by the domain name.
+
+```bash
+ping contosodomain.contoso.com
+```
+
+```output
+PING contosodomain.contoso.com (10.0.2.4) 56(84) bytes of data.
+64 bytes from pwe-oqarc11l568.internal.cloudapp.net (10.0.2.4): icmp_seq=1 ttl=128 time=1.41 ms
+64 bytes from pwe-oqarc11l568.internal.cloudapp.net (10.0.2.4): icmp_seq=2 ttl=128 time=1.02 ms
+64 bytes from pwe-oqarc11l568.internal.cloudapp.net (10.0.2.4): icmp_seq=3 ttl=128 time=0.740 ms
+64 bytes from pwe-oqarc11l568.internal.cloudapp.net (10.0.2.4): icmp_seq=4 ttl=128 time=0.925 ms
+
+^C
+
+ contosodomain.contoso.com ping statistics
+4 packets transmitted, 4 received, 0% packet loss, time 3016ms
+rtt min/avg/max/mdev = 0.740/1.026/1.419/0.248 ms
+```
+
+2. Make sure you can discover the Azure AD services on the network.
+
+```bash
+nslookup
+> set type=SRV
+> _ldap._tcp.contosodomain.contoso.com.
+```
+
+```output
+Server: 127.0.0.53
+Address: 127.0.0.53#53
+
+Non-authoritative answer:
+
+_ldap._tcp.contosodomain.contoso.com service = 0 100 389 pwe-oqarc11l568.contosodomain.contoso.com.
+_ldap._tcp.contosodomain.contoso.com service = 0 100 389 hxt4yo--jb9q529.contosodomain.contoso.com.
+```
+
+### Set up hostname and fully qualified domain name (FQDN)
+
+1. Using your text editor, update the `/etc/hosts` file with the final FQDN (after joining the domain) and the alias for the host. The IP address doesn't matter for now because this line will mainly be used to translate short hostname to FQDN. For more details, see [Setting up Samba as a Domain Member](https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member).
+
+```plaintext
+# Replace contosovm and the domain below with your own short hostname and FQDN
+127.0.0.1 contosovm.contosodomain.contoso.com contosovm
+```
+
+2. Now, your hostname should resolve. You can ignore the IP address it resolves to for now. The short hostname should resolve to the FQDN.
+
+```bash
+getent hosts contosovm
+```
+
+```output
+127.0.0.1 contosovm.contosodomain.contoso.com contosovm
+```
+
+```bash
+dnsdomainname
+```
+
+```output
+contosodomain.contoso.com
+```
+
+```bash
+hostname -f
+```
+
+```output
+contosovm.contosodomain.contoso.com
+```
+
+> [!Note]
+> Some distros require you to run the `hostnamectl` command in order for hostname -f to be updated:
+>
+> `hostnamectl set-hostname contosovm.contosodomain.contoso.com`
+
+### Set up krb5.conf
+
+1. Configure `/etc/krb5.conf` so that the Kerberos key distribution center (KDC) on the domain server can be contacted for authentication. For more information, see [MIT Kerberos Documentation](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html). Here's a sample `/etc/krb5.conf` file.
+
+```plaintext
+[libdefaults]
+ default_realm = CONTOSODOMAIN.CONTOSO.COM
+ dns_lookup_realm = false
+ dns_lookup_kdc = true
+```
+
+### Set up smb.conf
+
+1. Identify the path to `smb.conf`.
+
+```bash
+sudo smbd -b | grep "CONFIGFILE"
+```
+
+```output
+ CONFIGFILE: /etc/samba/smb.conf
+```
+
+2. Change the SMB configuration to act as a domain member. For more information, see [Setting up samba as a domain member](https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member). Here's a sample `smb.conf` file.
+
+> [!Note]
+> This example is for Azure AD DS, for which we recommend setting `backend = rid` when configuring idmap. On-premises AD DS users might prefer to [choose a different idmap backend](https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member#Choosing_an_idmap_backend).
+
+```plaintext
+[global]
+ workgroup = CONTOSODOMAIN
+ security = ADS
+ realm = CONTOSODOMAIN.CONTOSO.COM
+
+ winbind refresh tickets = Yes
+ vfs objects = acl_xattr
+ map acl inherit = Yes
+ store dos attributes = Yes
+
+ dedicated keytab file = /etc/krb5.keytab
+ kerberos method = secrets and keytab
+
+ winbind use default domain = Yes
+
+ load printers = No
+ printing = bsd
+ printcap name =
+ disable spoolss = Yes
+
+ log file = /var/log/samba/log.%m
+ log level = 1
+
+ idmap config * : backend = tdb
+ idmap config * : range = 3000-7999
+
+ idmap config CONTOSODOMAIN : backend = rid
+ idmap config CONTOSODOMAIN : range = 10000-999999
+
+ template shell = /bin/bash
+ template homedir = /home/%U
+```
+
+3. Force winbind to reload the changed config file.
+
+```bash
+sudo smbcontrol all reload-config
+```
+
+### Join the domain
+
+1. Use the `net ads join` command to join the host to the Azure AD DS domain. If the command throws an error, see [Troubleshooting samba domain members](https://wiki.samba.org/index.php/Troubleshooting_Samba_Domain_Members) to resolve the issue.
+
+```bash
+sudo net ads join -U contososmbadmin
+
+Enter contososmbadmin's password:
+```
+
+```output
+Using short domain name -- CONTOSODOMAIN
+Joined 'CONTOSOVM' to dns domain 'contosodomain.contoso.com'
+```
+
+2. Make sure that the DNS record exists for this host on the domain server.
+
+```bash
+nslookup contosovm.contosodomain.contoso.com 10.0.2.5
+```
+
+```output
+Server: 10.0.2.5
+Address: 10.0.2.5#53
+
+Name: contosovm.contosodomain.contoso.com
+Address: 10.0.0.8
+```
+
+If users will be actively logging into client machines or VMs and accessing the Azure file shares, you need to [set up nsswitch.conf](#set-up-nsswitchconf) and [configure PAM for winbind](#configure-pam-for-winbind). If access will be limited to applications represented by a user account or computer account that need Kerberos authentication to access the file share, then you can skip these steps.
+
+### Set up nsswitch.conf
+
+1. Now that the host is joined to the domain, you need to add the winbind libraries to the lookup path for users and groups. Do this by updating the passwd and group entries in `nsswitch.conf`. Use your text editor to edit `/etc/nsswitch.conf` and add the following entries:
+
+```plaintext
+passwd: compat systemd winbind
+group: compat systemd winbind
+```
+
+2. Enable the winbind service to start automatically on reboot.
+
+```bash
+sudo systemctl enable winbind
+```
+
+```output
+Synchronizing state of winbind.service with SysV service script with /lib/systemd/systemd-sysv-install.
+Executing: /lib/systemd/systemd-sysv-install enable winbind
+```
+
+3. Then, restart the service.
+
+```bash
+sudo systemctl restart winbind
+sudo systemctl status winbind
+```
+
+```output
+winbind.service - Samba Winbind Daemon
+ Loaded: loaded (/lib/systemd/system/winbind.service; enabled; vendor preset: enabled)
+ Active: active (running) since Fri 2020-04-24 09:34:31 UTC; 10s ago
+ Docs: man:winbindd(8)
+ man:samba(7)
+ man:smb.conf(5)
+ Main PID: 27349 (winbindd)
+ Status: "winbindd: ready to serve connections..."
+ Tasks: 2 (limit: 4915)
+ CGroup: /system.slice/winbind.service
+             ├─27349 /usr/sbin/winbindd --foreground --no-process-group
+             └─27351 /usr/sbin/winbindd --foreground --no-process-group
+
+Apr 24 09:34:31 contosovm systemd[1]: Starting Samba Winbind Daemon...
+Apr 24 09:34:31 contosovm winbindd[27349]: [2020/04/24 09:34:31.724211, 0] ../source3/winbindd/winbindd_cache.c:3170(initialize_winbindd_cache)
+Apr 24 09:34:31 contosovm winbindd[27349]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
+Apr 24 09:34:31 contosovm winbindd[27349]: [2020/04/24 09:34:31.725486, 0] ../lib/util/become_daemon.c:124(daemon_ready)
+Apr 24 09:34:31 contosovm systemd[1]: Started Samba Winbind Daemon.
+Apr 24 09:34:31 contosovm winbindd[27349]: STATUS=daemon 'winbindd' finished starting up and ready to serve connections
+```
+
+4. Make sure that the domain users and groups are discovered.
+
+```bash
+getent passwd contososmbadmin
+```
+
+```output
+contososmbadmin:*:12604:10513::/home/contososmbadmin:/bin/bash
+```
+
+```bash
+getent group 'domain users'
+```
+
+```output
+domain users:x:10513:
+```
+
+If the above doesn't work, check if the domain controller is reachable using the wbinfo tool:
+
+```bash
+wbinfo --ping-dc
+```
+
+### Configure PAM for winbind
+
+1. Configure PAM (Pluggable Authentication Module) for winbind so that winbind is placed in the authentication stack and domain users are authenticated through it. The second command ensures that a home directory is created for a domain user upon first login to this system.
+
+```bash
+sudo pam-auth-update --enable winbind
+sudo pam-auth-update --enable mkhomedir
+```
+
+2. Ensure that the PAM authentication config has the following arguments in `/etc/pam.d/common-auth`:
+
+```bash
+grep pam_winbind.so /etc/pam.d/common-auth
+```
+
+```output
+auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
+```
+
+3. You should now be able to log in to this system as the domain user, either through ssh, su, or any other means of authentication.
+
+```bash
+su - contososmbadmin
+Password:
+```
+
+```output
+Creating directory '/home/contososmbadmin'.
+contososmbadmin@contosovm:~$ pwd
+/home/contososmbadmin
+contososmbadmin@contosovm:~$ id
+uid=12604(contososmbadmin) gid=10513(domain users) groups=10513(domain users),10520(group policy creator owners),10572(denied rodc password replication group),11102(dnsadmins),11104(aad dc administrators),11164(group-readwrite),11165(fileshareallaccess),12604(contososmbadmin)
+```
+
+## Verify configuration
+
+To verify that the client machine is joined to the domain, look up the FQDN of the client on the domain controller and find the DNS entry listed for this particular client. In many cases, `<dnsserver>` is the same as the domain name that the client is joined to.
+
+```bash
+nslookup <clientname> <dnsserver>
+```
+
+Next, use the `klist` command to view the tickets in the Kerberos cache. There should be an entry beginning with `krbtgt` that looks similar to:
+
+```plaintext
+krbtgt/CONTOSODOMAIN.CONTOSO.COM@CONTOSODOMAIN.CONTOSO.COM
+```
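+
+For example, run `klist` with no arguments to list the cached tickets:
+
+```bash
+klist
+```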
+
+If you didn't [configure PAM for winbind](#configure-pam-for-winbind), `klist` might not show the ticket entry. In this case, you can manually authenticate the user to get the tickets:
+
+```bash
+wbinfo -K contososmbadmin
+```
+
+You can also run the command as a part of a script:
+
+```bash
+wbinfo -K 'contososmbadmin%SUPERSECRETPASSWORD'
+```
+
+## Mount the file share
+
+After you've enabled AD (or Azure AD) Kerberos authentication and domain-joined your Linux VM, you can mount the file share.
+
+For detailed mounting instructions, see [Mount the Azure file share on-demand with mount](storage-how-to-use-files-linux.md?tabs=smb311#mount-the-azure-file-share-on-demand-with-mount).
+
+Use the following additional mount option with all access control models to enable Kerberos security: `sec=krb5`
+
+> [!Note]
+> This feature only supports a server-enforced access control model using NT ACLs with no mode bits. Linux tools that update NT ACLs are minimal, so update ACLs through Windows. Client-enforced access control (`modefromsid,idsfromsid`) and client-translated access control (`cifsacl`) models aren't currently supported.
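+
+For example, a minimal mount command might look like the following sketch. The storage account name, share name, and mount point are hypothetical placeholders:
+
+```bash
+# Hypothetical example: mount an Azure file share with Kerberos security (sec=krb5).
+sudo mkdir -p /mnt/myshare
+sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
+    -o vers=3.1.1,sec=krb5,actimeo=30
+```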
+
+### Other mount options
+
+#### Single-user versus multi-user mount
+
+In a single-user mount use case, the mount point is accessed by a single user of the AD domain and isn't shared with other users of the domain. Each file access happens in the context of the user whose krb5 credentials were used to mount the file share. Any user on the local system who accesses the mount point will impersonate that user.
+
+In a multi-user mount use case, there's still a single mount point, but multiple AD users can access that same mount point. In scenarios where multiple users on the same client will access the same share, and the system is configured for Kerberos and mounted with `sec=krb5`, consider using the `multiuser` mount option.
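+
+As a sketch using the same hypothetical names, a multi-user mount adds the `multiuser` option; each user then supplies their own Kerberos credentials (for example, with `kinit`) before accessing the share:
+
+```bash
+# Hypothetical example: one mount point shared by multiple AD users,
+# each accessing the share in their own security context.
+sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
+    -o vers=3.1.1,sec=krb5,multiuser
+```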
+
+#### File permissions
+
+File permissions matter, especially if both Linux and Windows clients will access the file share. To convert file permissions to DACLs on files, use a default mount option such as **file_mode=<>,dir_mode=<>**. File permissions specified as **file_mode** and **dir_mode** are only enforced within the client. The server enforces access control based on the file's or directory's security descriptor.
+
+#### File ownership
+
+File ownership matters, especially if both Linux and Windows clients will access the file share. Choose one of the following mount options to convert file ownership UID/GID to owner/group SID on the file DACL (a combined sketch follows this list):
+
+- Use a default such as **uid=<>,gid=<>**
+- Configure UID/GID mapping via RFC2307 and Active Directory (**nss_winbind** or **nss_sssd**)
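+
+As a minimal combined sketch, a static mapping of ownership and permissions can be expressed with the `uid`, `gid`, `file_mode`, and `dir_mode` options. The UNC path, mount point, and numeric IDs below are hypothetical placeholders:
+
+```bash
+# Hypothetical example: mount with static ownership and permission mapping.
+# Replace the UNC path, uid, and gid with values appropriate for your environment.
+sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
+    -o vers=3.1.1,sec=krb5,uid=1000,gid=1000,file_mode=0755,dir_mode=0755
+```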
+
+#### File attribute cache coherency
+
+Performance is important, even if file attributes aren't always accurate. The default value for **actimeo** is 1 (second), which means that the file attributes are fetched again from the server if the cached attributes are more than 1 second old. Increasing the value to 60 means that attributes are cached for at least 1 minute. For most use cases, we recommend using a value of 30 for this option (**actimeo=30**).
+
+For newer kernels, consider setting the attribute caching timeouts more granularly instead of a single **actimeo** value. You can use **acdirmax** for directory entry revalidation caching and **acregmax** for caching file metadata, for example **acdirmax=60,acregmax=5**.
+
+## Next steps
+
+For more information on how to mount an SMB file share on Linux, see:
+
+- [Mount SMB Azure file share on Linux](storage-how-to-use-files-linux.md)
synapse-analytics Apache Spark Azure Create Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md
In this tutorial, you will learn how to create an Apache Spark configuration for
## Create an Apache Spark Configuration
-You can create custom configurations from different entry points, such as from the Apache Spark configurations page, from the Apache Spark configuration page of an existing spark pool.
+You can create custom configurations from different entry points, such as the Apache Spark configuration page of an existing Spark pool.
## Create custom configurations in Apache Spark configurations
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
All maintenance operations should finish within the specified maintenance window
Integration with Service Health notifications and the Resource Health Check Monitor allows customers to stay informed of impending maintenance activity. This automation takes advantage of Azure Monitor. You can decide how you want to be notified of impending maintenance events. Also, you can choose which automated flows will help you manage downtime and minimize operational impact.
-A 24-hour advance notification precedes all maintenance events that aren't for the DW400c and lower tiers.
- > [!NOTE]
-> In the event we are required to deploy a time critical update, advanced notification times may be significantly reduced. This could occur outside an identified maintenance window due to the critical nature of the update.
+> A 24-hour advance notification precedes all maintenance events. If we're required to deploy a time-critical update, the advance notification time may be significantly reduced. This could occur outside an identified maintenance window due to the critical nature of the update.
If you received advance notification that maintenance will take place, but maintenance can't be performed during the time period in the notification, you'll receive a cancellation notification. Maintenance will then resume during the next scheduled maintenance period.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Some general system constraints might affect your workload:
| Property | Limitation | ||| | Maximum number of Azure Synapse workspaces per subscription | [See limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-synapse-limits-for-workspaces). |
-| Maximum number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool). |
+| Maximum number of databases per serverless pool | 100 (not including databases synchronized from Apache Spark pool). |
| Maximum number of databases synchronized from Apache Spark pool | Not limited. | | Maximum number of databases objects per database | The sum of the number of all objects in a database can't exceed 2,147,483,647. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects). | | Maximum identifier length in characters | 128. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects).|
traffic-manager Quickstart Create Traffic Manager Profile Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-terraform.md
+
+ Title: 'Quickstart: Create an Azure Traffic Manager profile using Terraform'
+description: 'In this article, you create an Azure Traffic Manager profile using Terraform'
++++++ Last updated : 4/19/2023++
+# Quickstart: Create an Azure Traffic Manager profile using Terraform
+
+This quickstart describes how to use Terraform to create a Traffic Manager profile with external endpoints using the performance routing method.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Create a random value for the Azure Traffic Manager profile name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string).
+> * Create a random value for the Azure Traffic Manager profile DNS config relative name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string).
+> * Create an Azure Traffic Manager profile using [azurerm_traffic_manager_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/traffic_manager_profile).
+> * Create two Azure Traffic Manager external endpoints using [azurerm_traffic_manager_external_endpoint](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/traffic_manager_external_endpoint).
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-traffic-manager-external-endpoint). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-traffic-manager-external-endpoint/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-traffic-manager-external-endpoint/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-traffic-manager-external-endpoint/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-traffic-manager-external-endpoint/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-traffic-manager-external-endpoint/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Traffic Manager profile name.
+
+ ```console
+ azurerm_traffic_manager_profile_name=$(terraform output -raw azurerm_traffic_manager_profile_name)
+ ```
+
+1. Run [az network traffic-manager profile show](/cli/azure/network/traffic-manager/profile#az-network-traffic-manager-profile-show) to display information about the new Traffic Manager profile.
+
+ ```azurecli
+ az network traffic-manager profile show \
+ --resource-group $resource_group_name \
+ --name $azurerm_traffic_manager_profile_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Traffic Manager profile name.
+
+ ```console
+ $azurerm_traffic_manager_profile_name=$(terraform output -raw azurerm_traffic_manager_profile_name)
+ ```
+
+1. Run [Get-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/get-aztrafficmanagerprofile) to display information about the new Traffic Manager profile.
+
+ ```azurepowershell
+ Get-AzTrafficManagerProfile -ResourceGroupName $resource_group_name `
+ -Name $azurerm_traffic_manager_profile_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Improve website response with Azure Traffic Manager](tutorial-traffic-manager-improve-website-response.md)
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
There are some differences between the features of each of the Remote Desktop cl
The following table compares the features of each Remote Desktop client when connecting to Azure Virtual Desktop.
-| Feature | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
+| Feature | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
|--|--|--|--|--|--|--|--| | Remote Desktop sessions | X | X | X | X | X | X | Desktop of a remote computer presented in a full screen or windowed mode. | | Integrated RemoteApp sessions | X | | | | X | | Individual remote apps integrated into the local desktop as if they are running locally. |
The following tables compare support for device and other redirections across th
The following table shows which input methods are available for each Remote Desktop client:
-| Input | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Input | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--| | Keyboard | X | X | X | X | X | X | | Mouse | X | X | X | X | X | X |
The following table shows which input methods are available for each Remote Desk
The following table shows which ports can be redirected for each Remote Desktop client:
-| Redirection | Windows Desktop and Azure Virtual Desktop Store app for Windows | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Redirection | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--| | Serial port | X | | | | | | | USB | X | | | | | |
When you enable USB port redirection, all USB devices attached to USB ports are
The following table shows which other devices can be redirected with each Remote Desktop client:
-| Redirection | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Redirection | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--| | Cameras | X | | X | X | X | X (preview) | | Clipboard | X | X | Text | Text, images | X | Text |
The following table shows which other devices can be redirected with each Remote
\* Limited to uploading and downloading files through the Remote Desktop Web client.
-\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
+\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
Title: Azure Virtual Desktop personal desktop assignment type - Azure
description: How to configure automatic or direct assignment for an Azure Virtual Desktop personal desktop host pool. Previously updated : 03/03/2023 Last updated : 04/18/2023
To assign a user to the personal desktop host pool, run the following PowerShell
New-AzRoleAssignment -SignInName <userupn> -RoleDefinitionName "Desktop Virtualization User" -ResourceName <appgroupname> -ResourceGroupName $resourceGroupName -ResourceType 'Microsoft.DesktopVirtualization/applicationGroups' ```
+### Directly assign users to session hosts
+
+#### [Azure portal](#tab/azure)
+
+To directly assign a user to a session host in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Enter **Azure Virtual Desktop** into the search bar.
+
+1. Under **Services**, select **Azure Virtual Desktop**.
+
+1. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
+
+1. Select the host pool you want to assign users to.
+
+1. Next, go to the menu on the left side of the window and select **Application groups**.
+
+1. Select the name of the app group you want to assign users to, then select **Assignments** in the menu on the left side of the window.
+
+1. Select **+ Add**, then select the users or user groups you want to assign to this app group.
+
+1. Select **Assign VM** in the Information bar to assign a session host to a user.
+
+1. Select the session host you want to assign to the user, then select **Assign**. You can also select **Assignment** > **Assign user**.
+
+1. Select the user you want to assign the session host to from the list of available users.
+
+1. When you're done, select **Select**.
+
+#### [PowerShell](#tab/powershell)
+ To assign a user to a specific session host, run the following PowerShell cmdlet: ```powershell Update-AzWvdSessionHost -HostPoolName $hostPoolName -Name $sessionHostName -ResourceGroupName $resourceGroupName -AssignedUser <userupn> ```+
-To directly assign a user to a session host in the Azure portal:
+## Unassign a personal desktop
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Enter **Azure Virtual Desktop** into the search bar.
-3. Under **Services**, select **Azure Virtual Desktop**.
-4. At the Azure Virtual Desktop page, go the menu on the left side of the window and select **Host pools**.
-5. Select the host pool you want to assign users to.
-6. Next, go to the menu on the left side of the window and select **Application groups**.
-7. Select the name of the application group you want to assign users to, then select **Assignments** in the menu on the left side of the window.
-8. Select **+ Add**, then select the users or user groups you want to assign to this application group.
-9. Select **Assign VM** in the Information bar to assign a session host to a user.
-10. Select the session host you want to assign to the user, then select **Assign**. You can also select **Assignment** > **Assign user**.
-11. Select the user you want to assign the session host to from the list of available users.
-12. When you're done, select **Select**.
-
-## Unassign a personal desktop using the Azure portal
+#### [Azure portal](#tab/azure)
To unassign a personal desktop in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Enter **Azure Virtual Desktop** into the search bar.
-3. Under **Services**, select **Azure Virtual Desktop**.
-4. At the Azure Virtual Desktop page, go the menu on the left side of the window and select **Host pools**.
-5. Select the host pool you want to modify user assignment for.
-6. Next, go to the menu on the left side of the window and select **Session hosts**.
-7. Select the checkbox next to the session host you want to unassign a user from, select the ellipses at the end of the row, and then select **Unassign user**. You can also select **Assignment** > **Unassign user**.
+
+1. Enter **Azure Virtual Desktop** into the search bar.
+
+1. Under **Services**, select **Azure Virtual Desktop**.
+
+1. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
+
+1. Select the host pool you want to modify user assignment for.
+
+1. Next, go to the menu on the left side of the window and select **Session hosts**.
+
+1. Select the checkbox next to the session host you want to unassign a user from, select the ellipses at the end of the row, and then select **Unassign user**. You can also select **Assignment** > **Unassign user**.
> [!div class="mx-imgBorder"] > ![A screenshot of the unassign user menu option from the ellipses menu for unassigning a personal desktop.](media/unassign.png)-
+
> [!div class="mx-imgBorder"] > ![A screenshot of the unassign user menu option from the assignment menu for unassigning a personal desktop.](media/unassign-2.png)
-8. Select **Unassign** when prompted with the warning.
+1. Select **Unassign** when prompted with the warning.
-## Unassign a personal desktop using PowerShell
+#### [PowerShell](#tab/powershell)
To unassign a personal desktop in PowerShell, run the following command:
$unassignDesktopParams = @{
} Invoke-AzRestMethod @unassignDesktopParams ```+
-## Reassign a personal desktop using the Azure portal
+## Reassign a personal desktop
+
+#### [Azure portal](#tab/azure)
To reassign a personal desktop in the Azure portal:+ 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Enter **Azure Virtual Desktop** into the search bar.
-3. Under **Services**, select **Azure Virtual Desktop**.
-4. At the Azure Virtual Desktop page, go the menu on the left side of the window and select **Host pools**.
-5. Select the host pool you want to modify user assignment for.
-6. Next, go to the menu on the left side of the window and select **Session hosts**.
-7. Select the checkbox next to the session host you want to reassign to a different user, select the ellipses at the end of the row, and then select **Assign to a different user**. You can also select **Assignment** > **Assign to a different user**.
+
+1. Enter **Azure Virtual Desktop** into the search bar.
+
+1. Under **Services**, select **Azure Virtual Desktop**.
+
+1. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
+1. Select the host pool you want to modify user assignment for.
+
+1. Next, go to the menu on the left side of the window and select **Session hosts**.
+
+1. Select the checkbox next to the session host you want to reassign to a different user, select the ellipses at the end of the row, and then select **Assign to a different user**. You can also select **Assignment** > **Assign to a different user**.
> [!div class="mx-imgBorder"] > ![A screenshot of the assign to a different user menu option from the ellipses menu for reassigning a personal desktop.](media/reassign-doc.png)
To reassign a personal desktop in the Azure portal:
> [!div class="mx-imgBorder"] > ![A screenshot of the assign to a different user menu option from the assignment menu for reassigning a personal desktop.](media/reassign.png)
-8. Select the user you want to assign the session host to from the list of available users.
-9. When you're done, select **Select**.
+1. Select the user you want to assign the session host to from the list of available users.
-## Reassign a personal desktop using PowerShell
+1. When you're done, select **Select**.
+
+#### [PowerShell](#tab/powershell)
Before you start, first define the `$reassignUserUpn` variable by running the following command:
$reassignDesktopParams = @{
} Invoke-AzRestMethod @reassignDesktopParams ```+ ## Give session hosts in a personal host pool a friendly name
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
Title: Troubleshoot Azure Virtual Desktop Agent Issues - Azure
description: How to resolve common Azure Virtual Desktop Agent and connectivity issues. Previously updated : 02/18/2023 Last updated : 04/19/2023
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Applicati
To resolve this issue: 1. Check to see if [the stack listener is working](#error-stack-listener-isnt-working-on-a-windows-10-2004-session-host-vm)
-1. If the stack listener isn't working, [manually uninstall and reinstall the stack component](#error-session-host-vms-are-stuck-in-unavailable-or-upgrading-state).
+1. If the stack listener isn't working, [manually uninstall and reinstall the stack component](#error-session-host-vms-are-stuck-in-upgrading-state).
## Error: ENDPOINT_NOT_FOUND
To resolve this issue, make space on your disk by:
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3389 with **MissingMethodException: Method not found** in the description, this means the Azure Virtual Desktop agent didn't update successfully and reverted to an earlier version. This may be because the version number of the .NET framework currently installed on your VMs is lower than 4.7.2. To resolve this issue, you need to upgrade the .NET to version 4.7.2 or later by following the installation instructions in the [.NET Framework documentation](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
-## Error: Session host VMs are stuck in Unavailable or Upgrading state
+## Error: Session host VMs are stuck in Upgrading state
If the status listed for session hosts in your host pool always says **Unavailable** or **Upgrading**, the agent or stack didn't install successfully.
To resolve this issue, first reinstall the side-by-side stack:
1. Restart your session host VM. 1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you will need to [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
+## Error: Session host VMs are stuck in Unavailable state
+
+If your session host VMs are stuck in the Unavailable state, the VM didn't pass one of the health checks listed in [Health check](troubleshoot-statuses-checks.md#health-check). You must resolve the issue that's preventing the VM from passing the health check.
+
+## Error: VMs are stuck in the "Needs Assistance" state
+
+If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it.
+
+There are two reasons why the service is blocking a required URL:
+
+- You have an active firewall that's blocking most outbound traffic and access to the required URLs.
+- Your local hosts file is blocking the required websites.
+
+To resolve a firewall-related issue, add a rule that allows outbound connections to the TCP port 80/443 associated with the blocked URLs.
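+
+As a sketch only, if the blocking firewall is an Azure network security group (NSG), an outbound allow rule might look like the following Azure CLI command. The resource group, NSG name, and rule name are hypothetical, and your environment may require a narrower destination scope:
+
+```bash
+# Hypothetical example: allow outbound TCP 80/443 from the session host subnet.
+az network nsg rule create \
+    --resource-group myResourceGroup \
+    --nsg-name mySessionHostNsg \
+    --name AllowAvdRequiredUrlsOutbound \
+    --priority 200 \
+    --direction Outbound \
+    --access Allow \
+    --protocol Tcp \
+    --destination-port-ranges 80 443
+```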
+
+If your local hosts file is blocking the required URLs, make sure none of the required URLs are in the **Hosts** file on your device. You can find the Hosts file location at the following registry key and value:
+
+**Key:** HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
+
+**Type:** REG_EXPAND_SZ
+
+**Name:** DataBasePath
+
+If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things:
+
+- Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.
+- Make sure your HTTP clients bypass web proxies within the VM when querying IMDS. We recommend that you allow the required IP address in any firewall policies within the VM that deal with outbound network traffic direction.
+
+If your issue is caused by a web proxy, add an exception for 169.254.169.254 in the web proxy's configuration. To add this exception, open an elevated Command Prompt or PowerShell session and run the following command:
+
+```cmd
+netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254"
+```
+ ## Error: Connection not found: RDAgent does not have an active connection to the broker Your session host VMs may be at their connection limit and can't accept new connections.
virtual-desktop Troubleshoot Client Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-azure-virtual-desktop-app.md
This article describes issues you may experience with the [Azure Virtual Desktop
The Azure Virtual Desktop Store app is downloaded and automatically updated through the Microsoft Store. It relies on the dependency app *Azure Virtual Desktop (HostApp)*, which is also automatically downloaded and updated. For more information, see [Azure Virtual Desktop (HostApp)](users/client-features-windows-azure-virtual-desktop-app.md#azure-virtual-desktop-hostapp).
-You can also manually search for new updates for the app. For more information, see [Update the Azure Virtual Desktop app](users/client-features-windows-azure-virtual-desktop-app.md#update-the-azure-virtual-desktop-app).
+You can go to the [Microsoft Store to check for updates](https://aka.ms/AVDStoreClient), or you can manually check for new updates from within the app. For more information, see [Update the Azure Virtual Desktop app](users/client-features-windows-azure-virtual-desktop-app.md#update-the-azure-virtual-desktop-app).
## General
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
Title: Azure Virtual Desktop session host statuses and health checks
description: How to troubleshoot the failed session host statuses and failed health checks Previously updated : 02/28/2023 Last updated : 04/19/2023
The following table lists all statuses for session hosts in the Azure portal eac
| Session host status | Description | How to resolve related issues | |||| |Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it will still be listed as "Available." |N/A|
-|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. You can find which health checks have failed in the session hosts detailed view in the Azure portal. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. You can find which health checks have failed in the session hosts detailed view in the Azure portal. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status will change to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. | |Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.| |Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
-|Upgrading| This status means that the agent upgrade is in progress. This status will be updated to "Available" once the upgrade is done and the session host can accept connections again.|If your session host has been stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-unavailable-or-upgrading-state).|
+|Upgrading| This status means that the agent upgrade is in progress. This status will be updated to "Available" once the upgrade is done and the session host can accept connections again.|If your session host has been stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-upgrading-state).|
## Health check
The health check is a test run by the agent on the session host. The following t
| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. | | Integrated Maintenance Data System (IMDS) reachable | Verifies that the service can't access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. | | Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If this doesn't work, contact Microsoft support. |
-| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this, follow the directions in [Error: VMs are stuck in the Needs Assistance state](#error-vms-are-stuck-in-the-needs-assistance-state). |
+| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). |
| TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections will revert to the websocket TCP and the session host will enter the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). |-
-## Error: VMs are stuck in the "Needs Assistance" state
-
-If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it.
-
-There are two reasons why the service is blocking a required URL:
--- You have an active firewall that's blocking most outbound traffic and access to the required URLs.-- Your local hosts file is blocking the required websites.-
-To resolve a firewall-related issue, add a rule that allows outbound connections to the TCP port 80/443 associated with the blocked URLs.
-
-If your local hosts file is blocking the required URLs, make sure none of the required URLs are in the **Hosts** file on your device. You can find the Hosts file location at the following registry key and value:
-
-**Key:** HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
-
-**Type:** REG_EXPAND_SZ
-
-**Name:** DataBasePath
-
-If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things:
--- Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.-- Make sure your HTTP clients bypass web proxies within the VM when querying IMDS. We recommend that you allow the required IP address in any firewall policies within the VM that deal with outbound network traffic direction.-
-If your issue is caused by a web proxy, add an exception for 169.254.169.254 in the web proxy's configuration. To add this exception, open an elevated Command Prompt or PowerShell session and run the following command:
-
-```cmd
-netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254"
-```
+| App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps will stop working for end-users. |
+| Domain reachable | Verifies the domain the session host is joined to is still reachable. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the domain. |
+| Domain trust check | Verifies the session host isn't experiencing domain trust issues that could prevent authentication when a user connects to a session. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the authentication domain for the session host. |
+| FSLogix health check | Verifies the FSLogix service is up and running to make sure user profiles are loading properly in the session. | If this check fails, it's fatal. Even if the connection succeeds, the profile won't load, forcing the user to use a temporary profile instead. |
+| Metadata service check | Verifies the metadata service is accessible and returns compute properties. | If this check fails, it isn't fatal. |
+| Monitoring agent check | Verifies that the required monitoring agent is running. | If this check fails, it isn't fatal. Connections will still work, but the monitoring agent will either be missing or running an earlier version. |
+| Supported encryption check | Checks the value of the SecurityLayer registry key. | If the key's value is 0, the check will fail and is fatal. If the value is 1, the check will fail but be non-fatal. |
+| Agent provisioning service health check | Verifies the provisioning status of the Azure Virtual Desktop agent installation. | If this check fails, it's fatal. |
+| Stack provisioning service health check | Verifies the provisioning status of the Azure Virtual Desktop Stack installation. | If this check fails, it's fatal. |
+| Monitoring agent provisioning service health check | Verifies the provisioning status of the Monitoring agent installation | If this check fails, it's fatal. |
+| Remote Interactive Logon Right check | Verifies if the Remote Desktop Users user group has permission to sign in through Remote Desktop Services and generates a corresponding health check report. | If this check fails, it's fatal. |
## Next steps
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-cli.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode Flexible \
- --image UbuntuLTS \
+ --image <SKU Linux Image> \
--upgrade-policy-mode automatic \ --instance-count 2 \ --admin-username azureuser \
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
You can deploy a scale set with a Windows Server image or Linux image such as RH
1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and create a new resource group called *myVMSSResourceGroup*. 1. Under **Scale set details**, set *myScaleSet* for your scale set name and select a **Region** that is close to your area. 1. Under **Orchestration**, select *Flexible*.
-1. Under **Instance details**, select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*.
+1. Under **Instance details**, select a marketplace image for **Image**. Select any of the supported distributions.
1. Under **Administrator account** configure the admin username and set up an associated password or SSH public key. - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). - If you select a Linux OS disk image, you can instead choose **SSH public key**. You can use an existing key or create a new one. In this example, we will have Azure generate a new key pair for us. For more information on generating key pairs, see [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). T
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image <SKU image> \
--upgrade-policy-mode automatic \ --admin-username azureuser \ --generate-ssh-keys
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
Previously updated : 11/22/2022- Last updated : 04/18/2023+
> [!NOTE] > The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set in the Azure portal.
+A Virtual Machine Scale Set allows you to deploy and manage a set of autoscaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set in the Azure portal.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
First, create a public Standard Load Balancer by using the portal. The name and
| Assignment| Static | | Availability zone | Select **Zone-redundant**. |
-1. When you are done, select **Review + create**
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create**. ![Create a load balancer](./media/virtual-machine-scale-sets-create-portal/load-balancer.png)
First, create a public Standard Load Balancer by using the portal. The name and
## Create Virtual Machine Scale Set You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which will open the **Create a Virtual Machine Scale Set** page.
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which opens the **Create a Virtual Machine Scale Set** page.
1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list. 1. Type *myScaleSet* as the name for your scale set. 1. In **Region**, select a region that is close to your area.
You can deploy a scale set with a Windows Server image or Linux image such as RH
:::image type="content" source="./media/virtual-machine-scale-sets-create-portal/quick-create-scale-set.png" alt-text="Image shows create options for scale sets in the Azure portal.":::
-1. Select **Next** to move the the other pages.
+1. Select **Next** to move to the other pages.
1. Leave the defaults for the **Disks** page. 1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** option to put the scale set instances behind a load balancer. 1. In **Load balancing options**, select **Azure load balancer**. 1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier. 1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
-1. When you are done, select **Review + create**.
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create** to deploy the scale set.
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). T
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image <SKU image> \
--orchestration-mode Flexible \ --instance-count 2 \ --admin-username azureuser \
To test the autoscale rules, generate some CPU load on the VM instances in the s
To connect to an individual instance, see [Tutorial: Connect to Virtual Machine Scale Set instances](tutorial-connect-to-instances-cli.md)
-Once logged in, install the **stress** utility. Start *10* **stress** workers that generate CPU load. These workers run for *420* seconds, which is enough to cause the autoscale rules to implement the desired action.
+Once logged in, install the **stress** or **stress-ng** utility. Start *10* **stress** workers that generate CPU load. These workers run for *420* seconds, which is enough to cause the autoscale rules to implement the desired action.
-```console
+# [Ubuntu, Debian](#tab/Ubuntu)
+
+```bash
sudo apt-get update sudo apt-get -y install stress sudo stress --cpu 10 --timeout 420 & ```
+# [RHEL, CentOS](#tab/redhat)
+
+```bash
+sudo dnf install stress-ng
+sudo stress-ng --cpu 10 --timeout 420s --metrics-brief &
+```
+# [SLES](#tab/SLES)
+
+```bash
+sudo zypper install stress-ng
+sudo stress-ng --cpu 10 --timeout 420s --metrics-brief &
+```
+ When **stress** shows output similar to *stress: info: [2688] dispatching hogs: 10 cpu, 0 io, 0 vm, 0 hdd*, press the *Enter* key to return to the prompt. To confirm that **stress** generates CPU load, examine the active system load with the **top** utility:
-```console
+```bash
top ``` Exit **top**, then close your connection to the VM instance. **stress** continues to run on the VM instance.
-```console
+```bash
Ctrl-c exit ``` Connect to second VM instance with the port number listed from the previous [az vmss list-instance-connection-info](/cli/azure/vmss):
-```console
+```bash
ssh azureuser@13.92.224.66 -p 50003 ```
-Install and run **stress**, then start ten workers on this second VM instance.
+Install and run **stress** or **stress-ng**, then start ten workers on this second VM instance.
+
+# [Ubuntu, Debian](#tab/Ubuntu)
-```console
+```bash
sudo apt-get -y install stress sudo stress --cpu 10 --timeout 420 & ```
+# [RHEL, CentOS](#tab/redhat)
+
+```bash
+sudo dnf install stress-ng
+sudo stress-ng --cpu 10 --timeout 420s --metrics-brief &
+```
+
+# [SLES](#tab/SLES)
+
+```bash
+sudo zypper install stress-ng
+sudo stress-ng --cpu 10 --timeout 420s --metrics-brief &
+```
++ Again, when **stress** shows output similar to *stress: info: [2713] dispatching hogs: 10 cpu, 0 io, 0 vm, 0 hdd*, press the *Enter* key to return to the prompt. Close your connection to the second VM instance. **stress** continues to run on the VM instance.
-```console
+```bash
exit ```
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode flexible \
- --image UbuntuLTS \
+ --image <SKU image> \
--admin-username azureuser \ --generate-ssh-keys ```
az vm show --resource-group myResourceGroup --name myScaleSet_instance1
"storageProfile": { "dataDisks": [], "imageReference": {
- "exactVersion": "18.04.202210180",
- "offer": "UbuntuServer",
- "publisher": "Canonical",
- "sku": "18.04-LTS",
+ "exactVersion": "XXXXX",
+ "offer": "myOffer",
+ "publisher": "myPublisher",
+ "sku": "mySKU",
"version": "latest" }, "osDisk": {
When you created a scale set at the start of the tutorial, a default VM SKU of *
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image <SKU image> \
--orchestration-mode flexible \ --vm-sku Standard_F1 \ --admin-user azureuser \
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
az group create --name myResourceGroup --location eastus
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image ubuntults \
+ --image <SKU image> \
--admin-username azureuser \ --generate-ssh-keys ```
In this tutorial, you learned how to create and use a custom VM image for your s
Advance to the next tutorial to learn how to deploy applications to your scale set. > [!div class="nextstepaction"]
-> [Deploy applications to your scale sets](tutorial-install-apps-cli.md)
+> [Deploy applications to your scale sets](tutorial-install-apps-cli.md)
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
az group create --name <myResourceGroup> --location <VMSSLocation>
az vmss create \ --resource-group <myResourceGroup> \ --name <myVMScaleSet> \
- --image UbuntuLTS \
+ --image RHEL \
--admin-username <azureuser> \ --generate-ssh-keys \ --load-balancer <existingLoadBalancer> \
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The region of a scale set becomes eligible to get image upgrades either through
The scale set OS upgrade orchestrator checks for the overall scale set health before upgrading every batch. While you're upgrading a batch, there could be other concurrent planned or unplanned maintenance activities that could impact the health of your scale set instances. In such cases if more than 20% of the scale set's instances become unhealthy, then the scale set upgrade stops at the end of current batch. > [!NOTE]
->Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 16.04-LTS to 18.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
+>Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 18.04-LTS to 20.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
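+
+As a hedged sketch, updating only the image SKU on the scale set model might look like the following Azure CLI command; the resource group, scale set name, and SKU value are hypothetical placeholders:
+
+```bash
+# Hypothetical example: change only the image SKU on the scale set model.
+# The publisher and offer of the reference image can't be changed.
+az vmss update \
+    --resource-group myResourceGroup \
+    --name myScaleSet \
+    --set virtualMachineProfile.storageProfile.imageReference.sku=20_04-lts
+```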
## Supported OS images Only certain OS platform images are currently supported. Custom images [are supported](virtual-machine-scale-sets-automatic-upgrade.md#automatic-os-image-upgrade-for-custom-images) if the scale set uses custom images through [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
The following example describes how to set automatic OS upgrades on a scale set
"useRollingUpgradePolicy": true, "disableAutomaticRollback": false }
- }
+ }
+ },
"imagePublisher": { "type": "string", "defaultValue": "MicrosoftWindowsServer"
GET on `/subscriptions/subscription_id/providers/Microsoft.Compute/locations/{lo
### Azure PowerShell ```azurepowershell-interactive
-Get-AzVmImage -Location "westus" -PublisherName "Canonical" -Offer "UbuntuServer" -Skus "16.04-LTS"
+Get-AzVmImage -Location "westus" -PublisherName "Canonical" -Offer "0001-com-ubuntu-server-jammy" -Skus "22_04-lts"
``` ### Azure CLI 2.0 ```azurecli-interactive
-az vm image list --location "westus" --publisher "Canonical" --offer "UbuntuServer" --sku "16.04-LTS" --all
+az vm image list --location "westus" --publisher "Canonical" --offer "0001-com-ubuntu-server-jammy" --sku "22_04-lts" --all
``` ## Manually trigger OS image upgrades
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
The following example creates a single-zone scale set named *myScaleSet* in zone
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image <SKU image> \
--upgrade-policy-mode automatic \ --admin-username azureuser \ --generate-ssh-keys \
To create a zone-redundant scale set, specify multiple zones with the `--zones`
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
- --image UbuntuLTS \
+ --image <SKU Image> \
--upgrade-policy-mode automatic \ --admin-username azureuser \ --generate-ssh-keys \
The following example creates a Linux single-zone scale set named *myScaleSet* i
"createOption": "FromImage" }, "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "16.04-LTS",
+ "publisher": "myPublisher",
+ "offer": "myOffer",
+ "sku": "mySKU",
"version": "latest" } },
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
As a new rollout is triggered every month, a VM will receive at least one patch
| Canonical | UbuntuServer | 16.04-LTS | | Canonical | UbuntuServer | 16.04.0-LTS | | Canonical | UbuntuServer | 18.04-LTS |
-| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
+| Canonical | UbuntuServer | 18.04-LTS-gen2 |
| Canonical | 0001-com-ubuntu-pro-bionic | pro-18_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts-gen2 | | Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-lts |
+| Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-lts-gen2 |
| Canonical | 0001-com-ubuntu-server-jammy | 22_04-lts | | Canonical | 0001-com-ubuntu-server-jammy | 22_04-lts-gen2 | | microsoftcblmariner | cbl-mariner | cbl-mariner-1 |
As a new rollout is triggered every month, a VM will receive at least one patch
| microsoftcblmariner | cbl-mariner | cbl-mariner-2-gen2 | | microsoft-aks | aks | aks-engine-ubuntu-1804-202112 | | Redhat | RHEL | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 7-RAW, 7-LVM |
-| Redhat | RHEL | 8, 8.1, 8.2, 82gen2, 8_3, 8_4, 8_5, 8-LVM |
-| Redhat | RHEL-RAW | 8-raw |
+| Redhat | RHEL | 8, 8.1, 81gen2, 8.2, 82gen2, 8_3, 83-gen2, 8_4, 84-gen2, 8_5, 85-gen2, 8_6, 86-gen2, 8-lvm, 8-lvm-gen2 |
+| Redhat | RHEL-RAW | 8-raw, 8-raw-gen2 |
| OpenLogic | CentOS | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7_8, 7_9, 7_9-gen2 | | OpenLogic | centos-lvm | 7-lvm | | OpenLogic | CentOS | 8.0, 8_1, 8_2, 8_3, 8_4, 8_5 |
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Title: Run Custom Script Extension on Linux VMs in Azure
-description: Automate Linux VM configuration tasks by using the Custom Script Extension Version 2.
+description: Learn how to automate Linux virtual machine configuration tasks in Azure by using the Custom Script Extension Version 2.
Previously updated : 04/25/2018 Last updated : 03/31/2023 # Use the Azure Custom Script Extension Version 2 with Linux virtual machines
-The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). This extension is useful for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime.
+The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). Use this extension for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime.
-The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, PowerShell, or the Azure Virtual Machines REST API.
+The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, Azure PowerShell, or the Azure Virtual Machines REST API.
-This article details how to use the Custom Script Extension from the Azure CLI, and how to run the extension by using an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
+This article describes how to use the Custom Script Extension from the Azure CLI, and how to run the extension by using an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
-There are two Linux Custom Script Extensions:
+There are two versions of the Custom Script Extension:
-* Version 1: Microsoft.OSTCExtensions.CustomScriptForLinux
-* Version 2: Microsoft.Azure.Extensions.CustomScript
+- Version 1: Microsoft.OSTCExtensions.CustomScriptForLinux
+- Version 2: Microsoft.Azure.Extensions.CustomScript
-Please switch new and existing deployments to use Version 2. The new version is a drop-in replacement. The migration is as easy as changing the name and version. You don't need to change your extension configuration.
+Use Version 2 for new and existing deployments. The new version is a drop-in replacement. The migration is as easy as changing the name and version. You don't need to change your extension configuration.
## Prerequisites
-### Linux Distro's Supported
-| **Linux Distro** | **x64** | **ARM64** |
-|:--|:--:|:--:|
-| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
-| Debian | 10+ | 11.x+ |
-| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
-| openSUSE | 12.3+ | Not Supported |
-| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
-| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
-| Rocky Linux | 9.x+ | 9.x+ |
-| SLES | 12.x+, 15.x+ | 15.x SP4+ |
-| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
+### Supported Linux distributions
+
+| Distribution | x64 | ARM64 |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
### Script location
-You can set the extension to use your Azure Blob Storage credentials so that it can access Azure Blob Storage. The script location can be anywhere, as long as the VM can route to that endpoint (for example, GitHub or an internal file server).
+You can set the extension to use your Azure Blob Storage credentials so that it can access Azure Blob Storage. The script location can be anywhere, as long as the VM can route to that endpoint, for example, GitHub or an internal file server.
### Internet connectivity
-If you need to download a script externally, such as from GitHub or Azure Storage, then you need to open additional firewall or network security group (NSG) ports. For example, if your script is located in Azure Storage, you can allow access by using Azure NSG [service tags for Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
+To download a script externally, such as from GitHub or Azure Storage, you need to open other firewall or network security group (NSG) ports. For example, if your script is located in Azure Storage, you can allow access by using Azure NSG [service tags for Storage](../../virtual-network/network-security-groups-overview.md#service-tags).
-If your script is on a local server, you might still need to open additional firewall or NSG ports.
+If your script is on a local server, you might still need to open other firewall or NSG ports.
-### Tips and tricks
+### Tips
-* The highest failure rate for this extension is due to syntax errors in the script. Test that the script runs without errors. Put additional logging into the script to make it easier to find failures.
-* Write scripts that are idempotent, so running them more than once accidentally won't cause system changes.
-* Ensure that the scripts don't require user input when they run.
-* The script is allowed 90 minutes to run. Anything longer will result in a failed provision of the extension.
-* Don't put reboots inside the script. This action will cause problems with other extensions that are being installed, and the extension won't continue after the reboot.
-* If you have a script that will cause a reboot before installing applications and running scripts, schedule the reboot by using a Cron job or by using tools such as DSC, Chef, or Puppet extensions.
-* Don't run a script that will cause a stop or update of the VM agent. It might leave the extension in a transitioning state and lead to a timeout.
-* The extension will run a script only once. If you want to run a script on every startup, you can use a [cloud-init image](../linux/using-cloud-init.md) and use a [Scripts Per Boot](https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-boot) module. Alternatively, you can use the script to create a [systemd](https://systemd.io/) service unit.
-* You can have only one version of an extension applied to the VM. To run a second custom script, you can update the existing extension with a new configuration. Alternatively, you can remove the custom script extension and reapply it with the updated script.
-* If you want to schedule when a script will run, use the extension to create a Cron job.
-* When the script is running, you'll only see a "transitioning" extension status from the Azure portal or CLI. If you want more frequent status updates for a running script, you'll need to create your own solution.
-* The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as `Curl`.
-* Be aware of non-default directory locations that your scripts or commands might rely on. Have logic to handle this situation.
+- The highest failure rate for this extension is due to syntax errors in the script. Verify that the script runs without errors. Put more logging into the script to make it easier to find failures.
+- Write scripts that are idempotent, so that running them more than once accidentally doesn't cause system changes.
+- Ensure that the scripts don't require user input when they run.
+- The script is allowed 90 minutes to run. Anything longer results in a failed provision of the extension.
+- Don't put reboots inside the script. Restarting causes problems with other extensions that are being installed, and the extension doesn't continue after the reboot.
+- If you have a script that causes a reboot before installing applications and running scripts, schedule the reboot by using a cron job or by using tools such as DSC, Chef, or Puppet extensions.
+- Don't run a script that causes a stop or update of the Azure Linux Agent. It might leave the extension in a transitioning state and lead to a time-out.
+- The extension runs a script only once. If you want to run a script on every startup, you can use a [cloud-init image](../linux/using-cloud-init.md) and use a [Scripts Per Boot](https://cloudinit.readthedocs.io/en/latest/topics/modules.html#scripts-per-boot) module. Alternatively, you can use the script to create a [systemd](https://systemd.io/) service unit.
+- You can have only one version of an extension applied to the VM. To run a second custom script, update the existing extension with a new configuration. Alternatively, you can remove the Custom Script Extension and reapply it with the updated script.
+- If you want to schedule when a script runs, use the extension to create a cron job, as shown in the sketch after this list.
+- When the script is running, you only see a *transitioning* extension status from the Azure portal or CLI. If you want more frequent status updates for a running script, create your own solution.
+- The Custom Script Extension doesn't natively support proxy servers. However, you can use a file transfer tool, such as `Curl`, that supports proxy servers within your script.
+- Be aware of nondefault directory locations that your scripts or commands might rely on. Have logic to handle this situation.
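As a hedged sketch of the cron tip above, a setting could append an entry to the crontab; the script path `/usr/local/bin/nightly-task.sh` is a hypothetical example, not something the extension provides:

```json
{
  "commandToExecute": "(crontab -l 2>/dev/null; echo '0 2 * * * /usr/local/bin/nightly-task.sh') | crontab -"
}
```

Because the extension runs as root, the entry lands in root's crontab; adjust the schedule and script path to your needs.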
## Extension schema
-The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this information in configuration files, specify it on the command line, or specify it in an Azure Resource Manager template.
+The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this information in configuration files, specify it on the command line, or specify it in an Azure Resource Manager template.
-You can store sensitive data in a protected configuration, which is encrypted and only decrypted on the target virtual machine. The protected configuration is useful when the execution command includes secrets such as a password. Here's an example:
+You can store sensitive data in a protected configuration, which is encrypted and only decrypted on the target VM. The protected configuration is useful when the execution command includes secrets such as a password. Here's an example:
```json {
You can store sensitive data in a protected configuration, which is encrypted an
} ```
->[!NOTE]
+> [!NOTE]
> The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property. ### Property values
-| Name | Value or example | Data type |
+| Name | Value or example | Data type |
| - | - | - |
-| `apiVersion` | `2019-03-01` | date |
-| `publisher` | `Microsoft.Azure.Extensions` | string |
-| `type` | `CustomScript` | string |
-| `typeHandlerVersion` | `2.1` | int |
-| `fileUris` | `https://github.com/MyProject/Archive/MyPythonScript.py` | array |
-| `commandToExecute` | `python MyPythonScript.py \<my-param1>` | string |
-| `script` | `IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo=` | string |
-| `skipDos2Unix` | `false` | Boolean |
-| `timestamp` | `123456789` | 32-bit integer |
-| `storageAccountName` | `examplestorageacct` | string |
-| `storageAccountKey` | `TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg==` | string |
-| `managedIdentity` | `{ }` or `{ "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }` or `{ "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }` | JSON object |
+| apiVersion | `2019-03-01` | date |
+| publisher | `Microsoft.Azure.Extensions` | string |
+| type | `CustomScript` | string |
+| typeHandlerVersion | `2.1` | int |
+| fileUris | `https://github.com/MyProject/Archive/MyPythonScript.py` | array |
+| commandToExecute | `python MyPythonScript.py \<my-param1>` | string |
+| script | `IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo=` | string |
+| skipDos2Unix | `false` | boolean |
+| timestamp | `123456789` | 32-bit integer |
+| storageAccountName | `examplestorageacct` | string |
+| storageAccountKey | `TmJK/1N3AbAZ3q/+hOXoi/l73zOqsaxXDhqa9Y83/v5UpXQp2DQIBuv2Tifp60cE/OaHsJZmQZ7teQfczQj8hg==` | string |
+| managedIdentity | `{ }` or `{ "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }` or `{ "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }` | JSON object |
### Property value details
-| Property | Optional or required | Details |
+| Property | Optional or required | Details |
| - | - | - |
-| `apiVersion` | Not applicable | You can find the most up-to-date API version by using [Resource Explorer](https://resources.azure.com/) or by using the command `az provider list -o json` in the Azure CLI. |
-| `fileUris` | Optional | URLs for files to be downloaded. |
-| `commandToExecute` | Required if `script` isn't set | The entry point script to run. Use this property instead of `script` if your command contains secrets such as passwords. |
-| `script` | Required if `commandToExecute` isn't set | A Base64-encoded (and optionally gzip'ed) script run by `/bin/sh`. |
-| `skipDos2Unix` | Optional | Set this value to `false` if you want to skip dos2unix conversion of script-based file URLs or scripts. |
-| `timestamp` | Optional | Change this value only to trigger a rerun of the script. Any integer value is acceptable, as long as it's different from the previous value. |
-| `storageAccountName` | Optional | The name of storage account. If you specify storage credentials, all `fileUris` values must be URLs for Azure blobs. |
-| `storageAccountKey` | Optional | The access key of the storage account. |
-| `managedIdentity` | Optional | The [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files:<br><br>`clientId` (optional, string): The client ID of the managed identity.<br><br>`objectId` (optional, string): The object ID of the managed identity.|
-
-You can set the following values in either public or protected settings. The extension will reject any configuration where these values are set in both public and protected settings.
-
-* `commandToExecute`
-* `script`
-* `fileUris`
+| apiVersion | Not applicable | You can find the most up-to-date API version by using [Resource Explorer](https://resources.azure.com/) or by using the command `az provider list -o json` in the Azure CLI. |
+| fileUris | Optional | URLs for files to be downloaded. |
+| commandToExecute | Required if `script` isn't set | The entry point script to run. Use this property instead of `script` if your command contains secrets such as passwords. |
+| script | Required if `commandToExecute` isn't set | A Base64-encoded and optionally gzip'ed script run by `/bin/sh`. |
+| skipDos2Unix | Optional | Set this value to `false` if you want to skip dos2unix conversion of script-based file URLs or scripts. |
+| timestamp | Optional | Change this value only to trigger a rerun of the script. Any integer value is acceptable, as long as it's different from the previous value. |
+| storageAccountName | Optional | The name of storage account. If you specify storage credentials, all `fileUris` values must be URLs for Azure blobs. |
+| storageAccountKey | Optional | The access key of the storage account. |
+| managedIdentity | Optional | The [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files. Values are `clientId` (optional, string), which is the client ID of the managed identity, and `objectId` (optional, string), which is the object ID of the managed identity.|
+
+*Public settings* are sent in clear text to the VM where the script runs. *Protected settings* are encrypted through a key known only to Azure and the VM. The settings are saved to the VM as they were sent. That is, if the settings were encrypted, they're saved encrypted on the VM. The certificate that's used to decrypt the encrypted values is stored on the VM. The certificate is also used to decrypt settings, if necessary, at runtime.
Using public settings might be useful for debugging, but we strongly recommend that you use protected settings.
-Public settings are sent in clear text to the VM where the script will be run. Protected settings are encrypted through a key known only to Azure and the VM. The settings are saved to the VM as they were sent. That is, if the settings were encrypted, they're saved encrypted on the VM. The certificate that's used to decrypt the encrypted values is stored on the VM. The certificate is also used to decrypt settings (if necessary) at runtime.
+You can set the following values in either public or protected settings. The extension rejects any configuration where these values are set in both public and protected settings.
-#### Property: skipDos2Unix
+- `commandToExecute`
+- `script`
+- `fileUris`
-The default value is `false`, which means dos2unix conversion *is* executed.
+#### Property: skipDos2Unix
-The previous version of the Custom Script Extension, Microsoft.OSTCExtensions.CustomScriptForLinux, would automatically convert DOS files to UNIX files by translating `\r\n` to `\n`. This translation still exists and is on by default. This conversion is applied to all files downloaded from `fileUris` or the script setting based on either of the following criteria:
+The previous version of the Custom Script Extension, `Microsoft.OSTCExtensions.CustomScriptForLinux`, automatically converts DOS files to UNIX files by translating `\r\n` to `\n`. This translation still exists and is on by default. This conversion is applied to all files downloaded from `fileUris` or the script setting based on either of the following criteria:
-* The extension is .sh, .txt, .py, or .pl. The script setting will always match this criterion because it's assumed to be a script run with `/bin/sh`. The script setting is saved as *script.sh* on the VM.
-* The file starts with `#!`.
+- The extension is *.sh*, *.txt*, *.py*, or *.pl*. The script setting always matches this criterion because it's assumed to be a script run with */bin/sh*. The script setting is saved as *script.sh* on the VM.
+- The file starts with `#!`.
-You can skip the dos2unix conversion by setting `skipDos2Unix` to `true`:
+The default value is `false`, which means dos2unix conversion *is* executed. You can skip the dos2unix conversion by setting `skipDos2Unix` to `true`:
```json {
You can skip the dos2unix conversion by setting `skipDos2Unix` to `true`:
#### Property: script
-The Custom Script Extension supports execution of a user-defined script. The script settings combine `commandToExecute` and `fileUris` into a single setting. Instead of having to set up a file for download from Azure Storage or a GitHub gist, you can simply encode the script as a
-setting. You can use the script to replace `commandToExecute` and `fileUris`.
+The Custom Script Extension supports execution of a user-defined script. The script settings combine `commandToExecute` and `fileUris` into a single setting. Instead of having to set up a file for download from Azure Storage or a GitHub gist, you can encode the script as a setting. You can use the script to replace `commandToExecute` and `fileUris`.
Here are some requirements: -- The script *must* be Base64 encoded. -- The script can *optionally* be gzip'ed. -- You can use the script setting in public or protected settings. -- The maximum size of the script parameter's data is 256 KB. If the script exceeds this size, it won't be run.
+- The script must be Base64 encoded.
+- The script can optionally be gzip'ed.
+- You can use the script setting in public or protected settings.
+- The maximum size of the script parameter's data is 256 KB. If the script exceeds this size, it doesn't run.
For example, the following script is saved to the file */script.sh/*:
cat script | gzip -9 | base64 -w 0
The Custom Script Extension uses the following algorithm to run a script:
- 1. Assert that the length of the script's value does not exceed 256 KB.
- 1. Base64 decode the script's value.
- 1. _Try_ to gunzip the Base64-decoded value.
- 1. Write the decoded (and optionally decompressed) value to disk (*/var/lib/waagent/custom-script/#/script.sh*).
- 1. Run the script by using `_/bin/sh -c /var/lib/waagent/custom-script/#/script.sh`.
+1. Assert that the length of the script's value doesn't exceed 256 KB.
+1. Base64 decode the script's value.
+1. _Try_ to gunzip the Base64-decoded value.
+1. Write the decoded and optionally decompressed value to disk: */var/lib/waagent/custom-script/#/script.sh*.
+1. Run the script by using `/bin/sh -c /var/lib/waagent/custom-script/#/script.sh`.
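As a rough illustration of these steps (not the extension's actual code), the following sketch decodes a `script` value the same way. The sample value is the Base64 string shown in the property table earlier, and the file names are placeholders:

```bash
#!/bin/bash
# Placeholder for the Base64-encoded (and optionally gzip'ed) "script" value
ENCODED="IyEvYmluL3NoCmVjaG8gIlVwZGF0aW5nIHBhY2thZ2VzIC4uLiIKYXB0IHVwZGF0ZQphcHQgdXBncmFkZSAteQo="

# Base64 decode the value
echo "$ENCODED" | base64 -d > decoded

# Try to gunzip the decoded value; if it isn't gzip'ed, use the decoded bytes as the script
if gunzip -c < decoded > script.sh 2>/dev/null; then
  echo "Value was gzip'ed and has been decompressed."
else
  cp decoded script.sh
fi

# Run the result with /bin/sh, as the extension does
chmod +x script.sh
/bin/sh -c ./script.sh
```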
#### Property: managedIdentity > [!NOTE] > This property *must* be specified in protected settings only.
-The Custom Script Extension (version 2.1 and later) supports [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files from URLs provided in the `fileUris` setting. It allows the Custom Script Extension to access Azure Storage private blobs or containers without the user having to pass secrets like shared access signature (SAS) tokens or storage account keys.
-
-To use this feature, the user must add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or virtual machine scale set where the Custom Script Extension is expected to run. The user must then [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
-
-To use the system-assigned identity on the target VM or virtual machine scale set, set `managedidentity` to an empty JSON object.
-
-> Example:
->
-> ```json
-> {
-> "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
-> "commandToExecute": "sh script1.sh",
-> "managedIdentity" : {}
-> }
-> ```
-
-To use the user-assigned identity on the target VM or virtual machine scale set, configure `managedidentity` with the client ID or the object ID of the managed identity.
-
-> Examples:
->
-> ```json
-> {
-> "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
-> "commandToExecute": "sh script1.sh",
-> "managedIdentity" : { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }
-> }
-> ```
-
-> ```json
-> {
-> "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
-> "commandToExecute": "sh script1.sh",
-> "managedIdentity" : { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }
-> }
-> ```
+The Custom Script Extension, version 2.1 and later, supports [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for downloading files from URLs provided in the `fileUris` setting. This approach allows the Custom Script Extension to access Azure Storage private blobs or containers without the user having to pass secrets like shared access signature (SAS) tokens or storage account keys.
+
+To use this feature, add a [system-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) or [user-assigned](../../app-service/overview-managed-identity.md?tabs=dotnet#add-a-user-assigned-identity) identity to the VM or Virtual Machine Scale Set where the Custom Script Extension is expected to run. Then [grant the managed identity access to the Azure Storage container or blob](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access).
+
+To use the system-assigned identity on the target VM or Virtual Machine Scale Set, set `managedidentity` to an empty JSON object.
+
+```json
+{
+ "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
+ "commandToExecute": "sh script1.sh",
+ "managedIdentity" : {}
+}
+```
+
+To use the user-assigned identity on the target VM or Virtual Machine Scale Set, configure `managedidentity` with the client ID or the object ID of the managed identity.
+
+```json
+{
+ "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
+ "commandToExecute": "sh script1.sh",
+ "managedIdentity" : { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }
+}
+```
+
+```json
+{
+ "fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.sh"],
+ "commandToExecute": "sh script1.sh",
+ "managedIdentity" : { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }
+}
+```
> [!NOTE] > The `managedIdentity` property *must not* be used in conjunction with the `storageAccountName` or `storageAccountKey` property.
You can deploy Azure VM extensions by using Azure Resource Manager templates. Th
} ```
->[!NOTE]
->These property names are case-sensitive. To avoid deployment problems, use the names as shown here.
+> [!NOTE]
+> These property names are case-sensitive. To avoid deployment problems, use the names as shown here.
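For reference, a minimal sketch of the extension resource in a Resource Manager template might look like the following. The VM name is a placeholder, and the file URI and command are borrowed from the CLI examples later in this article; treat it as an outline rather than a definitive template:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2019-03-01",
  "name": "myVM/CustomScript",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.1",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-linux/scripts/config-music.sh"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "./config-music.sh"
    }
  }
}
```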
## Azure CLI
-When you're using the Azure CLI to run the Custom Script Extension, create a configuration file or files. At a minimum, you must have `commandToExecute`.
+When you use the Azure CLI to run the Custom Script Extension, create a configuration file or files. At a minimum, the configuration file must contain `commandToExecute`. The `az vm extension set` command refers to the configuration file:
```azurecli az vm extension set \
az vm extension set \
--protected-settings ./script-config.json ```
-Optionally, you can specify the settings in the command as a JSON-formatted string. This allows the configuration to be specified during execution and without a separate configuration file.
+Alternatively, you can specify the settings in the command as a JSON-formatted string. This approach allows the configuration to be specified during execution and without a separate configuration file.
```azurecli az vm extension set \
az vm extension set \
### Example: Public configuration with script file
+This example uses the following script file named *script-config.json*:
+ ```json { "fileUris": ["https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-linux/scripts/config-music.sh"],
az vm extension set \
} ```
-Azure CLI command:
+1. Create the script file by using the text editor of your choice or by using the following CLI command:
-```azurecli
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM --name customScript \
- --publisher Microsoft.Azure.Extensions \
- --settings ./script-config.json
-```
+ ```azurecli
+ cat <<EOF > script-config.json
+ {
+ "fileUris": ["https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-linux/scripts/config-music.sh"],
+ "commandToExecute": "./config-music.sh"
+ }
+ EOF
+ ```
+
+1. Run the following command:
+
+ ```azurecli
+ az vm extension set \
+ --resource-group myResourceGroup \
+ --vm-name myVM --name customScript \
+ --publisher Microsoft.Azure.Extensions \
+ --settings ./script-config.json
+ ```
### Example: Public configuration with no script file
+This example uses the following JSON-formatted content:
+ ```json { "commandToExecute": "apt-get -y update && apt-get install -y apache2" } ```
-Azure CLI command:
+Run the following command:
```azurecli az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM --name customScript \
+   --resource-group myResourceGroup \
+   --vm-name myVM --name customScript \
--publisher Microsoft.Azure.Extensions \
- --settings ./script-config.json
+ --settings '{"commandToExecute": "apt-get -y update && apt-get install -y apache2"}'
``` ### Example: Public and protected configuration files
-You use a public configuration file to specify the script file's URI. You use a protected configuration file to specify the command to be run.
-
-Public configuration file:
+Use a public configuration file to specify the script file's URI:
```json {
Public configuration file:
} ```
-Protected configuration file:
+Use a protected configuration file to specify the command to be run:
```json {
- "commandToExecute": "./config-music.sh <param1>"
+ "commandToExecute": "./config-music.sh"
} ```
-Azure CLI command:
+1. Create the public configuration file by using the text editor of your choice or by using the following CLI command:
-```azurecli
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name customScript \
- --publisher Microsoft.Azure.Extensions \
- --settings ./script-config.json \
- --protected-settings ./protected-config.json
-```
+ ```azurecli
+ cat <<EOF > script-config.json
+ {
+ "fileUris": ["https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-linux/scripts/config-music.sh"]
+ }
+ EOF
+ ```
-## Virtual machine scale sets
+1. Create the protected configuration file by using the text editor of your choice or by using the following CLI command:
-If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token for accessing the script in your storage account. The result is that the initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+ ```azurecli
+ cat <<EOF > protected-config.json
+ {
+ "commandToExecute": "./config-music.sh"
+ }
+ EOF
+ ```
-We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension), the [Azure CLI](/cli/azure/vmss/extension), or an [Azure Resource Manager template](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions) when you deploy the Custom Script Extension on a virtual machine scale set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
+1. Run the following command:
+
+ ```azurecli
+ az vm extension set \
+ --resource-group myResourceGroup \
+ --vm-name myVM \
+ --name customScript \
+ --publisher Microsoft.Azure.Extensions \
+ --settings ./script-config.json \
+ --protected-settings ./protected-config.json
+ ```
+
+## Virtual Machine Scale Sets
+
+If you deploy the Custom Script Extension from the Azure portal, you don't have control over the expiration of the SAS token to access the script in your storage account. The initial deployment works, but when the storage account's SAS token expires, any subsequent scaling operation fails because the Custom Script Extension can no longer access the storage account.
+
+We recommend that you use [PowerShell](/powershell/module/az.Compute/Add-azVmssExtension), the [Azure CLI](/cli/azure/vmss/extension), or an [Azure Resource Manager template](/azure/templates/microsoft.compute/virtualmachinescalesets/extensions) when you deploy the Custom Script Extension on a Virtual Machine Scale Set. This way, you can choose to use a managed identity or have direct control of the expiration of the SAS token for accessing the script in your storage account for as long as you need.
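As a hedged example, deploying the extension to a scale set with the Azure CLI might look like the following, assuming a scale set named *myScaleSet* and the configuration files from the earlier examples:

```azurecli
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings ./script-config.json \
  --protected-settings ./protected-config.json
```

If the scale set uses a manual upgrade policy, existing instances pick up the extension only after they're upgraded to the latest model.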
## Troubleshooting+ When the Custom Script Extension runs, the script is created or downloaded into a directory that's similar to the following example. The command output is also saved into this directory in `stdout` and `stderr` files. ```bash sudo ls -l /var/lib/waagent/custom-script/download/0/ ```
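To read the captured output directly, you can view those files; the sequence number `0` corresponds to the first run and may differ on your VM:

```bash
sudo cat /var/lib/waagent/custom-script/download/0/stdout
sudo cat /var/lib/waagent/custom-script/download/0/stderr
```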
-To troubleshoot, first check the Linux Agent Log and ensure that the extension ran:
+To troubleshoot, first check the Linux Agent log and ensure that the extension ran:
```bash sudo cat /var/log/waagent.log ```
-Look for the extension execution. It will look something like:
+Look for the extension execution. It looks something like:
```output 2018/04/26 17:47:22.110231 INFO [Microsoft.Azure.Extensions.customScript-2.0.6] [Enable] current handler state is: notinstalled
The Azure Script Extension produces a log, which you can find here:
sudo cat /var/log/azure/custom-script/handler.log ```
-Look for the individual execution. It will look something like:
+Look for the individual execution. It looks something like:
```output time=2018-04-26T17:47:23Z version=v2.0.6/git@1008306-clean operation=enable seq=0 event=start
time=2018-04-26T17:47:23Z version=v2.0.6/git@1008306-clean operation=enable seq=
Here you can see:
-* The `enable` command that starts this log.
-* The settings passed to the extension.
-* The extension downloading the file and the result of that.
-* The command being run and the result.
+- The `enable` command that starts this log.
+- The settings passed to the extension.
+- The extension downloading the file and the result of that action.
+- The command being run and the result.
You can also retrieve the execution state of the Custom Script Extension, including the actual arguments passed as `commandToExecute`, by using the Azure CLI:
The output looks like the following text:
] ```
-#### Azure CLI syntax issues
+### Azure CLI syntax issues
[!INCLUDE [azure-cli-troubleshooting.md](../../../includes/azure-cli-troubleshooting.md)] ## Next steps
-To see the code, current issues, and versions, go to the [custom-script-extension-linux repo on GitHub](https://github.com/Azure/custom-script-extension-linux).
+
+To see the code, current issues, and versions, see [custom-script-extension-linux](https://github.com/Azure/custom-script-extension-linux).
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
vm-linux Previously updated : 11/15/2021 Last updated : 04/18/2023
This extension supports the following OS distros, depending on driver support fo
| Distribution | Version | |||
-| Linux: Ubuntu | 16.04 LTS, 18.04 LTS, 20.04 LTS |
+| Linux: Ubuntu | 18.04 LTS, 20.04 LTS |
| Linux: Red Hat Enterprise Linux | 7.3, 7.4, 7.5, 7.6, 7.7, 7.8 | | Linux: CentOS | 7.3, 7.4, 7.5, 7.6, 7.7, 7.8 | > [!NOTE] > The latest supported CUDA drivers for NC-series VMs are currently 470.82.01. Later driver versions aren't supported on the K80 cards in NC. While the extension is being updated with this end of support for NC, install CUDA drivers manually for K80 cards on the NC-series.
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
+ ### Internet connectivity The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and has access.
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
The Network Watcher Agent extension can be configured for the following Linux di
| CentOS | 6.10 and 7 | > [!IMPORTANT]- > Keep in mind that Red Hat Enterprise Linux 6.x and Oracle Linux 6.x are already EOL. > RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [will end on 06/2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204). > Oracle Linux version 6.10 has [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) available, which [will end on 07/2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
virtual-machines Hc Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-overview.md
description: Learn about the preview support for the HC-series VM size in Azure.
Previously updated : 03/04/2023 Last updated : 04/18/2023
The following diagram shows the segregation of cores reserved for Azure Hypervis
| MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH, Platform MPI | | Additional Frameworks | UCX, libfabric, PGAS | | Azure Storage Support | Standard and Premium Disks (maximum 4 disks) |
-| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 16.04+, SLES 12 SP4+, WinServer 2016+ |
+| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ |
| Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
+ ## Next steps - Learn more about [Intel Xeon SP architecture](https://software.intel.com/content/www/us/en/develop/articles/intel-xeon-processor-scalable-family-technical-overview.html).
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Previously updated : 01/04/2023 Last updated : 04/18/2023
Linux server distributions that are not endorsed by Azure do not support Azure D
| Canonical | Ubuntu | 20.04-DAILY-LTS Gen2 |Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:latest | OS and data disk | | Canonical | Ubuntu | 18.04-LTS | Canonical:UbuntuServer:18.04-LTS:latest | OS and data disk | | Canonical | Ubuntu 18.04 | 18.04-DAILY-LTS | Canonical:UbuntuServer:18.04-DAILY-LTS:latest | OS and data disk |
-| Canonical | Ubuntu 16.04 | 16.04-DAILY-LTS | Canonical:UbuntuServer:16.04-DAILY-LTS:latest | OS and data disk |
-| Canonical | Ubuntu 14.04.5</br>[with Azure tuned kernel updated to 4.15 or later](disk-encryption-troubleshooting.md) | 14.04.5-LTS | Canonical:UbuntuServer:14.04.5-LTS:latest | OS and data disk |
-| Canonical | Ubuntu 14.04.5</br>[with Azure tuned kernel updated to 4.15 or later](disk-encryption-troubleshooting.md) | 14.04.5-DAILY-LTS | Canonical:UbuntuServer:14.04.5-DAILY-LTS:latest | OS and data disk |
| Oracle | Oracle Linux 8.6 | 8.6 | Oracle:Oracle-Linux:ol86-lvm:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.6 Gen 2 | 8.6 | Oracle:Oracle-Linux:ol86-lvm-gen2:latest | OS and data disk (see note below) | | Oracle | Oracle Linux 8.5 | 8.5 | Oracle:Oracle-Linux:ol85-lvm:latest | OS and data disk (see note below) |
Linux server distributions that are not endorsed by Azure do not support Azure D
| RedHat | RHEL 7.6 | 7.6 | RedHat:RHEL:7.6:latest | OS and data disk (see note below) | | RedHat | RHEL 7.5 | 7.5 | RedHat:RHEL:7.5:latest | OS and data disk (see note below) | | RedHat | RHEL 7.4 | 7.4 | RedHat:RHEL:7.4:latest | OS and data disk (see note below) |
-| RedHat | RHEL 7.3 | 7.3 | RedHat:RHEL:7.3:latest | OS and data disk (see note below) |
-| RedHat | RHEL 7.2 | 7.2 | RedHat:RHEL:7.2:latest | OS and data disk (see note below) |
| RedHat | RHEL 6.8 | 6.8 | RedHat:RHEL:6.8:latest | Data disk (see note below) | | RedHat | RHEL 6.7 | 6.7 | RedHat:RHEL:6.7:latest | Data disk (see note below) | | OpenLogic | CentOS 8-LVM | 8-LVM | OpenLogic:CentOS-LVM:8-LVM:latest | OS and data disk |
Linux server distributions that are not endorsed by Azure do not support Azure D
| OpenLogic | CentOS 7.6 | 7.6 | OpenLogic:CentOS:7.6:latest | OS and data disk | | OpenLogic | CentOS 7.5 | 7.5 | OpenLogic:CentOS:7.5:latest | OS and data disk | | OpenLogic | CentOS 7.4 | 7.4 | OpenLogic:CentOS:7.4:latest | OS and data disk |
-| OpenLogic | CentOS 7.3 | 7.3 | OpenLogic:CentOS:7.3:latest | OS and data disk |
-| OpenLogic | CentOS 7.2n | 7.2n | OpenLogic:CentOS:7.2n:latest | OS and data disk |
-| OpenLogic | CentOS 7.1 | 7.1 | OpenLogic:CentOS:7.1:latest | Data disk only |
-| OpenLogic | CentOS 7.0 | 7.0 | OpenLogic:CentOS:7.0:latest | Data disk only |
| OpenLogic | CentOS 6.8 | 6.8 | OpenLogic:CentOS:6.8:latest | Data disk only | | SUSE | openSUSE 42.3 | 42.3 | SUSE:openSUSE-Leap:42.3:latest | Data disk only | | SUSE | SLES 12-SP4 | 12-SP4 | SUSE:SLES:12-SP4:latest | Data disk only |
sudo mount -a
## Networking requirements To enable the Azure Disk Encryption feature, the Linux VMs must meet the following network endpoint configuration requirements:+ - To get a token to connect to your key vault, the Linux VM must be able to connect to an Azure Active Directory endpoint, \[login.microsoftonline.com\]. - To write the encryption keys to your key vault, the Linux VM must be able to connect to the key vault endpoint. - The Linux VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files.
virtual-machines Prepay Suse Software Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/prepay-suse-software-charges.md
RedHat plan discounts apply only to the VM size that you select at the time of p
## Self-service cancellation and exchanges not allowed
-You can't cancel or exchange a SUSE or RedHat plan that you bought yourself. If you want to cancel or exchange a reservation, you can [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) to have Azure support make the cancellation or exchange for you.
+You can't cancel or exchange a SUSE or RedHat plan that you bought yourself.
Check your usage before purchasing to make sure you buy the right plan. For help to identify what to buy, see [Understand how the software plan discount is applied](../../cost-management-billing/reservations/understand-suse-reservation-charges.md).
virtual-machines Hybrid Use Benefit Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/hybrid-use-benefit-licensing.md
Previously updated : 9/28/2022 Last updated : 4/18/2023 ms.devlang: azurecli # Explore Azure Hybrid Benefit for Windows VMs
-For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses and run Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines with Windows OS. This article goes over the steps on how to deploy new VMs with Azure Hybrid Benefit for Windows Server and how you can update existing running VMs. For more information about Azure Hybrid Benefit for Windows Server licensing and cost savings, see the [Azure Hybrid Benefit for Windows Server licensing page](https://azure.microsoft.com/pricing/hybrid-use-benefit/).
+For customers with Software Assurance or subscription licenses, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses to get Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines with Windows OS. This article goes over the steps on how to deploy new VMs with Azure Hybrid Benefit for Windows Server and how you can update existing running VMs. For more information about Azure Hybrid Benefit for Windows Server licensing and cost savings, see the [Azure Hybrid Benefit for Windows Server licensing page](https://azure.microsoft.com/pricing/hybrid-use-benefit/).
-Each 2-processor license or each set of 16-core licenses is entitled to two instances of up to 8 cores, or one instance of up to 16 cores. The Azure Hybrid Benefit for Standard Edition licenses can only be used once either on-premises or in Azure. Datacenter Edition benefits allow for simultaneous usage both on-premises and in Azure.
+You'll need a minimum of 8 core licenses (Datacenter or Standard edition) per virtual machine. You may also run instances larger than 8 cores by allocating licenses equal to the core size of the instance. For example, 12 core licenses are required for a 12-core instance. However, 8 core licenses are still required if you run a 4-core instance. For customers with processor licenses, each 2-processor license is equivalent to 16 core licenses.
Using Azure Hybrid Benefit for Windows Server with any VMs running Windows Server OS is now supported in all regions, including VMs with additional software such as SQL Server or third-party marketplace software.
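As a hedged illustration, you can apply the benefit to an existing VM with the Azure CLI `--license-type` flag; the resource group and VM names are placeholders:

```azurecli
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --license-type Windows_Server
```

Setting `--license-type` to `None` removes the benefit again.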
virtual-machines Deploy Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure.md
Previously updated : 11/09/2018 Last updated : 04/19/2023 # Deploy IBM DB2 pureScale on Azure
The repository also has scripts for setting up a Grafana dashboard. You can use
The deploy.sh script creates and configures the Azure resources for this architecture. The script prompts you for the Azure subscription and virtual machines used in the target environment, and then performs the following operations: -- Sets up the resource group, virtual network, and subnets on Azure for the installation.--- Sets up the network security groups and SSH for the environment.--- Sets up multiple NICs on both the shared storage and the DB2 pureScale virtual machines.--- Creates the shared storage virtual machines. If you use Storage Spaces Direct or another storage solution, see [Storage Spaces Direct overview](/windows-server/storage/storage-spaces/storage-spaces-direct-overview).--- Creates the jumpbox virtual machine.--- Creates the DB2 pureScale virtual machines.--- Creates the witness virtual machine that DB2 pureScale pings. Skip this part of the deployment if your version of Db2 pureScale does not require a witness.--- Creates a Windows virtual machine to use for testing but doesn't install anything on it.
+- Sets up the resource group, virtual network, and subnets on Azure for the installation.
+- Sets up the network security groups and SSH for the environment.
+- Sets up multiple NICs on both the shared storage and the DB2 pureScale virtual machines.
+- Creates the shared storage virtual machines. If you use Storage Spaces Direct or another storage solution, see [Storage Spaces Direct overview](/windows-server/storage/storage-spaces/storage-spaces-direct-overview).
+- Creates the jumpbox virtual machine.
+- Creates the DB2 pureScale virtual machines.
+- Creates the witness virtual machine that DB2 pureScale pings. Skip this part of the deployment if your version of Db2 pureScale does not require a witness.
+- Creates a Windows virtual machine to use for testing but doesn't install anything on it.
Next, the deployment scripts set up an iSCSI virtual storage area network (vSAN) for shared storage on Azure. In this example, iSCSI connects to the shared storage cluster. In the original customer solution, GlusterFS was used. However, IBM no longer supports this approach. To maintain your support from IBM, you need to use a supported iSCSI-compatible file system. Microsoft offers Storage Spaces Direct (S2D) as an option.
This solution also gives you the option to install the iSCSI targets as a single
The deployment scripts run these general steps:
-1. Set up a shared storage cluster on Azure. This step involves at least two Linux nodes.
-
-2. Set up an iSCSI Direct interface on target Linux servers for the shared storage cluster.
-
-3. Set up the iSCSI initiator on the Linux virtual machines. The initiator will access the shared storage cluster by using an iSCSI target. For setup details, see [How To Configure An iSCSI Target And Initiator In Linux](https://www.rootusers.com/how-to-configure-an-iscsi-target-and-initiator-in-linux/) in the RootUsers documentation.
-
-4. Install the shared storage layer for the iSCSI interface.
+1. Set up a shared storage cluster on Azure. This step involves at least two Linux nodes.
+2. Set up an iSCSI Direct interface on target Linux servers for the shared storage cluster.
+3. Set up the iSCSI initiator on the Linux virtual machines. The initiator will access the shared storage cluster by using an iSCSI target. For setup details, see [How To Configure An iSCSI Target And Initiator In Linux](https://www.rootusers.com/how-to-configure-an-iscsi-target-and-initiator-in-linux/) in the RootUsers documentation. A sketch of the initiator-side commands follows this list.
+4. Install the shared storage layer for the iSCSI interface.
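The following sketch shows hedged initiator-side commands for step 3 on a Linux VM; the portal address `10.0.0.5:3260` is a placeholder for your shared storage cluster, not a value from the deployment scripts:

```bash
# Discover iSCSI targets exposed by the shared storage cluster (portal address is a placeholder)
sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.5:3260

# Log in to the discovered targets so the shared disks appear as local block devices
sudo iscsiadm -m node --login

# Confirm that the new block devices are visible
lsblk
```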
After the scripts create the iSCSI device, the final step is to install DB2 pureScale. As part of the DB2 pureScale setup, [IBM Spectrum Scale](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/t0057167.html) (formerly known as GPFS) is compiled and installed on the GlusterFS cluster. This clustered file system enables DB2 pureScale to share data among the virtual machines that run the DB2 pureScale engine. For more information, see the [IBM Spectrum Scale](https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/ibmspectrumscale42_welcome.html) documentation on the IBM website.
The GitHub repository includes DB2server.rsp, a response (.rsp) file that enable
||-|-| | Welcome | | New Install | | Choose a Product | | DB2 Version 11.1.3.3. Server Editions with DB2 pureScale |
-| Configuration | Directory | /data1/opt/ibm/db2/V11.1 |
+| Configuration | Directory | `/data1/opt/ibm/db2/V11.1` |
| | Select the installation type | Typical | | | I agree to the IBM terms | Checked | | Instance Owner | Existing User For Instance, User name | DB2sdin1 | | Fenced User | Existing User, User name | DB2sdfe1 |
-| Cluster File System | Shared disk partition device path | /dev/dm-2 |
-| | Mount point | /DB2sd\_1804a |
-| | Shared disk for data | /dev/dm-1 |
-| | Mount point (Data) | /DB2fs/datafs1 |
-| | Shared disk for log | /dev/dm-0 |
-| | Mount point (Log) | /DB2fs/logfs1 |
-| | DB2 Cluster Services Tiebreaker. Device path | /dev/dm-3 |
-| Host List | d1 [eth1], d2 [eth1], cf1 [eth1], cf2 [eth1] | |
+| Cluster File System | Shared disk partition device path | `/dev/dm-2` |
+| | Mount point | `/DB2sd_1804a` |
+| | Shared disk for data | `/dev/dm-1` |
+| | Mount point (Data) | `/DB2fs/datafs1` |
+| | Shared disk for log | `/dev/dm-0` |
+| | Mount point (Log) | `/DB2fs/logfs1` |
+| | DB2 Cluster Services Tiebreaker. Device path | `/dev/dm-3` |
+| Host List | d1 [eth1], d2 [eth1], cf1 [eth1], cf2 [eth1] | |
| | Preferred primary CF | cf1 | | | Preferred secondary CF | cf2 | | Response File and Summary | first option | Install DB2 Server Edition with the IBM DB2 pureScale feature and save my settings in a response file |
-| | Response file name | /root/DB2server.rsp |
+| | Response file name | `/root/DB2server.rsp` |
### Notes about this deployment -- The values for /dev-dm0, /dev-dm1, /dev-dm2, and /dev-dm3 can change after a restart on the virtual machine where the setup takes place (d0 in the automated script). To find the right values, you can issue the following command before completing the response file on the server where the setup will run:
+- The values for `/dev/dm-0`, `/dev/dm-1`, `/dev/dm-2`, and `/dev/dm-3` can change after a restart on the virtual machine where the setup takes place (d0 in the automated script). To find the right values, you can issue the following command before completing the response file on the server where the setup will run:
+ ```bash
+ sudo ls -als /dev/mapper
```
- [root\@d0 rhel]\# ls -als /dev/mapper
+
+ ```output
total 0 0 drwxr-xr-x 2 root root 140 May 30 11:07 . 0 drwxr-xr-x 19 root root 4060 May 30 11:31 ..
The GitHub repository includes DB2server.rsp, a response (.rsp) file that enable
``` - The setup scripts use aliases for the iSCSI disks so that the actual names can be found easily.--- When the setup script is run on d0, the **/dev/dm-\*** values might be different on d1, cf0, and cf1. The difference in values doesn't affect the DB2 pureScale setup.
+- When the setup script is run on d0, the `/dev/dm-*` values might be different on d1, cf0, and cf1. The difference in values doesn't affect the DB2 pureScale setup.
## Troubleshooting and known issues The GitHub repo includes a knowledge base that the authors maintain. It lists potential problems you might have and resolutions you can try. For example, known problems can happen when: -- You're trying to reach the gateway IP address.--- You're compiling General Public License (GPL).--- The security handshake between hosts fails.--- The DB2 installer detects an existing file system.--- You're manually installing IBM Spectrum Scale.--- You're installing DB2 pureScale when IBM Spectrum Scale is already created.--- You're removing DB2 pureScale and IBM Spectrum Scale.
+- You're trying to reach the gateway IP address.
+- You're compiling General Public License (GPL).
+- The security handshake between hosts fails.
+- The DB2 installer detects an existing file system.
+- You're manually installing IBM Spectrum Scale.
+- You're installing DB2 pureScale when IBM Spectrum Scale is already created.
+- You're removing DB2 pureScale and IBM Spectrum Scale.
For more information about these and other known problems, see the kb.md file in the [DB2onAzure](https://aka.ms/DB2onAzure) repo. ## Next steps -- [Creating required users for a DB2 pureScale Feature installation](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/t0055374.html?pos=2)--- [DB2icrt - Create instance command](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0002057.html)--- [DB2 pureScale Clusters Data Solution](https://www.ibmbigdatahub.com/blog/db2-purescale-clustered-database-solution-part-1)--- [IBM Data Studio](https://www.ibm.com/developerworks/downloads/im/data/https://docsupdatetracker.net/index.html/)--- [Azure Virtual Data Center Lift and Shift Guide](https://azure.microsoft.com/resources/azure-virtual-datacenter-lift-and-shift-guide/)
+- [Creating required users for a DB2 pureScale Feature installation](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/t0055374.html?pos=2)
+- [DB2icrt - Create instance command](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0002057.html)
+- [DB2 pureScale Clusters Data Solution](https://www.ibmbigdatahub.com/blog/db2-purescale-clustered-database-solution-part-1)
+- [IBM Data Studio](https://www.ibm.com/developerworks/downloads/im/data/https://docsupdatetracker.net/index.html/)
+- [Azure Virtual Data Center Lift and Shift Guide](https://azure.microsoft.com/resources/azure-virtual-datacenter-lift-and-shift-guide/)
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
After you finish the Cloud Access enablement steps, Red Hat validates your eligi
## Use the Red Hat Gold Images from the Azure portal
-1. After your Azure subscription receives access to Red Hat Gold Images, you can locate them in the [Azure portal](https://portal.azure.com). Go to **Create a Resource** > **See all**.
+1. After your Azure subscription receives access to Red Hat Gold Images, you can locate them in the [Azure portal](https://portal.azure.com). Go to **Create a Resource** > **Marketplace**.
1. At the top of the page, you'll see that you have private offers.
- ![Marketplace private offers](./media/rhel-byos-privateoffers.png)
+ ![Marketplace private offers](./media/rhel-byos-privateoffers-2.png)
1. Select the purple link, or scroll down to the bottom of the page to see your private offers.
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
Last updated 04/14/2023-+

# Configuring Azure Policy with network groups in Azure Virtual Network Manager
With network groups, your policy definition includes your conditional expression
> >If you need to parameterize the network group, you can utilize an Azure Resource Manager template to create the policy definition and assignment.
+When Azure Policy is used with Azure Virtual Network Manager, the policy targets a [Resource Provider mode](../governance/policy/concepts/definition-structure.md#resource-provider-modes) of `Microsoft.Network.Data`. Because of this mode, you need to specify a *policyType* of `Custom` in your policy definition. When you [create a policy to dynamically add members](how-to-exclude-elements.md) in Virtual Network Manager, the *policyType* is set automatically when the policy is created. You only need to choose `Custom` yourself when [creating a new policy definition](../governance/policy/tutorials/create-and-manage.md) through Azure Policy or other tooling outside of the Virtual Network Manager dashboard.
+
+Here's a sample of a policy definition with the `policyType` property set to `Custom`.
+
+```json
+
+"properties": {
+ "displayName": "myProdAVNM",
+ "policyType": "Custom",
+ "mode": "Microsoft.Network.Data",
+ "metadata": {
+ "category": "Azure Virtual Network Manager",
+ "createdBy": "--",
+ "createdOn": "2023-04-10T15:35:35.9308987Z",
+ "updatedBy": null,
+ "updatedOn": null
+ }
+}
+
+```
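
If you create the definition with tooling instead of the portal, the mode can be set explicitly. Here's a minimal Azure CLI sketch; the definition name and the `rules.json` file are placeholders, not values from this article:

```azurecli
# Create a custom policy definition that targets the Microsoft.Network.Data mode.
# Definitions created this way are reported with a policyType of "Custom" automatically.
az policy definition create \
  --name "avnm-dynamic-membership" \
  --display-name "AVNM dynamic network group membership" \
  --mode "Microsoft.Network.Data" \
  --rules rules.json
```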
Learn more about [policy definition structure](../governance/policy/concepts/definition-structure.md).

## Policy assignments
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Create three virtual networks using the portal. Each virtual network has a tag o
1. Select **Next** or the **IP addresses** tab and configure the following network address spaces:
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-ip.png" alt-text="Screenshot of create a virtual network IP addresses page." lightbox="./media/create-virtual-network-manager-portal/create-vnet-ip.png":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-ip.png" alt-text="Screenshot of create a virtual network IP addresses page.":::
| Setting | Value |
| -- | -- |
In this task, you manually add two virtual networks for your Mesh configuration
Using [Azure Policy](concept-azure-policy-integration.md), you define a condition to dynamically add two virtual networks to your network group when the name of the virtual network includes **prod** using these steps:
-1. From the list of network groups, select **ng-learn-prod-eastus-001** and select **Create azure policy** under *Create policy to dynamically add members*.
+1. From the list of network groups, select **ng-learn-prod-eastus-001** and select **Create Azure policy** under *Create policy to dynamically add members*.
:::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
-1. On the **Create azure policy** page, select or enter the following information:
+1. On the **Create Azure policy** page, select or enter the following information:
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-conditional.png" alt-text="Screenshot of create a network group conditional statements tab.":::
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Title: 'Tutorial: Create a secured hub and spoke network'
-description: In this tutorial, you learn how to create a hub and spoke network with Azure Virtual Network Manager. Then you secure all your virtual networks with a security policy.
+description: In this tutorial, you learn how to create a hub and spoke network topology for your virtual networks using Azure Virtual Network Manager. Then you secure your network by blocking outbound traffic on ports 80 and 443.
Previously updated : 03/22/2023- Last updated : 04/14/2023+

# Tutorial: Create a secured hub and spoke network
In this tutorial, you learn how to:
## Prerequisite

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Before you can complete steps in this tutorial, you must first [create an Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance.
+* Before you can complete steps in this tutorial, you must first [create an Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance. The instance needs to include the **Connectivity** and **Security admin** features. This tutorial uses a Virtual Network Manager instance named **vnm-learn-eastus-001**.
## Create virtual networks
-This procedure walks you through creating three virtual networks. One is in the *West US* region and the other two are in the *East US* region.
+This procedure walks you through creating three virtual networks that will be connected using the hub and spoke network topology.
1. Sign in to the [Azure portal](https://portal.azure.com/).
This procedure walks you through creating three virtual networks. One is in the
1. On the *Basics* tab, enter or select the following information:
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/create-hub-vnet-basic.png" alt-text="Screenshot of basics tab for hub and spoke virtual network.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-basic.png" alt-text="Screenshot of basics tab for hub and spoke virtual network.":::
| Setting | Value |
| - | -- |
| Subscription | Select the subscription you want to deploy this virtual network into. |
- | Resource group | Select or create a new resource group to store the virtual network. This quickstart uses a resource group named **myAVNMResourceGroup**. |
- | Name | Enter **VNet-A-WestUS** for the virtual network name. |
- | Region | Select the **West US** region. |
+ | Resource group | Select or create a new resource group to store the virtual network. This quickstart uses a resource group named **rg-learn-eastus-001**. |
+ | Name | Enter **vnet-learn-prod-eastus-001** for the virtual network name. |
+ | Region | Select the **East US** region. |
1. Select **Next: IP Addresses** and configure the following network address space:
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/create-hub-vnet-addresses.png" alt-text="Screenshot of IP addresses tab for hub and spoke virtual network.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-ip.png" alt-text="Screenshot of IP addresses tab for hub and spoke virtual network.":::
| Setting | Value |
| -- | -- |
- | IPv4 address space | Enter **10.3.0.0/16** as the address space. |
+ | IPv4 address space | Enter **10.0.0.0/16** as the address space. |
| Subnet name | Enter the name **default** for the subnet. |
- | Subnet address space | Enter the subnet address space of **10.3.0.0/24**. |
+ | Subnet address space | Enter the subnet address space of **10.0.0.0/24**. |
1. Select **Review + create** and then select **Create** to deploy the virtual network.

1. Repeat steps 2-5 to create two more virtual networks into the same resource group with the following information:
- **Second virtual network**:
- * Name: **VNet-A-EastUS**
- * Region: **East US**
- * IPv4 address space: **10.4.0.0/16**
- * Subnet name: **default**
- * Subnet address space: **10.4.0.0/24**
- **Third virtual network**:
- * Name: **VNet-B-EastUS**
- * Region: **East US**
- * IPv4 address space: **10.5.0.0/16**
- * Subnet name: **default**
- * Subnet address space: **10.5.0.0/24**
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select the same subscription you selected in step 3. |
+ | Resource group | Select the **rg-learn-eastus-001**. |
+ | Name | Enter **vnet-learn-prod-eastus-002** and **vnet-learn-hub-eastus-001** for the two virtual networks. |
+ | Region | Select **(US) East US** |
+ | vnet-learn-prod-eastus-002 IP addresses | IPv4 address space: 10.1.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.1.0.0/24|
+ | vnet-learn-hub-eastus-001 IP addresses | IPv4 address space: 10.2.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.2.0.0/24|
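
If you'd rather script the virtual network creation than repeat the portal steps, a rough Azure CLI equivalent of the three virtual networks above looks like the following sketch. It uses the same names and address spaces as the tables; adjust to your environment:

```azurecli
az group create --name rg-learn-eastus-001 --location eastus

# Create the two prod spokes and the hub with matching /16 address spaces.
for vnet in vnet-learn-prod-eastus-001:10.0 vnet-learn-prod-eastus-002:10.1 vnet-learn-hub-eastus-001:10.2; do
  name="${vnet%%:*}"
  prefix="${vnet##*:}"
  az network vnet create \
    --resource-group rg-learn-eastus-001 \
    --location eastus \
    --name "$name" \
    --address-prefixes "${prefix}.0.0/16" \
    --subnet-name default \
    --subnet-prefixes "${prefix}.0.0/24"
done
```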
## Deploy a virtual network gateway
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. On the *Basics* tab, enter or select the following settings:
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/gateway-basics.png" alt-text="Screenshot of create the virtual network gateway basics tab." lightbox="./media/tutorial-create-secured-hub-and-spoke/gateway-basics-expanded.png":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/gateway-basics.png" alt-text="Screenshot of create the virtual network gateway basics tab.":::
| Setting | Value |
| -- | -- |
| Subscription | Select the subscription you want to deploy this virtual network into. |
- | Name | Enter **VNet-A-WestUS-GW** for the virtual network gateway name. |
+ | Name | Enter **gw-learn-hub-eastus-001** for the virtual network gateway name. |
| SKU | Select **VpnGW1** for the SKU. |
| Generation | Select **Generation1** for the generation. |
- | Virtual network | Select the **VNet-A-WestUS** for the VNet. |
- | Public IP address name | Enter the name **VNet-A-WestUS-GW-IP** for the public IP. |
+ | Virtual network | Select the **vnet-learn-hub-eastus-001** for the VNet. |
+ | **Public IP Address** | |
+ | Public IP address name | Enter the name **gwpip-learn-hub-eastus-001** for the public IP. |
+ | **SECOND PUBLIC IP ADDRESS** | |
+ | Public IP address name | Enter the name **gwpip-learn-hub-eastus-002** for the public IP. |
-1. Select **Review + create** and then select **Create** after validation has passed. The deployment of a virtual network gateway can take about 30 minutes. You can move on to the next section while waiting for this deployment to complete. However, you may find **VNet-A-WestUS-GW** doesn't display that it has a gateway due to timing and sync across the Azure portal.
+1. Select **Review + create** and then select **Create** after validation has passed. The deployment of a virtual network gateway can take about 30 minutes. You can move on to the next section while waiting for this deployment to complete. However, you may find **gw-learn-hub-eastus-001** doesn't display that it has a gateway due to timing and sync across the Azure portal.
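
As an alternative to the portal, the gateway deployment can be scripted. The following is a minimal sketch with the names used in this tutorial; the GatewaySubnet prefix is an assumption, and the second (active-active) public IP is omitted:

```azurecli
# The gateway requires a subnet named GatewaySubnet in the hub virtual network.
az network vnet subnet create \
  --resource-group rg-learn-eastus-001 \
  --vnet-name vnet-learn-hub-eastus-001 \
  --name GatewaySubnet \
  --address-prefixes 10.2.255.0/27

az network public-ip create \
  --resource-group rg-learn-eastus-001 \
  --name gwpip-learn-hub-eastus-001 \
  --sku Standard

# --no-wait returns immediately; the gateway itself takes roughly 30 minutes to deploy.
az network vnet-gateway create \
  --resource-group rg-learn-eastus-001 \
  --name gw-learn-hub-eastus-001 \
  --vnet vnet-learn-hub-eastus-001 \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --public-ip-addresses gwpip-learn-hub-eastus-001 \
  --no-wait
```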
## Create a dynamic network group
-1. Go to your Azure Virtual Network Manager instance. This tutorial assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
+1. Go to your Azure Virtual Network Manager instance. This tutorial assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide. The network group in this tutorial is called **ng-learn-prod-eastus-001**.
1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
1. On the **Create a network group** screen, enter the following information:
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-basics.png" alt-text="Screenshot of the Basics tab on Create a network group page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-network-group.png" alt-text="Screenshot of the Basics tab on Create a network group page.":::
| Setting | Value |
| - | -- |
- | Name | Enter **myNetworkGroupB** for the network group name. |
+ | Name | Enter **ng-learn-prod-eastus-001** for the network group name. |
| Description | Provide a description about this network group. |

1. Select **Create** to create the virtual network group.

1. From the **Network groups** page, select the created network group from above to configure the network group.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-page.png" alt-text="Screenshot of the network groups page.":::
1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.

    :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/define-dynamic-membership.png" alt-text="Screenshot of the defined dynamic membership button.":::
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Setting | Value |
| - | -- |
- | Policy name | Enter **VNetAZPolicy** in the text box. |
+ | Policy name | Enter **azpol-learn-prod-eastus-001** in the text box. |
| Scope | Select **Select Scopes** and choose your current subscription. |
| Criteria | |
| Parameter | Select **Name** from the drop-down.|
| Operator | Select **Contains** from the drop-down.|
- | Condition | Enter **-EastUS** to dynamically add the two East US virtual networks into this network group. |
+ | Condition | Enter **-prod** for the condition in the text box. |
+
+1. Select **Preview resources** to view the **Effective virtual networks** page and select **Close**. This page shows the virtual networks that will be added to the network group based on the conditions defined in Azure Policy.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/effective-virtual-networks.png" alt-text="Screenshot of Effective virtual networks page with results of conditional statement.":::
+
+1. Select **Save** to deploy the group membership. It can take up to one minute for the policy to take effect and be added to your network group.
+1. On the **Network Group** page under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy. The **Source** is listed as **azpol-learn-prod-eastus-001**.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of dynamic group membership under Group Membership.":::
-1. Select **Save** to deploy the group membership.
-1. Under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy.
## Create a hub and spoke connectivity configuration
-1. Select **Configuration** under *Settings*, then select **+ Add a configuration**. Select **Connectivity** from the drop-down menu.
+1. Select **Configurations** under **Settings**, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
+1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration.
-1. On the **Basics** tab, enter and select the following information for the connectivity configuration:
+1. On the **Basics** page, enter the following information, and select **Next: Topology >**.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value |
| - | -- |
- | Name | Enter **HubA** for the name of the configuration |
- | Description | Provide a description about what this connectivity configuration will do. |
+ | Name | Enter **cc-learn-prod-eastus-001**. |
+ | Description | *(Optional)* Provide a description about this connectivity configuration. |
-1. Select **Next: Topology >**. Select **Hub and Spoke** under the **Topology** setting. This will reveal other settings.
+1. On the **Topology** tab, select **Hub and Spoke**. This reveals other settings.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/hub-configuration.png" alt-text="Screenshot of selecting a hub for the connectivity configuration.":::
-1. Select **Select a hub** under **Hub** setting. Then, select **VNet-A-WestUS** to serve as your network hub and select **Select**.
+1. Select **Select a hub** under **Hub** setting. Then, select **vnet-learn-hub-eastus-001** to serve as your network hub and select **Select**.
:::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-hub.png" alt-text="Screenshot of Select a hub configuration.":::

    > [!NOTE]
    > Depending on the timing of deployment, you may not see the target hub virtual network listed as having a gateway under **Has gateway**. This is due to the deployment of the virtual network gateway, which can take up to 30 minutes and may not display immediately in the various Azure portal views.
-1. Under **Spoke network groups**, select **+ add**. Then, select **myNetworkGroupB** for the network group and select **Select**.
+1. Under **Spoke network groups**, select **+ add**. Then, select **ng-learn-prod-eastus-001** for the network group and select **Select**.
- :::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-network-group.png" alt-text="Screenshot of Add network groups page.":::
+ :::image type="content" source="media/create-virtual-network-manager-portal/add-network-group-configuration.png" alt-text="Screenshot of Add network groups page.":::
1. After you've added the network group, select the following options. Then select add to create the connectivity configuration.
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Setting | Value |
| - | -- |
- | Direct Connectivity | Select the checkbox for **Enable connectivity within network group**. This setting will allow spoke virtual networks in the network group in the same region to communicate with each other directly. |
- | Hub as gateway | Select the checkbox for **Use hub as a gateway**. |
+ | Direct Connectivity | Select the checkbox for **Enable connectivity within network group**. This setting allows spoke virtual networks in the network group in the same region to communicate with each other directly. |
| Global Mesh | Leave **Enable mesh connectivity across regions** option **unchecked**. This setting isn't required as both spokes are in the same region |
+ | Hub as gateway | Select the checkbox for **Hub as a gateway**. |
1. Select **Next: Review + create >** and then create the connectivity configuration.

## Deploy the connectivity configuration
-Make sure the virtual network gateway has been successfully deployed before deploying the connectivity configuration. If you deploy a hub and spoke configuration with **Use the hub as a gateway** enabled and there's no gateway, the deployment will fail. For more information, see [use hub as a gateway](concept-connectivity-configuration.md#use-hub-as-a-gateway).
+Make sure the virtual network gateway has been successfully deployed before deploying the connectivity configuration. If you deploy a hub and spoke configuration with **Use the hub as a gateway** enabled and there's no gateway, the deployment fails. For more information, see [use hub as a gateway](concept-connectivity-configuration.md#use-hub-as-a-gateway).
1. Select **Deployments** under *Settings*, then select **Deploy configuration**.

    :::image type="content" source="./media/create-virtual-network-manager-portal/deployments.png" alt-text="Screenshot of deployments page in Network Manager.":::
-1. Select **Include connectivity configurations in your goal state** and **HubA** as the **Connectivity configurations** setting. Then select **West US** and **East US** as the target regions and select **Next**.
+1. Select the following settings:
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deploy-configuration.png" alt-text="Screenshot of deploy a configuration page.":::
+
+ | Setting | Value |
+ | - | -- |
+ | Configurations | Select **Include connectivity configurations in your goal state**. |
+ | Connectivity configurations | Select **cc-learn-prod-eastus-001**. |
+ | Target regions | Select **East US** as the deployment region. |
+
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-configuration.png" alt-text="Screenshot of deploy a configuration page.":::
+1. Select **Next** and then select **Deploy** to complete the deployment.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-confirmation.png" alt-text="Screenshot of deployment confirmation message.":::
-1. Select **Deploy**. You should now see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete.
+1. The deployment displays in the list for the selected region. The deployment of the configuration can take a few minutes to complete.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deployment-in-progress.png" alt-text="Screenshot of deployment in progress in deployment list.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-in-progress.png" alt-text="Screenshot of configuration deployment in progress status.":::
-## Create security configuration
+## Create a security admin configuration
1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration.
-1. Enter the name **mySecurityConfig** for the configuration, then select **Next: Rule collections**.
+1. Enter the name **sac-learn-prod-eastus-001** for the configuration, then select **Next: Rule collections**.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/security-admin-configuration.png" alt-text="Screenshot of Security Admin configuration page.":::
-1. Enter the name **myRuleCollection** for the rule collection and select **myNetworkGroupB** for the target network group. Then select **+ Add**.
+1. Enter the name **rc-learn-prod-eastus-001** for the rule collection and select **ng-learn-prod-eastus-001** for the target network group. Then select **+ Add**.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-rule-collection.png" alt-text="Screenshot of add a rule collection page.":::
Make sure the virtual network gateway has been successfully deployed before depl
| Action | Select **Deny** |
| Direction | Select **Outbound** |
| Protocol | Select **TCP** |
+ | **Source** | |
+ | Source type | Select **IP** |
+ | Source IP addresses | Enter **\*** |
+ | **Destination** | |
+ | Destination type | Select **IP addresses** |
+ | Destination IP addresses | Enter **\*** |
| Destination port | Enter **80, 443** |

1. Select **Add** to add the rule collection to the configuration.
Make sure the virtual network gateway has been successfully deployed before depl
1. Select **Deployments** under *Settings*, then select **Deploy configurations**.
-1. Under *Configurations*, Select **Include security admin in your goal state** and the **mySecurityConfig** configuration you created in the last section. Then select **West US** and **East US** as the target regions and select **Next**.
+1. Under *Configurations*, select **Include security admin in your goal state** and the **sac-learn-prod-eastus-001** configuration you created in the last section. Then select **East US** as the target region and select **Next**.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-security.png" alt-text="Screenshot of deploying a security configuration.":::
-1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
+1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take a few minutes to complete.
## Verify deployment of configurations

### Verify from a virtual network
-1. Go to **VNet-A-EastUS** virtual network and select **Network Manager** under *Settings*. You'll see the **HubA** connectivity configuration applied.
+1. Go to the **vnet-learn-hub-eastus-001** virtual network and select **Network Manager** under **Settings**. The **Connectivity configurations** tab lists the **cc-learn-prod-eastus-001** connectivity configuration applied in the virtual network.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vnet-connectivity-configuration.png" alt-text="Screenshot of connectivity configuration applied to the virtual network.":::
-1. Select **Peerings** under *Settings*. You'll see virtual network peerings created by Virtual Network Manager with *AVNM* in the name.
+1. Select the **Security admin configurations** tab and expand **Outbound** to list the security admin rules applied to this virtual network.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vnet-peerings.png" alt-text="Screenshot of virtual network peerings created by Virtual Network Manager.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/verify-security-admin-configuration.png" alt-text="Screenshot of security admin configuration applied to the virtual network.":::
-1. Select the **SecurityAdmin** tab to see the security admin rules applied to this virtual network.
+1. Select **Peerings** under **Settings** to list the virtual network peerings created by Virtual Network Manager. Their names start with **ANM_**.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vnet-admin-configuration.png" alt-text="Screenshot of security admin rules applied to the virtual network.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vnet-peerings.png" alt-text="Screenshot of virtual network peerings created by Virtual Network Manager." lightbox="media/tutorial-create-secured-hub-and-spoke/vnet-peerings-large.png":::
### Verify from a VM
-1. Deploy a test Windows VM into **VNet-A-EastUS**.
+1. [Deploy a test virtual machine](../virtual-machines/linux/quick-create-portal.md) into **vnet-learn-prod-eastus-001**.
-1. Go to the test VM created in *VNet-A-EastUS* and select **Networking** under *Settings*. Select **Outbound port rules** and you'll see the security admin rule applied.
+1. Go to the test VM created in *vnet-learn-prod-eastus-001* and select **Networking** under *Settings*. Select **Outbound port rules** and verify the **DENY_INTERNET** rule is applied.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vm-security-rules.png" alt-text="Screenshot of test VM's network security rules.":::
-1. Select the network interface name.
-
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vm-network-settings.png" alt-text="Screenshot of test VM's network settings.":::
-
-1. Then select **Effective routes** under *Help* to see the routes for the virtual network peerings. The `10.3.0.0/16` route with the next hop of `VNetGlobalPeering` is the route to the hub virtual network. The `10.5.0.0/16` route with the next hop of `ConnectedGroup` is route to the other spoke virtual network. All spokes virtual network is in a *ConnectedGroup* when **Transitivity** is enabled.
+1. Select the network interface name and select **Effective routes** under **Help** to verify the routes for the virtual network peerings. The `10.2.0.0/16` route with the **Next Hop Type** of `VNet peering` is the route to the hub virtual network.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/effective-routes.png" alt-text="Screenshot of effective routes from test VM network interface." lightbox="./media/tutorial-create-secured-hub-and-spoke/effective-routes-expanded.png" :::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/effective-routes.png" alt-text="Screenshot of effective routes from test VM network interface." :::
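
The same check works from the command line. Here's a hedged sketch; the NIC name is a placeholder for the test VM's NIC:

```azurecli
# Look for the 10.2.0.0/16 entry whose next hop indicates the peering to the hub.
az network nic show-effective-route-table \
  --resource-group rg-learn-eastus-001 \
  --name <test-vm-nic-name> \
  --output table
```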
## Clean up resources
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
Title: How Accelerated Networking works in Linux and FreeBSD VMs description: How Accelerated Networking Works in Linux and FreeBSD VMs- - vm-linux Previously updated : 02/15/2022 Last updated : 04/18/2023
When a VM is created in Azure, a synthetic network interface is created for each
If the VM is configured with Accelerated Networking, a second network interface is created for each virtual NIC that is configured. The second interface is an SR-IOV Virtual Function (VF) offered by the physical network NIC in the Azure host. The VF interface shows up in the Linux guest as a PCI device, and uses the Mellanox "mlx4" or "mlx5" driver in Linux, since Azure hosts use physical NICs from Mellanox. Most network packets go directly between the Linux guest and the physical NIC without traversing the virtual switch or any other software that runs on the host. Because of the direct access to the hardware, network latency is lower and less CPU time is used to process network packets when compared with the synthetic interface.
-Different Azure hosts use different models of Mellanox physical NIC, so Linux automatically determines whether to use the "mlx4" or "mlx5" driver. Placement of the VM on an Azure host is controlled by the Azure infrastructure. With no customer option to specify which physical NIC that a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.
+Different Azure hosts use different models of Mellanox physical NIC. Linux automatically determines whether to use the "mlx4" or "mlx5" driver. The Azure infrastructure controls the placement of the VM on the Azure host. Because there's no customer option to specify which physical NIC a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.
-If a VM image doesn't include a driver for the Mellanox physical NIC, networking capabilities will continue to work at the slower speeds of the virtual NIC, even though the portal, Azure CLI, and Azure PowerShell will still show the Accelerated Networking feature as _enabled_.
+If a VM image doesn't include a driver for the Mellanox physical NIC, networking capabilities continue to work at the slower speeds of the virtual NIC. The portal, Azure CLI, and Azure PowerShell still display the Accelerated Networking feature as _enabled_.
FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD.
FreeBSD provides the same support for Accelerated Networking as Linux when runni
## Bonding
-The synthetic network interface and VF interface are automatically paired and act as a single interface in most aspects that are seen by applications. The bonding is done by the netvsc driver. Depending on the Linux distro, udev rules and scripts might help in naming the VF interface and in network configuration. If the VM is configured with multiple virtual NICs, the Azure host provides a unique serial number for each one. It's used to allow Linux to do the proper pairing of synthetic and VF interfaces for each virtual NIC.
+The synthetic network interface and VF interface are automatically paired and act as a single interface in most respects, as seen by applications. The bonding is done by the netvsc driver. Depending on the Linux distro, udev rules and scripts might help in naming the VF interface and in network configuration. If the VM is configured with multiple virtual NICs, the Azure host provides a unique serial number for each one. It's used to allow Linux to do the proper pairing of synthetic and VF interfaces for each virtual NIC.
The synthetic and VF interfaces both have the same MAC address. Together they constitute a single NIC from the standpoint of other network entities that exchange packets with the virtual NIC in the VM. Other entities don't take any special action because of the existence of both the synthetic interface and the VF interface.
-Both interfaces are visible via the "ifconfig" or "ip addr" command in Linux. Here's example "ifconfig" output in Ubuntu 18.04:
+Both interfaces are visible via the `ifconfig` or `ip addr` command in Linux. Here's an example `ifconfig` output:
```output
U1804:~$ ifconfig
TX packets 9103233 bytes 2183731687 (2.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
-The synthetic interface always has a name of the form "eth\<n\>". Depending on the Linux distro, the VF interface might have a name of the form "eth\<n\>", or a name of a different form because of a udev rule that does renaming.
+The synthetic interface always has a name of the form `eth<n>`. Depending on the Linux distro, the VF interface might have a name of the form `eth<n>`, or a name of a different form because of a `udev` rule that does renaming.
Whether a particular interface is the synthetic interface or the VF interface can be determined with the shell command line that shows the device driver used by the interface:
Whether a particular interface is the synthetic interface or the VF interface ca
$ ethtool -i <interface name> | grep driver
```
-If the driver is "hv_netvsc", it's the synthetic interface. The VF interface has a driver name that contains "mlx". The VF interface is also identifiable because its flags field includes "SLAVE." This flag indicates that it's under the control of the synthetic interface that has the same MAC address. Finally, IP addresses are assigned only to the synthetic interface, and the output of 'ifconfig' or 'ip addr' shows this distinction as well.
+If the driver is `hv_netvsc`, it's the synthetic interface. The VF interface has a driver name that contains "mlx". The VF interface is also identifiable because its flags field includes `SLAVE`. This flag indicates that it's under the control of the synthetic interface that has the same MAC address. Finally, IP addresses are assigned only to the synthetic interface, and the output of `ifconfig` or `ip addr` shows this distinction as well.
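
Putting those checks together, a small shell sketch (not from the article) that labels each interface by its driver might look like this:

```bash
# hv_netvsc => synthetic interface; a driver containing mlx4/mlx5 => VF interface.
for iface in $(ls /sys/class/net); do
    driver=$(ethtool -i "$iface" 2>/dev/null | awk '/^driver:/ {print $2}')
    printf '%-12s %s\n' "$iface" "${driver:-unknown}"
done
```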
## Application Usage
-Applications should interact only with the synthetic interface, just like in any other networking environment. Outgoing network packets are passed from the netvsc driver to the VF driver and then transmitted through the VF interface. Incoming packets are received and processed on the VF interface before being passed to the synthetic interface. Exceptions are incoming TCP SYN packets and broadcast/multicast packets that are processed by the synthetic interface only.
+Applications should interact only with the synthetic interface, just like in any other networking environment. Outgoing network packets are passed from the netvsc driver to the VF driver and then transmitted through the VF interface. Incoming packets are received and processed on the VF interface before being passed to the synthetic interface. Exceptions are incoming TCP SYN packets and broadcast/multicast packets processed by the synthetic interface only.
-You can verify that packets are flowing over the VF interface from the output of "ethtool -S eth\<n\>". The output lines that contain "vf" show the traffic over the VF interface. For example:
+You can verify that packets are flowing over the VF interface from the output of `ethtool -S eth<n>`. The output lines that contain `vf` show the traffic over the VF interface. For example:
```output
U1804:~# ethtool -S eth0 | grep ' vf_'
U1804:~# ethtool -S eth0 | grep ' vf_'
If these counters are incrementing on successive execution of the "ethtool" command, then network traffic is flowing over the VF interface.
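
For example, sampling the counters twice a few seconds apart makes the comparison easy (the interface name `eth0` is assumed):

```bash
# If the vf_ counters grow between the two samples, traffic is using the VF path.
ethtool -S eth0 | grep ' vf_'
sleep 5
ethtool -S eth0 | grep ' vf_'
```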
-The existence of the VF interface as a PCI device can be seen with the "lspci" command. For example, on the Generation 1 VM, you might see output similar to this (Generation 2 VMs don't have the legacy PCI devices):
+The existence of the VF interface as a PCI device can be seen with the `lspci` command. For example, on a Generation 1 VM, you might see output similar to the following output (Generation 2 VMs don't have the legacy PCI devices):
```output
U1804:~# lspci
The corresponding synthetic interface that is using the netvsc driver has detect
[ 7.480651] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1 ```
-The VF interface initially was named "eth1" by the Linux kernel. A udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.
+The VF interface initially was named "eth1" by the Linux kernel. A udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.
```output
[ 8.087962] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
The final message indicates that the data path has switched to using the VF inte
## Azure Host Servicing
-When Azure host servicing is performed, all VF interfaces might be temporarily removed from the VM during the servicing. When the servicing is complete, the VF interfaces are added back to the VM and normal operation continues. While the VM is operating without the VF interfaces, network traffic continues to flow through the synthetic interface without any disruption to applications. In this context, Azure host servicing might include updating the various components of the Azure network infrastructure or a full upgrade of the Azure host hypervisor software. Such servicing events occur at time intervals depending on the operational needs of the Azure infrastructure. These events typically can be expected several times over the course of a year. If applications interact only with the synthetic interface, the automatic switching between the VF interface and the synthetic interface ensures that workloads aren't disturbed by such servicing events. Latencies and CPU load might be higher during the periods because of the use of the synthetic interface. The duration of such periods is typically on the order of 30 seconds, but sometimes might be as long as a few minutes.
+When Azure host servicing is performed, all VF interfaces might be temporarily removed from the VM during the servicing. When the servicing is complete, the VF interfaces are added back to the VM. Normal operation continues. While the VM is operating without the VF interfaces, network traffic continues to flow through the synthetic interface without any disruption to applications. In this context, Azure host servicing might include updating the various components of the Azure network infrastructure or a full upgrade of the Azure host hypervisor software. Such servicing events occur at time intervals depending on the operational needs of the Azure infrastructure. These events typically can be expected several times over the course of a year. The automatic switching between the VF interface and the synthetic interface ensures that servicing events don't disturb workloads if applications interact only with the synthetic interface. Latencies and CPU load might be higher during the periods because of the use of the synthetic interface. The duration of such periods is typically on the order of 30 seconds, but sometimes might be as long as a few minutes.
-The removal and re-add of the VF interface during a servicing event is visible in the "dmesg" output in the VM. Here's typical output:
+The removal and readd of the VF interface during a servicing event is visible in the "dmesg" output in the VM. Here's typical output:
```output
[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
The data path has been switched away from the VF interface, and the VF interface
[ 8225.667978] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
```
-When the VF interface is re-added after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the re-add VF interface is like during the initial boot.
+When the VF interface is readded after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the readd VF interface is like during the initial boot.
```output
[ 8225.679672] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
The mlx5 driver initializes the VF interface, and the interface is now functiona
The data path has been switched back to the VF interface.
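
A quick way to review these transitions after the fact is to search the kernel log for the switch messages shown above:

```bash
# Lists every time the data path moved to or from the VF interface.
sudo dmesg | grep 'Data path switched'
```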
-## Disable/Enable Accelerated Networking in a non-running VM
+## Disable/Enable Accelerated Networking in a nonrunning VM
-Accelerated Networking can be toggled on a virtual NIC in a non-running VM with Azure CLI. For example:
+Accelerated Networking can be toggled on a virtual NIC in a nonrunning VM with Azure CLI. For example:
```azurecli
az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
```
-Disabling Accelerated Networking that is enabled in the guest VM produces a "dmesg" output. It's the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same "dmesg" output as when the VF interface is readded after Azure host servicing. These Azure CLI commands can be used to simulate Azure host servicing. With them you can verify that your applications do not incorrectly depend on direct interaction with the VF interface.
+Disabling Accelerated Networking that is enabled in the guest VM produces a "dmesg" output. It's the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same "dmesg" output as when the VF interface is readded after Azure host servicing. These Azure CLI commands can be used to simulate Azure host servicing. With them, you can verify that your applications don't incorrectly depend on direct interaction with the VF interface.
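
For example, re-enabling it on the same NIC and then checking the guest log closes the loop. The NIC and resource group names come from the example above:

```azurecli
az network nic update --name u1804895 --resource-group testrg --accelerated-networking true
# Inside the VM, confirm the switch back: sudo dmesg | grep 'switched to VF'
```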
## Next steps

* Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
-* Learn how to [create a VM with Accerelated Networking using Azure CLI](../virtual-network/create-vm-accelerated-networking-cli.md)
+* Learn how to [create a VM with Accelerated Networking using Azure CLI](../virtual-network/create-vm-accelerated-networking-cli.md)
* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview description: Learn how Accelerated Networking can improve the networking performance of Azure VMs.- - - Previously updated : 03/20/2023 Last updated : 04/18/2023
This article explains Accelerated Networking and describes its benefits, constra
The following diagram illustrates how two VMs communicate with and without Accelerated Networking:
-![Screenshot that shows communication between Azure VMs with and without Accelerated Networking.](./media/create-vm-accelerated-networking/accelerated-networking.png)
**Without Accelerated Networking**, all networking traffic in and out of the VM traverses the host and the virtual switch. The virtual switch provides all policy enforcement to network traffic. Policies include network security groups, access control lists, isolation, and other network virtualized services. To learn more about virtual switches, see [Hyper-V Virtual Switch](/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch).
Accelerated Networking has the following benefits:
- The benefits of Accelerated Networking apply only to the VM that enables it.

-- For best results, you should enable Accelerated Networking on at least two VMs in the same Azure virtual network. This feature has minimal impact on latency when you communicate across virtual networks or connect on-premises.
+- For best results, you should enable Accelerated Networking on at least two VMs in the same Azure virtual network. This feature has minimal effect on latency when you communicate across virtual networks or connect on-premises.
- You can't enable Accelerated Networking on a running VM. You can enable Accelerated Networking on a supported VM only when the VM is stopped and deallocated.
The following Linux and FreeBSD distributions from the Azure Gallery support Acc
If you use a custom image that supports Accelerated Networking, make sure you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Accelerated Networking also requires network configurations that exempt configuration of the virtual functions on the mlx4_en and mlx5_core drivers. Images with cloud-init version 19.4 or greater have networking correctly configured to support Accelerated Networking during provisioning.
+# [RHEL, CentOS](#tab/redhat)
+ The following example shows a sample configuration drop-in for `NetworkManager` on RHEL or CentOS: ```bash
unmanaged-devices=driver:mlx4_core;driver:mlx5_core
EOF
```
+# [openSUSE, SLES](#tab/suse)
+
+The following example shows a sample configuration drop-in for `networkd` on openSUSE or SLES:
+
+```bash
+sudo mkdir -p /etc/systemd/network
+sudo cat /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
+# to the synthetic interface
+[Match]
+Driver=mlx4_en mlx5_en mlx4_core mlx5_core
+[Link]
+Unmanaged=yes
+EOF
+```
+
+# [Ubuntu, Debian](#tab/ubuntu)
+ The following example shows a sample configuration drop-in for `networkd` on Ubuntu, Debian, or Flatcar: ```bash
Unmanaged=yes
EOF
```
+
+ ## Next steps - [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md) - [Create a VM with Accelerated Networking by using PowerShell](./create-vm-accelerated-networking-powershell.md)
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
This tutorial peers virtual networks in the same region. You can also peer virtu
## Prerequisites
+# [**Portal**](#tab/create-peering-portal)
+ - An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
- - If the virtual networks are in different subscriptions and Active Directory tenants, and you intend to separate the duty of managing the network belonging to each tenant, then add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network.
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
- - If the virtual networks are in different subscriptions and Active Directory tenants, and you do not intend to separate the duty of managing the network belonging to each tenant, then add the user from tenant A as a guest in the opposite tenant and assign them the correct permissions to establish a network peering. This user will be able to initiate and connect the network peering from each subscription.
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
- For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory). - Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
+# [**PowerShell**](#tab/create-peering-powershell)
-- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
+
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
+
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
+
+ - For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
+
+ - Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
- Azure PowerShell installed locally or Azure Cloud Shell.
This tutorial peers virtual networks in the same region. You can also peer virtu
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-In the following steps, you'll learn how to peer virtual networks in different subscriptions and Azure Active Directory tenants.
+# [**Azure CLI**](#tab/create-peering-cli)
+
+- An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
+
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
+
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
+
+ - For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
+
+ - Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
++
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+++
+In the following steps, learn how to peer virtual networks in different subscriptions and Azure Active Directory tenants.
You can use the same account that has permissions in both subscriptions or you can use separate accounts for each subscription to set up the peering. An account with permissions in both subscriptions can complete all of the steps without signing out and signing in to portal and assigning permissions.
Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object I
```azurecli-interactive
az ad user list --display-name UserB
```
-```bash
+```output
[ { "businessPhones": [],
echo $vnetidA
## Create virtual network - myVNetB
-In this section, you'll sign in as **UserB** and create a virtual network for the peering connection to **myVNetA**.
+In this section, you sign in as **UserB** and create a virtual network for the peering connection to **myVNetA**.
# [**Portal**](#tab/create-peering-portal)
Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object I
az ad user list --display-name UserA ```
-```bash
+```output
[ { "businessPhones": [],
echo $vnetidB
## Create peering connection - myVNetA to myVNetB
-You'll need the **Resource ID** for **myVNetB** from the previous steps to set up the peering connection.
+You need the **Resource ID** for **myVNetB** from the previous steps to set up the peering connection.
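
For reference, the CLI shape of this step is roughly the following. Run it in the subscription that contains **myVNetA**; the resource group and peering names here are placeholders, while `$vnetidB` is the Resource ID captured earlier:

```azurecli
az network vnet peering create \
  --name myVNetAToMyVNetB \
  --resource-group myResourceGroupA \
  --vnet-name myVNetA \
  --remote-vnet $vnetidB \
  --allow-vnet-access
```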
# [**Portal**](#tab/create-peering-portal)
az network vnet peering list \
-The peering connection will show in **Peerings** in a **Initiated** state. To complete the peer, a corresponding connection must be set up in **myVNetB**.
+The peering connection shows in **Peerings** in an **Initiated** state. To complete the peer, a corresponding connection must be set up in **myVNetB**.
## Create peering connection - myVNetB to myVNetA
-You'll need the **Resource IDs** for **myVNetA** from the previous steps to set up the peering connection.
+You need the **Resource IDs** for **myVNetA** from the previous steps to set up the peering connection.
# [**Portal**](#tab/create-peering-portal)
For more information about using your own DNS for name resolution, see, [Name re
For more information about Azure DNS, see [What is Azure DNS?](../dns/dns-overview.md). ## Next steps
-<!-- Add a context sentence for the following links -->
- Thoroughly familiarize yourself with important [virtual network peering constraints and behaviors](virtual-network-manage-peering.md#requirements-and-constraints) before creating a virtual network peering for production use.

- Learn about all [virtual network peering settings](virtual-network-manage-peering.md#create-a-peering).

- Learn how to [create a hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with virtual network peering.
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Use Azure CLI to create a Windows or Linux VM with Accelerated Networking description: Use Azure CLI to create and manage virtual machines that have Accelerated Networking enabled for improved network performance.- -
-tags: azure-resource-manager
- Previously updated : 03/20/2023 Last updated : 04/18/2023
To use Azure PowerShell to create a Windows VM with Accelerated Networking enabl
## Prerequisites - An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+ - The latest version of [Azure CLI installed](/cli/azure/install-azure-cli). Sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command. ## Create a VM with Accelerated Networking
az vm create \
# [Linux](#tab/linux)
-The following example creates a VM with the UbuntuLTS OS image and a size that supports Accelerated Networking, Standard_DS4_v2.
+The following example creates a VM with a size that supports Accelerated Networking, Standard_DS4_v2.
```azurecli az vm create \
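A fuller sketch of such a command, with hypothetical resource names; only the size and the `--accelerated-networking true` flag are the parts that matter for this scenario.

```azurecli
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --size Standard_DS4_v2 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --accelerated-networking true
```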
Once you create the VM in Azure, connect to the VM and confirm that the Ethernet
1. Use the following command to create an SSH session with the VM. Replace `<myPublicIp>` with the public IP address assigned to the VM you created, and replace `<myAdminUser>` with the `--admin-username` you specified when you created the VM.
- ```bash
+ ```azurecli
ssh <myAdminUser>@<myPublicIp> ```
-1. From a Bash shell on the remote VM, enter `uname -r` and confirm that the kernel version is one of the following versions, or greater:
+1. From a shell on the remote VM, enter `uname -r` and confirm that the kernel version is one of the following versions, or greater:
- **Ubuntu 16.04**: 4.11.0-1013. - **SLES SP3**: 4.4.92-6.18.
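For example, a quick check from the remote shell. This assumes a distribution where `lspci` is available; a Mellanox virtual function is listed only while Accelerated Networking is active on the NIC.

```bash
uname -r   # kernel version; compare against the minimums listed above
lspci      # a Mellanox ConnectX-series virtual function indicates Accelerated Networking is active
```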
You must run an application over the synthetic NIC to guarantee that the applica
For more information about application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md#application-usage). <a name="enable-accelerated-networking-on-existing-vms"></a>+ ## Manage Accelerated Networking on existing VMs It's possible to enable Accelerated Networking on an existing VM. The VM must meet the following requirements to support Accelerated Networking: -- Be a supported size for Accelerated Networking.-- Be a supported Azure Marketplace image and kernel version for Linux.-- Be stopped or deallocated before you can enable Accelerated Networking on any NIC. This requirement applies to all individual VMs or VMs in an availability set or Azure Virtual Machine Scale Sets.
+- A supported size for Accelerated Networking.
+
+- A supported Azure Marketplace image and kernel version for Linux.
+
+- Stopped or deallocated before you can enable Accelerated Networking on any NIC. This requirement applies to all individual VMs or VMs in an availability set or Azure Virtual Machine Scale Sets.
### Enable Accelerated Networking on individual VMs or VMs in availability sets
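A minimal Azure CLI sketch of that flow for a single VM, using placeholder resource names; the VM must be deallocated before the NIC setting changes.

```azurecli
# Deallocate the VM so the NIC setting can be changed.
az vm deallocate --resource-group myResourceGroup --name myVM

# Enable Accelerated Networking on the VM's NIC.
az network nic update \
  --resource-group myResourceGroup \
  --name myVMNic \
  --accelerated-networking true

# Start the VM again.
az vm start --resource-group myResourceGroup --name myVM
```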
Once you restart and the upgrades finish, the VF appears inside VMs that use a s
You can resize VMs with Accelerated Networking enabled only to sizes that also support Accelerated Networking. You can't resize a VM with Accelerated Networking to a VM instance that doesn't support Accelerated Networking by using the resize operation. Instead, use the following process to resize these VMs: 1. Stop and deallocate the VM or all the VMs in the availability set or Virtual Machine Scale Sets.+ 1. Disable Accelerated Networking on the NIC of the VM or all the VMs in the availability set or Virtual Machine Scale Sets.+ 1. Move the VM or VMs to a new size that doesn't support Accelerated Networking, and restart them. ## Manage Accelerated Networking through the portal
If the VM uses a [supported operating system](./accelerated-networking-overview.
To enable or disable Accelerated Networking for an existing VM through the Azure portal: 1. From the [Azure portal](https://portal.azure.com) page for the VM, select **Networking** from the left menu.+ 1. On the **Networking** page, select the **Network Interface**.+ 1. At the top of the NIC **Overview** page, select **Edit accelerated networking**.+ 1. Select **Automatic**, **Enabled**, or **Disabled**, and then select **Save**. To confirm whether Accelerated Networking is enabled for an existing VM: 1. From the portal page for the VM, select **Networking** from the left menu.+ 1. On the **Networking** page, select the **Network Interface**.+ 1. On the network interface **Overview** page, under **Essentials**, note whether **Accelerated networking** is set to **Enabled** or **Disabled**. ## Next steps - [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md)+ - [Create a VM with Accelerated Networking by using PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)+ - [Proximity placement groups](../virtual-machines/co-location.md)
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Title: What is Azure Virtual Network NAT?
+ Title: What is Azure NAT Gateway?
-description: Overview of Virtual Network NAT features, resources, architecture, and implementation. Learn how Virtual Network NAT works and how to use NAT gateway resources in Azure.
+description: Overview of Azure NAT Gateway features, resources, architecture, and implementation. Learn how Azure NAT Gateway works and how to use NAT gateway resources in Azure.
-# What is Virtual Network NAT?
+# What is Azure NAT Gateway?
-Virtual Network NAT is a fully managed and highly resilient Network Address Translation (NAT) service. Virtual Network NAT simplifies outbound Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses the Virtual Network NAT's static public IP addresses.
+Azure NAT Gateway is a fully managed and highly resilient Network Address Translation (NAT) service. Azure NAT Gateway simplifies outbound Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses the NAT gateway's static public IP addresses.
:::image type="content" source="./media/nat-overview/flow-map.png" alt-text="Figure shows a NAT receiving traffic from internal subnets and directing it to a public IP (PIP) and an IP prefix.":::
-*Figure: Virtual Network NAT*
+*Figure: Azure NAT Gateway*
-## Virtual Network NAT benefits
+## Azure NAT Gateway benefits
### Security
With a NAT gateway, individual VMs or other compute resources, don't need public
### Resiliency
-Virtual Network NAT is a fully managed and distributed service. It doesn't depend on individual compute instances such as VMs or a single physical gateway device. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage. Software defined networking makes a NAT gateway highly resilient.
+Azure NAT Gateway is a fully managed and distributed service. It doesn't depend on individual compute instances such as VMs or a single physical gateway device. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage. Software defined networking makes a NAT gateway highly resilient.
### Scalability
-Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you.
+NAT gateway is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of NAT gateway for you.
A NAT gateway resource can be associated to a subnet and can be used by all compute resources in that subnet. All subnets in a virtual network can use the same NAT gateway resource. Outbound connectivity can be scaled out by assigning up to 16 IP addresses to NAT gateway. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound. ### Performance
-Virtual Network NAT is a software defined networking service. A NAT gateway won't affect the network bandwidth of your compute resources. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
+Azure NAT Gateway is a software defined networking service. A NAT gateway won't affect the network bandwidth of your compute resources. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
-## Virtual Network NAT basics
+## Azure NAT Gateway basics
### Outbound connectivity
-* Virtual Network NAT (NAT gateway) is the recommended method for outbound connectivity. NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
+* NAT gateway is the recommended method for outbound connectivity. NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
* NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
* NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address.
-* NAT gateway can be used to provide outbound connectivity in a hub and spoke model when associated with Azure Firewall. NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](../../firewall/integrate-with-nat-gateway.md).
+* NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](../../firewall/integrate-with-nat-gateway.md).
### Availability zones
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
* NAT gateway can be isolated in a specific zone when you create [zone isolation scenarios](./nat-availability-zones.md). This deployment is called a zonal deployment. After NAT gateway is deployed, the zone selection can't be changed.
-* NAT gateway is placed in no zone by default. A [non-zonal NAT gateway](./nat-availability-zones.md#non-zonal) is placed in a zone for you by Azure.
+* NAT gateway is placed in 'no zone' by default. A [non-zonal NAT gateway](./nat-availability-zones.md#non-zonal) is placed in a zone for you by Azure.
### NAT gateway and basic SKU resources * NAT gateway is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix.
-* Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway
+* Basic resources, such as basic load balancer or basic public IPs, aren't compatible with NAT gateway. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
* Upgrade a load balancer from basic to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md).
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
## Pricing and SLA
-For Azure Virtual Network NAT pricing, see [NAT gateway pricing](https://azure.microsoft.com/pricing/details/virtual-network/#pricing).
+For Azure NAT Gateway pricing, see [NAT gateway pricing](https://azure.microsoft.com/pricing/details/virtual-network/#pricing).
-For information on the SLA, see [SLA for Virtual Network NAT](https://azure.microsoft.com/support/legal/sla/virtual-network-nat/v1_0/).
+For information on the SLA, see [SLA for Azure NAT Gateway](https://azure.microsoft.com/support/legal/sla/virtual-network-nat/v1_0/).
## Next steps * To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md).
-* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
+* To view a video on more information about Azure NAT Gateway, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/training/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure NAT Gateway](/training/modules/intro-to-azure-virtual-network-nat).
-* To learn more about architecture options for Azure Virtual Network NAT, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
+* To learn more about architecture options for Azure NAT Gateway, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Title: Troubleshoot Azure Virtual Network NAT (NAT gateway)
+ Title: Troubleshoot Azure NAT Gateway
-description: Troubleshoot issues with Virtual Network NAT.
+description: Troubleshoot issues with NAT Gateway.
Last updated 08/29/2022
-# Troubleshoot Azure Virtual Network NAT (NAT gateway)
+# Troubleshoot Azure NAT Gateway
This article provides guidance on how to correctly configure your NAT gateway and troubleshoot common configuration and deployment related issues.
Check the following configurations to ensure that NAT gateway can be used to dir
### How to validate connectivity
-[Virtual Network NAT gateway](./nat-overview.md#virtual-network-nat-basics) supports IPv4 UDP and TCP protocols. ICMP isn't supported and is expected to fail.
+[NAT gateway](./nat-overview.md#azure-nat-gateway-basics) supports IPv4 UDP and TCP protocols. ICMP isn't supported and is expected to fail.
To validate end-to-end connectivity of NAT gateway, follow these steps: 1. Validate that your [NAT gateway public IP address is being used](./quickstart-create-nat-gateway-portal.md#test-nat-gateway).
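One quick way to check the first step is to compare, from a VM in the NAT gateway's subnet, the outbound address that an internet service sees with the NAT gateway's public IP. This is only a sketch; the echo service used here is an arbitrary example, and remember that ICMP (ping) is expected to fail through NAT gateway, so use a TCP-based check.

```bash
# The address returned should match one of the NAT gateway's public IP addresses.
curl -s https://ifconfig.me
```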
NAT gateway can't be associated with more than 16 public IP addresses. You can u
### IPv6 coexistence
-[Virtual Network NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
+[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
### Can't use basic SKU public IPs with NAT gateway
We're always looking to improve the experience of our customers. If you're exper
To learn more about NAT gateway, see:
-* [Virtual Network NAT](nat-overview.md)
+* [Azure NAT Gateway](nat-overview.md)
* [NAT gateway resource](nat-gateway-resource.md) * [Manage NAT gateway](./manage-nat-gateway.md)
-* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).