Updates from: 03/22/2022 02:09:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
Title: Tutorial to enable Secure Hybrid Access to applications with Azure AD B2C and F5 BIG-IP description: Learn how to integrate Azure AD B2C authentication with F5 BIG-IP for secure hybrid access --++
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Last updated 02/02/2022 -+
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following sample shows a public client application running on a device witho
> | Java | [Sign in users and invoke protected API](https://github.com/Azure-Samples/ms-identity-java-devicecodeflow) | MSAL Java | Device code |
> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | MSAL Python | Device code |
+## Microsoft Teams applications
+
+The following sample illustrates a Microsoft Teams Tab application that signs in users. It also demonstrates how to call the Microsoft Graph API with the user's identity by using the Microsoft Authentication Library (MSAL).
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Language/<br/>Platform | Code sample(s) <br/>on GitHub | Auth<br/> libraries | Auth flow |
+> | - | -- | - | -- |
+> | Node.js | [Teams Tab app: single sign-on (SSO) and call Microsoft Graph](https://github.com/OfficeDev/Microsoft-Teams-Samples/tree/main/samples/tab-sso/nodejs) | MSAL Node | On-Behalf-Of (OBO) |
+ ## Multi-tenant SaaS
+
+ The following samples show how to configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. Configuring your application to be _multi-tenant_ means that you can offer a **Software as a Service** (SaaS) application to many organizations, allowing their users to sign in to your application after providing consent.
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5 description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access-+ Last updated 11/12/2020-+
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Title: Configure F5 BIG-IP SSL-VPN solution in Azure AD
description: Tutorial to configure F5's BIG-IP based Secure Sockets Layer virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA) -+ Last updated 10/12/2020-+
The F5 VPN application should also be visible as a target resource in Azure AD C
- [Five steps to full application integration with Azure AD](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md) -- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
Title: Configure F5 BIG-IP's Access Policy Manager for form-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager and Azure Active Directory for secure hybrid access to form-based applications.-+ Last updated 10/20/2021-+
For more information, see the F5 BIG-IP [Session Variables reference](https://te
* [What is Conditional Access?](../conditional-access/overview.md)
-* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for header-based SSO description: Learn how to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory SSO for header-based authentication -+ Last updated 11/10/2021-+
For more information, refer to these articles:
- [What is Conditional Access?](../conditional-access/overview.md) - [Microsoft Zero Trust framework to enable remote
- work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+ work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Title: Configure F5 BIG-IP's Easy Button for Header-based SSO description: Learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP Easy Button Guided Configuration. -+ Last updated 01/07/2022-+
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help you determine the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source.
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication description: Learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration. -+ Last updated 12/13/2021-+
For help with diagnosing KCD-related problems, see the F5 BIG-IP deployment guid
* [What is Conditional Access?](../conditional-access/overview.md)
-* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
Title: Configure F5 BIG-IP Easy Button for Kerberos SSO description: Learn to implement Secure Hybrid Access (SHA) with Single Sign-on to Kerberos applications using F5's BIG-IP Easy Button guided configuration. -+ Last updated 12/20/2021-+
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables.
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Title: Configure F5 BIG-IP's Easy Button for Header-based and LDAP SSO description: Learn to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory (Azure AD) for secure hybrid access to header-based applications that also require session augmentation through Lightweight Directory Access Protocol (LDAP) sourced attributes. -+ Last updated 11/22/2021-+
The following command can also be used from the BIG-IP bash shell to validate th
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=partners,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS description: Learn to implement SHA with header-based SSO to Oracle EBS using F5's BIG-IP Easy Button guided configuration -+ Last updated 1/31/2022-+
The following command from a bash shell validates the APM service account used f
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle JDE description: Learn to implement SHA with header-based SSO to Oracle JD Edwards using F5's BIG-IP Easy Button guided configuration -+ Last updated 02/03/2022-+
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help you determine the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source.
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
active-directory F5 Big Ip Oracle Peoplesoft Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to Oracle PeopleSoft description: Learn to implement SHA with header-based SSO to Oracle PeopleSoft using F5 BIG-IP Easy Button guided configuration. -+ Last updated 02/26/2022-+
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help you determine the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from session variables.
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP description: Learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration. -+ Last updated 3/1/2022-+
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables.
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more information.
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Title: Secure hybrid access with F5 deployment guide
description: Tutorial to deploy F5 BIG-IP Virtual Edition (VE) VM in Azure IaaS for Secure hybrid access -+ Last updated 10/12/2020-+
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
App consent policies where the ID begins with "microsoft-" are built-in policies
## Prerequisites
+1. A user or service with one of the following:
+ - Global Administrator directory role
+ - Privileged Role Administrator directory role
+ - A custom directory role with the necessary [permissions to manage app consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies)
+ - The Microsoft Graph app role (application permission) `Policy.ReadWrite.PermissionGrant` (when connecting as an app or a service)
+
1. Connect to [Azure AD PowerShell](/powershell/module/azuread/).
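A minimal sketch of these first steps, assuming the AzureADPreview module (where the app consent policy cmdlets live); the policy listing is shown only for orientation:

```azurepowershell
# Requires the AzureADPreview module; sign in with one of the roles listed above.
Import-Module AzureADPreview
Connect-AzureAD

# List all app consent policies; built-in policies have IDs that begin with "microsoft-".
Get-AzureADMSPermissionGrantPolicy | Format-Table Id, DisplayName
```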
active-directory My Apps Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/my-apps-deployment-plan.md
Administrators can configure:
* [Collections of applications](../manage-apps/access-panel-collections.md)
* Assignment of icons to applications
* User-friendly names for applications
-* Company branding shown on My Apps
+* Banner logo in the My Apps header. For more information about assigning a banner logo, see [Add branding to your organization's Azure Active Directory sign-in page](../fundamentals/customize-branding.md).
## Plan consent configuration
The extension allows users to launch any app from its search bar, finding access
#### Plan for mobile access
-For applications that use password-based SSO or that are accessed by using [Microsoft Azure AD Application Proxy](../app-proxy/application-proxy.md), you must use Microsoft Edge mobile. For other applications, any mobile browser can be used.
+For applications that use password-based SSO or that are accessed by using [Microsoft Azure AD Application Proxy](../app-proxy/application-proxy.md), you must use Microsoft Edge mobile. For other applications, any mobile browser can be used. Be sure to enable password-based SSO in your mobile settings, which can be off by default. For example, **Settings -> Privacy and Security -> Azure AD Password SSO**.
### Linked SSO
active-directory Managed Identity Best Practice Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md
will be displayed with "Identity not found" when viewed in the portal. [Read
:::image type="content" source="media/managed-identity-best-practice-recommendations/identity-not-found.png" alt-text="Identity not found for role assignment.":::
+Role assignments that are no longer associated with a user or service principal appear with an `ObjectType` value of `Unknown`. To remove them, you can pipe several Azure PowerShell commands together: first get all the role assignments, filter to only those with an `ObjectType` value of `Unknown`, and then remove those role assignments from Azure.
+
+```azurepowershell
+Get-AzRoleAssignment | Where-Object {$_.ObjectType -eq "Unknown"} | Remove-AzRoleAssignment
+```
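+Before removing anything, it can be worth reviewing the orphaned assignments on their own; a small read-only sketch using the same cmdlets:
+
+```azurepowershell
+# List the orphaned assignments first; this makes no changes.
+Get-AzRoleAssignment |
+    Where-Object { $_.ObjectType -eq "Unknown" } |
+    Format-Table ObjectId, RoleDefinitionName, Scope
+```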
+ ## Limitation of using managed identities for authorization
+
+ Using Azure AD **groups** for granting access to services is a great way to simplify the authorization process. The idea is simple – grant permissions to a group and add identities to the group so that they inherit the same permissions. This is a well-established pattern from various on-premises systems and works well when the identities represent users.
+
+ Another option to control authorization in Azure AD is by using [App Roles](../develop/howto-add-app-roles-in-azure-ad-apps.md), which allows you to declare **roles** that are specific to an app (rather than groups, which are a global concept in the directory). You can then [assign app roles to managed identities](how-to-assign-app-role-managed-identity-powershell.md) (as well as users or groups).
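+As a hedged illustration of that last step with the AzureAD PowerShell module (the display names and the app role value `Data.Read` are hypothetical):
+
+```azurepowershell
+# Assign an app role exposed by a resource API to a managed identity's service principal.
+Connect-AzureAD
+$managedIdentity = Get-AzureADServicePrincipal -Filter "displayName eq 'my-vm-identity'"
+$resourceApp     = Get-AzureADServicePrincipal -Filter "displayName eq 'my-api'"
+$appRole         = $resourceApp.AppRoles | Where-Object { $_.Value -eq 'Data.Read' }
+
+New-AzureADServiceAppRoleAssignment -ObjectId $managedIdentity.ObjectId `
+    -PrincipalId $managedIdentity.ObjectId -ResourceId $resourceApp.ObjectId -Id $appRole.Id
+```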
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
The following scenarios are not supported:
The following are known issues with role-assignable groups: - *Azure AD P2 licensed customers only*: Even after deleting the group, it is still shown as an eligible member of the role in PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal. -- Use the new [Exchange admin center](/exchange/exchange-admin-center) for role assignments via group membership. The old Exchange admin center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected.
+- Use the new [Exchange admin center](/exchange/exchange-admin-center) for role assignments via group membership. The old Exchange admin center doesn't support this feature. If accessing the old Exchange admin center is required, assign the eligible role directly to the user (not via role-assignable groups). Exchange PowerShell cmdlets will work as expected.
- If an administrator role is assigned to a role-assignable group instead of individual users, members of the group will not be able to access Rules, Organization, or Public Folders in the new [Exchange admin center](/exchange/exchange-admin-center). The workaround is to assign the role directly to users instead of the group. - Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles. - [Apps admin center](https://config.office.com/) doesn't support this feature yet. Assign the Office Apps Administrator role directly to users.
active-directory Mural Identity Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure MURAL Identity for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to MURAL Identity.
++
+writer: twimmers
+
+ms.assetid: 0b932dbd-b5c9-40e3-baeb-a7c7424e1bfd
++++ Last updated : 03/21/2022+++
+# Tutorial: Configure MURAL Identity for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both MURAL Identity and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [MURAL Identity](https://www.mural.co/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in MURAL Identity
+> * Keep user attributes synchronized between Azure AD and MURAL Identity
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* SCIM provisioning is only available for MURAL's Enterprise plan. Before you configure SCIM provisioning, please reach out to a member of the MURAL Customer Success Team to enable the feature.
+* SAML based SSO must be properly set up before configuring automated provisioning. The instructions on how to set up SSO through Azure Active Directory for MURAL can be found [here](mural-identity-tutorial.md).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and MURAL Identity](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure MURAL Identity to support provisioning with Azure AD
+
+Follow the [steps](https://developers.mural.co/enterprise/docs/set-up-the-scim-api) to get your SCIM URL and unique API token from the API keys page in your MURAL Company dashboard. Use this token in the **Secret Token** field in **Step 5**.
+
+## Step 3. Add MURAL Identity from the Azure AD application gallery
+
+Add MURAL Identity from the Azure AD application gallery to start managing provisioning to MURAL Identity. If you have previously set up MURAL Identity for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to MURAL Identity, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to MURAL Identity
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in MURAL Identity based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for MURAL Identity in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **MURAL Identity**.
+
+ ![The MURAL Identity link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your MURAL Identity Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to MURAL Identity. If the connection fails, ensure your MURAL Identity account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to MURAL Identity**.
+
+9. Review the user attributes that are synchronized from Azure AD to MURAL Identity in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in MURAL Identity for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the MURAL Identity API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
 |Attribute|Type|Supported for filtering|
 |---|---|---|
 |userName|String|&check;|
 |active|Boolean||
 |emails[type eq "work"].value|String||
 |name.givenName|String||
 |name.familyName|String||
 |externalId|String||
++
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for MURAL Identity, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to MURAL Identity by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Troubleshooting Tips
+* When provisioning a user, keep in mind that MURAL does not support numbers in the name fields (that is, `givenName` or `familyName`).
+* When filtering on **userName** in the GET endpoint, make sure that the email address is all lowercase; otherwise, you will get an empty result. This is because MURAL converts email addresses to lowercase while provisioning accounts (see the sketch after this list).
+* When de-provisioning an end user (setting the active attribute to false), the user is soft-deleted and loses access to all their workspaces. If that same de-provisioned end user is later activated again (setting the active attribute to true), the user will not have access to the workspaces they previously belonged to. The end user sees the error message "You've been deactivated from this workspace", with an option to request reactivation, which the workspace admin must approve.
+* If you have any other issues, please reach out to the [MURAL Identity support team](mailto:support@mural.co).
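+For the lowercase **userName** tip above, here is a sketch of the SCIM query from PowerShell; the base URL and token are placeholders for the values you obtained in Step 2:
+
+```azurepowershell
+# Query the SCIM Users endpoint, filtering on an all-lowercase userName.
+$scimBaseUrl = 'https://app.mural.co/api/scim/v2'   # hypothetical; use your SCIM URL
+$apiToken    = '<your-SCIM-API-token>'
+$userName    = 'beth.simon@contoso.com'             # must be all lowercase
+
+$filter = [uri]::EscapeDataString("userName eq `"$userName`"")
+Invoke-RestMethod -Method Get -Uri "$scimBaseUrl/Users?filter=$filter" `
+    -Headers @{ Authorization = "Bearer $apiToken" }
+```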
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Mural Identity Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Mural Identity'
-description: Learn how to configure single sign-on between Azure Active Directory and Mural Identity.
+ Title: 'Tutorial: Azure AD SSO integration with MURAL Identity'
+description: Learn how to configure single sign-on between Azure Active Directory and MURAL Identity.
-# Tutorial: Azure AD SSO integration with Mural Identity
+# Tutorial: Azure AD SSO integration with MURAL Identity
-In this tutorial, you'll learn how to integrate Mural Identity with Azure Active Directory (Azure AD). When you integrate Mural Identity with Azure AD, you can:
+In this tutorial, you'll learn how to integrate MURAL Identity with Azure Active Directory (Azure AD). When you integrate MURAL Identity with Azure AD, you can:
-* Control in Azure AD who has access to Mural Identity.
-* Enable your users to be automatically signed-in to Mural Identity with their Azure AD accounts.
+* Control in Azure AD who has access to MURAL Identity.
+* Enable your users to be automatically signed-in to MURAL Identity with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Mural Identity with Azure Active
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Mural Identity single sign-on (SSO) enabled subscription.
+* MURAL Identity single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Mural Identity supports **SP and IDP** initiated SSO.
-* Mural Identity supports **Just In Time** user provisioning.
+* MURAL Identity supports **SP and IDP** initiated SSO.
+* MURAL Identity supports **Just In Time** user provisioning.
> [!NOTE]
> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Add Mural Identity from the gallery
+## Add MURAL Identity from the gallery
-To configure the integration of Mural Identity into Azure AD, you need to add Mural Identity from the gallery to your list of managed SaaS apps.
+To configure the integration of MURAL Identity into Azure AD, you need to add MURAL Identity from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Mural Identity** in the search box.
-1. Select **Mural Identity** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **MURAL Identity** in the search box.
+1. Select **MURAL Identity** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Mural Identity
+## Configure and test Azure AD SSO for MURAL Identity
-Configure and test Azure AD SSO with Mural Identity using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mural Identity.
+Configure and test Azure AD SSO with MURAL Identity using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in MURAL Identity.
-To configure and test Azure AD SSO with Mural Identity, perform the following steps:
+To configure and test Azure AD SSO with MURAL Identity, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Mural Identity SSO](#configure-mural-identity-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Mural Identity test user](#create-mural-identity-test-user)** - to have a counterpart of B.Simon in Mural Identity that is linked to the Azure AD representation of user.
+1. **[Configure MURAL Identity SSO](#configure-mural-identity-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create MURAL Identity test user](#create-mural-identity-test-user)** - to have a counterpart of B.Simon in MURAL Identity that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Mural Identity** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **MURAL Identity** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Basic SAML Configuration** section, you do not have to perform any steps because the app is already pre-integrated with Azure.
-1. On the **Basic SAML Configuration** section, perform the following steps if you wish to configure the application in **SP** initiated mode:
-
- a. In the **Identifier** text box, type the URL:
- `https://app.mural.co`
-
- b. In the **Reply URL** text box, type one of the following URLs:
-
- | **Reply URL** |
- |-|
- | `https://api.mural.co/api/v0/authenticate/saml2/callback` |
- | `https://app.mural.co/api/v0/authenticate/saml2/callback` |
-
- c. In the **Sign-on URL** text box, type one of the following URLs:
-
- | **Sign-on URL** |
- |--|
- | `https://api.mural.co/api/v0/authenticate/saml2/callback` |
- | `https://app.mural.co/api/v0/authenticate/saml2/callback` |
-
-1. Mural Identity application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The MURAL Identity application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, Mural Identity application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the MURAL Identity application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
| Name | Source Attribute |
| --- | --- |
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Mural Identity** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up MURAL Identity** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Mural Identity.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to MURAL Identity.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Mural Identity**.
+1. In the applications list, select **MURAL Identity**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Mural Identity SSO
+## Configure MURAL Identity SSO
-To configure single sign-on on **Mural Identity** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Mural Identity support team](mailto:support@mural.co). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **MURAL Identity** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [MURAL Identity support team](mailto:support@mural.co). They configure this setting so that the SAML SSO connection is set properly on both sides.
-### Create Mural Identity test user
+### Create MURAL Identity test user
-In this section, a user called Britta Simon is created in Mural Identity. Mural Identity supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Mural Identity, a new one is created after authentication.
+In this section, a user called Britta Simon is created in MURAL Identity. MURAL Identity supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in MURAL Identity, a new one is created after authentication.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Mural Identity Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the MURAL Identity Sign-on URL, where you can initiate the login flow.
-* Go to Mural Identity Sign-on URL directly and initiate the login flow from there.
+* Go to the MURAL Identity Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Mural Identity for which you set up the SSO.
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the MURAL Identity for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the MURAL Identity tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the MURAL Identity for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Change log
-You can also use Microsoft My Apps to test the application in any mode. When you click the Mural Identity tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Mural Identity for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* 03/21/2022 - Application Name updated.
## Next steps
-Once you configure Mural Identity you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure MURAL Identity you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
advisor Advisor Azure Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-azure-resource-graph.md
Title: Advisor data in Azure Resource Graph description: Make queries for Advisor data in Azure Resource Graph+ Last updated 03/12/2020-+
advisor Advisor Quick Fix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-quick-fix.md
Title: Quick Fix remediation for Advisor recommendations description: Perform bulk remediation using Quick Fix in Advisor+ Last updated 03/13/2020-+
advisor Advisor Recommendations Digest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-recommendations-digest.md
Title: Recommendation digest for Azure Advisor description: Get periodic summary for your active recommendations+ Last updated 03/16/2020-+
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
test.txt
## Resize a persistent volume without downtime
-You can instead request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
+You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
> [!NOTE]
> A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
Filesystem Size Used Avail Use% Mounted on
> [!IMPORTANT]
> Currently, the Azure disk CSI driver supports resizing PVCs without downtime in specific regions.
> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
+> If your cluster is not in the supported region list, you need to delete the application first to detach the disk from the node before expanding the PVC.
Let's expand the PVC by increasing the `spec.resources.requests.storage` field:
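One way to make that edit from the command line, as a sketch (the PVC name `pvc-azuredisk` and the `15Gi` target are placeholders):

```
# Patch the PVC to request a larger size; the CSI driver then expands the backing disk.
kubectl patch pvc pvc-azuredisk --type merge -p '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'

# Watch the PVC until the new capacity is reported.
kubectl get pvc pvc-azuredisk --watch
```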
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
The feature to enable storing customer data in a single region is currently only
## Are AKS images required to run as root?
-Except for the following two images, AKS images aren't required to run as root:
+The following images have functional requirements to run as root, and exceptions must be filed for any policies that restrict this:
- *mcr.microsoft.com/oss/kubernetes/coredns* - *mcr.microsoft.com/azuremonitor/containerinsights/ciprod*
AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modi
[admission-controllers]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/ [private-clusters-github-issue]: https://github.com/Azure/AKS/issues/948 [csi-driver]: https://github.com/Azure/secrets-store-csi-driver-provider-azure
-[vm-sla]: https://azure.microsoft.com/support/legal/sla/virtual-machines/
+[vm-sla]: https://azure.microsoft.com/support/legal/sla/virtual-machines/
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
The `send-one-way-request` policy sends the provided request to the specified UR
```xml
<send-one-way-request mode="new | copy">
- <url>...</url>
+ <set-url>...</set-url>
  <method>...</method>
  <header name="" exists-action="override | skip | append | delete">...</header>
  <body>...</body>
This sample policy shows an example of using the `send-one-way-request` policy t
| Element | Description | Required |
| --- | --- | --- |
| send-one-way-request | Root element. | Yes |
-| url | The URL of the request. | No if mode=copy; otherwise yes. |
+| set-url | The URL of the request. | No if mode=copy; otherwise yes. |
| method | The HTTP method for the request. | No if mode=copy; otherwise yes. | | header | Request header. Use multiple header elements for multiple request headers. | No | | body | The request body. | No |
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
You can use Azure CLI or Azure PowerShell to deploy the Bicep file. For more in
```azurecli
az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters publisherEmail=<publisher-email> publishername=<publisher-name>
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters publisherEmail=<publisher-email> publisherName=<publisher-name>
```

# [PowerShell](#tab/PowerShell)
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Previously updated : 01/19/2022 Last updated : 03/18/2022
The self-hosted gateway is a containerized, functionally equivalent version of t
The following functionality found in the managed gateways is **not available** in the self-hosted gateways: -- Azure Monitor logs
+- Sending resource logs (diagnostic logs) to Azure Monitor. However, you can [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.
- Upstream (backend side) TLS version and cipher management - Validation of server and client certificates using [CA root certificates](api-management-howto-ca-certificates.md) uploaded to API Management service. You can configure [custom certificate authorities](api-management-howto-ca-certificates.md#create-custom-ca-for-self-hosted-gateway) for your self-hosted gateways and [client certificate validation](api-management-access-restriction-policies.md#validate-client-certificate) policies to enforce them. - Integration with [Service Fabric](../service-fabric/service-fabric-api-management-overview.md)
Self-hosted gateways require outbound TCP/IP connectivity to Azure on port 443.
- Reporting its status by sending heartbeat messages every minute - Regularly checking for (every 10 seconds) and applying configuration updates whenever they are available-- Sending request logs and metrics to Azure Monitor, if configured to do so
+- Sending metrics to Azure Monitor, if configured to do so
- Sending events to Application Insights, if set to do so ### FQDN dependencies
When connectivity is restored, each self-hosted gateway affected by the outage w
- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- Learn about [observability capabilities](observability.md) in API Management
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
Title: Azure App Service access restrictions description: Learn how to secure your app in Azure App Service by setting up access restrictions. -+ ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd Previously updated : 12/17/2020-- Last updated : 03/21/2022+ # Set up Azure App Service access restrictions
To add an access restriction rule to your app, do the following:
The list displays all the current restrictions that are applied to the app. If you have a virtual network restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If no restrictions are defined on your app, the app is accessible from anywhere.
+### Permissions
+
+You must have at least the following role-based access control permissions on the subnet, or at a higher level, to configure access restrictions through the Azure portal or CLI, or when setting the site config properties directly:
+
+| Action | Description |
+|-|-|
+| Microsoft.Web/sites/config/read | Get Web App configuration settings |
+| Microsoft.Web/sites/config/write | Update Web App's configuration settings |
+| Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action* | Joins a resource such as a storage account or SQL database to a subnet |
+
+**only required when adding a virtual network (service endpoint) rule.*
+
+If you are adding a service endpoint-based rule and the virtual network is in a different subscription than the app, ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it is also registered automatically when the first web app is created in a subscription.
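+For reference, a hedged example of that explicit registration with Azure PowerShell (run it in the subscription that contains the virtual network):
+
+```azurepowershell
+# Switch to the subscription that holds the virtual network, then register Microsoft.Web.
+Set-AzContext -Subscription '<vnet-subscription-id>'
+Register-AzResourceProvider -ProviderNamespace Microsoft.Web
+```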
### Add an access restriction rule

To add an access restriction rule to your app, on the **Access Restrictions** pane, select **Add rule**. After you add a rule, it becomes effective immediately.
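If you prefer to script this step, a minimal Azure PowerShell sketch (the resource names and IP range are placeholders):

```azurepowershell
# Allow traffic from a single IPv4 range; lower priority values are evaluated first.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName 'exampleRG' -WebAppName 'exampleApp' `
    -Name 'Allow office' -Priority 100 -Action Allow -IpAddress '198.51.100.0/24'
```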
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
Last updated 05/27/2021
-adobe-target: true
adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./app-service-web-tutorial-custom-domain-uiex
# Tutorial: Map an existing custom DNS name to Azure App Service
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking-features.md
ms.assetid: 5c61eed1-1ad1-4191-9f71-906d610ee5b7
Last updated 09/20/2021 - # App Service networking features
You can deploy applications in Azure App Service in multiple ways. By default, apps hosted in App Service are accessible directly through the internet and can reach only internet-hosted endpoints. But for many applications, you need to control the inbound and outbound network traffic. There are several features in App Service to help you meet those needs. The challenge is knowing which feature to use to solve a given problem. This article will help you determine which feature to use, based on some example use cases. There are two main deployment types for Azure App Service: -- The multitenant public service hosts App Service plans in the Free, Shared, Basic, Standard, Premium, PremiumV2, and PremiumV3 pricing SKUs.
+- The multi-tenant public service hosts App Service plans in the Free, Shared, Basic, Standard, Premium, PremiumV2, and PremiumV3 pricing SKUs.
- The single-tenant App Service Environment (ASE) hosts Isolated SKU App Service plans directly in your Azure virtual network.
-The features you use will depend on whether you're in the multitenant service or in an ASE.
+The features you use will depend on whether you're in the multi-tenant service or in an ASE.
> [!NOTE]
> Networking features are not available for [apps deployed in Azure Arc](overview-arc-integration.md).
-## Multitenant App Service networking features
+## Multi-tenant App Service networking features
-Azure App Service is a distributed system. The roles that handle incoming HTTP or HTTPS requests are called *front ends*. The roles that host the customer workload are called *workers*. All the roles in an App Service deployment exist in a multitenant network. Because there are many different customers in the same App Service scale unit, you can't connect the App Service network directly to your network.
+Azure App Service is a distributed system. The roles that handle incoming HTTP or HTTPS requests are called *front ends*. The roles that host the customer workload are called *workers*. All the roles in an App Service deployment exist in a multi-tenant network. Because there are many different customers in the same App Service scale unit, you can't connect the App Service network directly to your network.
Instead of connecting the networks, you need features to handle the various aspects of application communication. The features that handle requests *to* your app can't be used to solve problems when you're making calls *from* your app. Likewise, the features that solve problems for calls from your app can't be used to solve problems to your app. | Inbound features | Outbound features | ||-| | App-assigned address | Hybrid Connections |
-| Access restrictions | Gateway-required VNet Integration |
-| Service endpoints | VNet Integration |
+| Access restrictions | Gateway-required virtual network integration |
+| Service endpoints | Virtual network integration |
| Private endpoints || Other than noted exceptions, you can use all of these features together. You can mix the features to solve your problems.
The following outbound use cases suggest how to use App Service networking featu
| Outbound use case | Feature | ||-|
-| Access resources in an Azure virtual network in the same region | VNet Integration </br> ASE |
-| Access resources in an Azure virtual network in a different region | VNet Integration and virtual network peering </br> Gateway-required VNet Integration </br> ASE and virtual network peering |
-| Access resources secured with service endpoints | VNet Integration </br> ASE |
+| Access resources in an Azure virtual network in the same region | Virtual network integration </br> ASE |
+| Access resources in an Azure virtual network in a different region | Virtual network integration and virtual network peering </br> Gateway-required virtual network integration </br> ASE and virtual network peering |
+| Access resources secured with service endpoints | Virtual network integration </br> ASE |
| Access resources in a private network that's not connected to Azure | Hybrid Connections |
-| Access resources across Azure ExpressRoute circuits | VNet Integration </br> ASE |
-| Secure outbound traffic from your web app | VNet Integration and network security groups </br> ASE |
-| Route outbound traffic from your web app | VNet Integration and route tables </br> ASE |
+| Access resources across Azure ExpressRoute circuits | Virtual network integration </br> ASE |
+| Secure outbound traffic from your web app | Virtual network integration and network security groups </br> ASE |
+| Route outbound traffic from your web app | Virtual network integration and route tables </br> ASE |
### Default networking behavior
-Azure App Service scale units support many customers in each deployment. The Free and Shared SKU plans host customer workloads on multitenant workers. The Basic and higher plans host customer workloads that are dedicated to only one App Service plan. If you have a Standard App Service plan, all the apps in that plan will run on the same worker. If you scale out the worker, all the apps in that App Service plan will be replicated on a new worker for each instance in your App Service plan.
+Azure App Service scale units support many customers in each deployment. The Free and Shared SKU plans host customer workloads on multi-tenant workers. The Basic and higher plans host customer workloads that are dedicated to only one App Service plan. If you have a Standard App Service plan, all the apps in that plan will run on the same worker. If you scale out the worker, all the apps in that App Service plan will be replicated on a new worker for each instance in your App Service plan.
#### Outbound addresses
To learn how to set an address on your app, see [Add a TLS/SSL certificate in Az
Access restrictions let you filter *inbound* requests. The filtering action takes place on the front-end roles that are upstream from the worker roles where your apps are running. Because the front-end roles are upstream from the workers, you can think of access restrictions as network-level protection for your apps.
-This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multitenant service. When you use it with an ILB ASE, you can restrict access from private address blocks.
+This feature allows you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking. You can use this feature in an ASE or in the multi-tenant service. When you use it with an ILB ASE, you can restrict access from private address blocks.
> [!NOTE] > Up to 512 access restriction rules can be configured per app.
The IP-based access restrictions feature helps when you want to restrict the IP
To learn how to enable this feature, see [Configuring access restrictions][iprestrictions]. > [!NOTE]
-> IP-based access restriction rules only handle virtual network address ranges when your app is in an App Service Environment. If your app is in the multitenant service, you need to use [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to restrict traffic to select subnets in your virtual network.
+> IP-based access restriction rules only handle virtual network address ranges when your app is in an App Service Environment. If your app is in the multi-tenant service, you need to use [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to restrict traffic to select subnets in your virtual network.
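A service endpoint-based rule scopes access to a subnet rather than an address range. A sketch with placeholder names:

```azurecli
# Placeholder names; allow traffic only from the given virtual network subnet.
az webapp config access-restriction add \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --rule-name "Allow front-end subnet" \
  --action Allow \
  --vnet-name MyVnet \
  --subnet front-end-subnet \
  --priority 200
```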
#### Access restriction rules based on service endpoints
Service endpoints allow you to lock down *inbound* access to your app so that th
Some use cases for this feature: * Set up an application gateway with your app to lock down inbound traffic to your app.
-* Restrict access to your app to resources in your virtual network. These resources can include VMs, ASEs, or even other apps that use VNet Integration.
+* Restrict access to your app to resources in your virtual network. These resources can include VMs, ASEs, or even other apps that use virtual network integration.
![Diagram that illustrates the use of service endpoints with Application Gateway.](media/networking-features/service-endpoints-appgw.png)
To learn how to enable this feature, see [Configuring access restrictions][ipres
#### HTTP header filtering for access restriction rules
-For each access restriction rule, you can add additional http header filtering. This allows you to further inspect the incoming request and filter based on specific http header values. Each header can have up to 8 values per rule. The following list of http headers is currently supported:
+For each access restriction rule, you can add additional HTTP header filtering. This allows you to further inspect the incoming request and filter based on specific HTTP header values. Each header can have up to eight values per rule. The following HTTP headers are currently supported:
* X-Forwarded-For * X-Forwarded-Host * X-Azure-FDID
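For example, a rule that admits only traffic carrying a specific Azure Front Door ID might look like the following sketch. The `--http-headers` parameter requires a recent Azure CLI version, and all names and the ID are placeholders:

```azurecli
# Placeholder names/ID; allow Front Door traffic and match on the X-Azure-FDID header.
az webapp config access-restriction add \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --rule-name "Front Door only" \
  --action Allow \
  --service-tag AzureFrontDoor.Backend \
  --priority 300 \
  --http-headers x-azure-fdid=00000000-0000-0000-0000-000000000000
```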
Note that App Service Hybrid Connections is unaware of what you're doing on top
Hybrid Connections is popular for development, but it's also used in production applications. It's great for accessing a web service or database, but it's not appropriate for situations that involve creating many connections.
-### Gateway-required VNet Integration
+### Gateway-required virtual network integration
-Gateway-required App Service VNet Integration enables your app to make *outbound* requests into an Azure virtual network. The feature works by connecting the host your app is running on to a Virtual Network gateway on your virtual network by using a point-to-site VPN. When you configure the feature, your app gets one of the point-to-site addresses assigned to each instance. This feature enables you to access resources in either classic or Azure Resource Manager virtual networks in any region.
+Gateway-required App Service virtual network integration enables your app to make *outbound* requests into an Azure virtual network. The feature works by connecting the host your app is running on to a Virtual Network gateway on your virtual network by using a point-to-site VPN. When you configure the feature, your app gets one of the point-to-site addresses assigned to each instance. This feature enables you to access resources in either classic or Azure Resource Manager virtual networks in any region.
-![Diagram that illustrates gateway-required VNet Integration.](media/networking-features/gw-vnet-integration.png)
+![Diagram that illustrates gateway-required virtual network integration.](media/networking-features/gw-vnet-integration.png)
This feature solves the problem of accessing resources in other virtual networks. It can even be used to connect through a virtual network to either other virtual networks or on-premises. It doesn't work with ExpressRoute-connected virtual networks, but it does work with site-to-site VPN-connected networks. It's usually inappropriate to use this feature from an app in an App Service Environment (ASE) because the ASE is already in your virtual network. Use cases for this feature:
-* Access resources on private IPs in your Classic virtual networks.
-* Access resources on-premises if there's a site-to-site VPN.
-* Access resources in cross region VNets that are not peered to a VNet in the region.
+* Access resources in cross region virtual networks that aren't peered to a virtual network in the region.
-When this feature is enabled, your app will use the DNS server that the destination virtual network is configured with. For more information on this feature, see [App Service VNet Integration][vnetintegrationp2s].
+When this feature is enabled, your app will use the DNS server that the destination virtual network is configured with. For more information on this feature, see [App Service virtual network integration][vnetintegrationp2s].
-### Regional VNet Integration
+### <a id="regional-vnet-integration"></a>Regional virtual network integration
-Gateway-required VNet Integration is useful, but it doesn't solve the problem of accessing resources across ExpressRoute. On top of needing to reach across ExpressRoute connections, there's a need for apps to be able to make calls to services secured by service endpoint. Another VNet Integration capability can meet these needs.
+Gateway-required virtual network integration is useful, but it doesn't solve the problem of accessing resources across ExpressRoute. On top of needing to reach across ExpressRoute connections, there's a need for apps to be able to make calls to services secured by service endpoint. Another virtual network integration capability can meet these needs.
-The regional VNet Integration feature enables you to place the back end of your app in a subnet in a Resource Manager virtual network in the same region as your app. This feature isn't available from an App Service Environment, which is already in a virtual network. Use cases for this feature:
+The regional virtual network integration feature enables you to place the back end of your app in a subnet in a Resource Manager virtual network in the same region as your app. This feature isn't available from an App Service Environment, which is already in a virtual network. Use cases for this feature:
* Access resources in Resource Manager virtual networks in the same region. * Access resources in peered virtual networks, including cross region connections.
The regional VNet Integration feature enables you to place the back end of your
* Help to secure all outbound traffic. * Force tunnel all outbound traffic.
-![Diagram that illustrates VNet Integration.](media/networking-features/vnet-integration.png)
+![Diagram that illustrates virtual network integration.](media/networking-features/vnet-integration.png)
-To learn more, see [App Service VNet Integration][vnetintegration].
+To learn more, see [App Service virtual network integration][vnetintegration].
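Enabling the integration amounts to connecting the app to a delegated subnet in the same region. A minimal Azure CLI sketch with placeholder names:

```azurecli
# Placeholder names; connect the app's back end to an integration subnet.
az webapp vnet-integration add \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --vnet MyVnet \
  --subnet integration-subnet
```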
### App Service Environment
An App Service Environment (ASE) is a single-tenant deployment of the Azure App
* Access resources across service endpoints. * Access resources across private endpoints.
-With an ASE, you don't need to use VNet Integration because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network or private endpoints in the virtual network, you don't need to do any additional configuration. If you want to access resources across ExpressRoute, you're already in the virtual network and don't need to configure anything on the ASE or the apps in it.
+With an ASE, you don't need to use virtual network integration because the ASE is already in your virtual network. If you want to access resources like SQL or Azure Storage over service endpoints, enable service endpoints on the ASE subnet. If you want to access resources in the virtual network or private endpoints in the virtual network, you don't need to do any additional configuration. If you want to access resources across ExpressRoute, you're already in the virtual network and don't need to configure anything on the ASE or the apps in it.
-Because the apps in an ILB ASE can be exposed on a private IP address, you can easily add WAF devices to expose just the apps that you want to the internet and help keep the rest secure. This feature can help make the development of multitier applications easier.
+Because the apps in an ILB ASE can be exposed on a private IP address, you can easily add WAF devices to expose just the apps that you want to the internet and help keep the rest secure. This feature can help make the development of multi-tier applications easier.
-Some things aren't currently possible from the multitenant service but are possible from an ASE. Here are some examples:
+Some things aren't currently possible from the multi-tenant service but are possible from an ASE. Here are some examples:
* Host your apps in a single-tenant service.
-* Scale up to many more instances than are possible in the multitenant service.
+* Scale up to many more instances than are possible in the multi-tenant service.
* Load private CA client certificates for use by your apps with private CA-secured endpoints.
-* Force TLS 1.1 across all apps hosted in the system without any ability to disable it at the app level.
+* Force TLS 1.2 across all apps hosted in the system without any ability to disable it at the app level.
![Diagram that illustrates an ASE in a virtual network.](media/networking-features/app-service-environment.png) The ASE provides the best story around isolated and dedicated app hosting, but it does involve some management challenges. Some things to consider before you use an operational ASE: * An ASE runs inside your virtual network, but it does have dependencies outside the virtual network. Those dependencies must be allowed. For more information, see [Networking considerations for an App Service Environment][networkinfo].
- * An ASE doesn't scale immediately like the multitenant service. You need to anticipate scaling needs rather than reactively scaling.
+ * An ASE doesn't scale immediately like the multi-tenant service. You need to anticipate scaling needs rather than reactively scaling.
* An ASE does have a higher up-front cost. To get the most out of your ASE, you should plan to put many workloads into one ASE rather than using it for small efforts. * The apps in an ASE can't selectively restrict access to some apps in the ASE and not others. * An ASE is in a subnet, and any networking rules apply to all the traffic to and from that ASE. If you want to assign inbound traffic rules for just one app, use access restrictions. ## Combining features
-The features noted for the multitenant service can be used together to solve more elaborate use cases. Two of the more common use cases are described here, but they're just examples. By understanding what the various features do, you can meet nearly all your system architecture needs.
+The features noted for the multi-tenant service can be used together to solve more elaborate use cases. Two of the more common use cases are described here, but they're just examples. By understanding what the various features do, you can meet nearly all your system architecture needs.
### Place an app into a virtual network
-You might wonder how to put an app into a virtual network. If you put your app into a virtual network, the inbound and outbound endpoints for the app are within the virtual network. An ASE is the best way to solve this problem. But you can meet most of your needs within the multitenant service by combining features. For example, you can host intranet-only applications with private inbound and outbound addresses by:
+You might wonder how to put an app into a virtual network. If you put your app into a virtual network, the inbound and outbound endpoints for the app are within the virtual network. An ASE is the best way to solve this problem. But you can meet most of your needs within the multi-tenant service by combining features. For example, you can host intranet-only applications with private inbound and outbound addresses by:
* Creating an application gateway with private inbound and outbound addresses. * Securing inbound traffic to your app with service endpoints.
-* Using the new VNet Integration feature so the back end of your app is in your virtual network.
+* Using the virtual network integration feature so the back end of your app is in your virtual network.
This deployment style won't give you a dedicated address for outbound traffic to the internet or the ability to lock down all outbound traffic from your app. It will give you much of what you would otherwise get only with an ASE.
-### Create multitier applications
+### Create multi-tier applications
-A multitier application is an application in which the API back-end apps can be accessed only from the front-end tier. There are two ways to create a multitier application. Both start by using VNet Integration to connect your front-end web app to a subnet in a virtual network. Doing so will enable your web app to make calls into your virtual network. After your front-end app is connected to the virtual network, you need to decide how to lock down access to your API application. You can:
+A multi-tier application is an application in which the API back-end apps can be accessed only from the front-end tier. There are two ways to create a multi-tier application. Both start by using virtual network integration to connect your front-end web app to a subnet in a virtual network. Doing so will enable your web app to make calls into your virtual network. After your front-end app is connected to the virtual network, you need to decide how to lock down access to your API application. You can:
* Host both the front end and the API app in the same ILB ASE, and expose the front-end app to the internet by using an application gateway.
-* Host the front end in the multitenant service and the back end in an ILB ASE.
-* Host both the front end and the API app in the multitenant service.
+* Host the front end in the multi-tenant service and the back end in an ILB ASE.
+* Host both the front end and the API app in the multi-tenant service.
-If you're hosting both the front end and API app for a multitier application, you can:
+If you're hosting both the front end and API app for a multi-tier application, you can:
- Expose your API application by using private endpoints in your virtual network:
If you're hosting both the front end and API app for a multitier application, yo
Here are some considerations to help you decide which method to use:
-* When you use service endpoints, you only need to secure traffic to your API app to the integration subnet. This helps to secure the API app, but you could still have data exfiltration from your front-end app to other apps in the app service.
+* When you use service endpoints, you only need to secure traffic to your API app to the integration subnet. Service endpoints help to secure the API app, but you could still have data exfiltration from your front-end app to other apps in App Service.
* When you use private endpoints, you have two subnets at play, which adds complexity. Also, the private endpoint is a top-level resource and adds management overhead. The benefit of using private endpoints is that you don't have the possibility of data exfiltration. Either method will work with multiple front ends. On a small scale, service endpoints are easier to use because you simply enable service endpoints for the API app on the front-end integration subnet. As you add more front-end apps, you need to adjust every API app to include service endpoints with the integration subnet. When you use private endpoints, there's more complexity, but you don't have to change anything on your API apps after you set a private endpoint. ### Line-of-business applications
-Line-of-business (LOB) applications are internal applications that aren't normally exposed for access from the internet. These applications are called from inside corporate networks where access can be strictly controlled. If you use an ILB ASE, it's easy to host your line-of-business applications. If you use the multitenant service, you can either use private endpoints or use service endpoints combined with an application gateway. There are two reasons to use an application gateway with service endpoints instead of using private endpoints:
+Line-of-business (LOB) applications are internal applications that aren't normally exposed for access from the internet. These applications are called from inside corporate networks where access can be strictly controlled. If you use an ILB ASE, it's easy to host your line-of-business applications. If you use the multi-tenant service, you can either use private endpoints or use service endpoints combined with an application gateway. There are two reasons to use an application gateway with service endpoints instead of using private endpoints:
* You need WAF protection on your LOB apps. * You want to load balance to multiple instances of your LOB apps.
Configuring private endpoints will expose your apps on a private address, but yo
## App Service ports
-If you scan App Service, you'll find several ports that are exposed for inbound connections. There's no way to block or control access to these ports in the multitenant service. Here's the list of exposed ports:
+If you scan App Service, you'll find several ports that are exposed for inbound connections. There's no way to block or control access to these ports in the multi-tenant service. Here's the list of exposed ports:
| Use | Port or ports | |-|-|
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
When you scale up or down in size, the required address space is doubled for a s
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required.
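Creating a suitably sized, delegated subnet up front might look like the following sketch (names and the address space are placeholders):

```azurecli
# Placeholder names; a /26 provides 64 addresses, and the subnet is delegated to App Service.
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name integration-subnet \
  --address-prefixes 10.0.2.0/26 \
  --delegations Microsoft.Web/serverFarms
```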
When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
+### Permissions
+ You must have at least the following role-based access control permissions on the subnet or at a higher level to configure regional virtual network integration through the Azure portal, the CLI, or by setting the `virtualNetworkSubnetId` site property directly: | Action | Description |
If the virtual network is in a different subscription than the app, you must ens
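As a sketch of setting the `virtualNetworkSubnetId` site property directly with the generic `az resource update` command (names are placeholders):

```azurecli
# Placeholder names; look up the integration subnet ID, then point the site property at it.
subnetId=$(az network vnet subnet show \
  --resource-group MyVnetGroup --vnet-name MyVnet \
  --name integration-subnet --query id --output tsv)

az resource update \
  --resource-group MyAppGroup --name MyWebApp \
  --resource-type "Microsoft.Web/sites" \
  --set properties.virtualNetworkSubnetId="$subnetId"
```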
### Routes
-You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of you app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic is routed from your virtual network and out.
+You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
By default, only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) sent from your app is routed through the virtual network integration. Unless you configure application routing or configuration routing options, no other traffic is sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration.
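If you instead want all outbound traffic, not just RFC1918 ranges, to go through the integration, recent Azure CLI versions expose a route-all switch; older setups used the `WEBSITE_VNET_ROUTE_ALL` app setting. A sketch with placeholder names:

```azurecli
# Placeholder names; route all outbound app traffic through the virtual network integration.
az webapp config set \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --vnet-route-all-enabled true
```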
When you are using virtual network integration, you can configure how parts of t
##### Content storage
-Bringing you own storage for content in often used in Functions where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
+Bringing your own storage for content is often used in Functions, where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
To route content storage traffic through the virtual network integration, you need to add an app setting named `WEBSITE_CONTENTOVERVNET` with the value `1`. In addition to adding the app setting, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445.
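For a function app, adding the setting might look like this sketch (resource names are placeholders):

```azurecli
# Placeholder names; route content storage traffic through the virtual network integration.
az functionapp config appsettings set \
  --resource-group MyResourceGroup \
  --name MyFunctionApp \
  --settings WEBSITE_CONTENTOVERVNET=1
```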
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
The following table lists the values associated with the default health probe:
### Solution
-* Ensure that a default site is configured and is listening at 127.0.0.1.
+* The host value of the request will be set to 127.0.0.1. Ensure that a default site is configured and is listening at 127.0.0.1.
+* The protocol of the request is determined by the BackendHttpSetting protocol.
+* The URI path will be set to `/`.
* If BackendHttpSetting specifies a port other than 80, the default site should be configured to listen at that port.
-* The call to `http://127.0.0.1:port` should return an HTTP result code of 200. This should be returned within the 30-second timeout period.
-* Ensure that the port configured is open and that there are no firewall rules or Azure Network Security Groups, which block incoming or outgoing traffic on the port configured.
+* The call to `protocol://127.0.0.1:port` should return an HTTP result code of 200. This should be returned within the 30-second timeout period.
+* Ensure the configured port is open and there are no firewall rules or Azure Network Security Groups blocking incoming or outgoing traffic on the port configured.
* If Azure classic VMs or Cloud Service is used with a FQDN or a public IP, ensure that the corresponding [endpoint](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints?toc=%2fazure%2fapplication-gateway%2ftoc.json) is opened. * If the VM is configured via Azure Resource Manager and is outside the VNet where the application gateway is deployed, a [Network Security Group](../virtual-network/network-security-groups-overview.md) must be configured to allow access on the desired port.
application-gateway Ingress Controller Add Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-add-health-probes.md
spec:
``` Kubernetes API Reference:
-* [Container Probes]https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#httpgetaction-v1-core)
+* [Container Probes](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#httpgetaction-v1-core)
> [!NOTE] > * `readinessProbe` and `livenessProbe` are supported when configured with `httpGet`.
attestation Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/audit-logs.md
Audit logs are provided in JSON format. Here is an example of what an audit log
"env_cloud_deploymentUnit": null } ```-
-## Access Audit Logs
-
-These logs are stored in Azure and can't be accessed directly. If you need to access these logs, file a support ticket. For more information, see [Contact Microsoft Support](https://azure.microsoft.com/support/options/).
-
-Once the support ticket is filed, Microsoft will download and provide you access to these logs.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
The azcmagent tool is used to configure the Azure Connected Machine agent during
* **show** - View agent status and its configuration properties (Resource Group name, Subscription ID, version, etc.), which can help when troubleshooting an issue with the agent. Include the `-j` parameter to output the results in JSON format.
-* **config** - View and change settings to enable features and control agent behavior
+* **config** - View and change settings to enable features and control agent behavior.
+
+* **check** - Validate network connectivity.
* **logs** - Creates a .zip file in the current directory containing logs to assist you while troubleshooting.
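For example, the subcommands above might be combined as follows when troubleshooting a machine; exact flags can vary by agent version, so treat this as a sketch:

```bash
# Run on the connected machine; requires root or administrator rights.
sudo azcmagent check     # validate network connectivity to required endpoints
sudo azcmagent show -j   # print agent status and configuration as JSON
sudo azcmagent logs      # collect a .zip of logs in the current directory
```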
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 03/14/2022 Last updated : 03/21/2022
Azure Arc-enabled servers does not support installing the agent on virtual machi
## Supported operating systems
-The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
+The following versions of the Windows and Linux operating systems are officially supported for the Azure Connected Machine agent. Only x86-64 (64-bit) architectures are supported; x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported.
* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported when running as a virtual machine on Azure Stack HCI * Azure Stack HCI
-* Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
-* CentOS Linux 7 and 8 (x64)
-* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
-* Red Hat Enterprise Linux (RHEL) 7 and 8 (x64)
-* Amazon Linux 2 (x64)
-* Oracle Linux 7 and 8 (x64)
+* Ubuntu 16.04, 18.04, and 20.04 LTS
+* CentOS Linux 7 and 8
+* SUSE Linux Enterprise Server (SLES) 12 and 15
+* Red Hat Enterprise Linux (RHEL) 7 and 8
+* Amazon Linux 2
+* Oracle Linux 7 and 8
> [!WARNING] > If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
public static async Task Run(
The following example shows how the custom type is used in both the trigger and an Event Grid output binding: # [C# Script](#tab/csharp-script)
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
This article provides coverage information for Azure Maps routing. Upon a search query, Azure Maps returns an optimal route from location A to location B. You're provided with accurate travel times, live updates of travel information, and route instructions. You can also add more search parameters such as current traffic, vehicle type, and conditions to avoid. The optimization of the route depends on the region. That's because Azure Maps has various levels of information and accuracy for different regions. The tables in this article list the regions and what kind of information you can request for them. -- Check out coverage for [**Geocoding**](geocoding-coverage.md).-- Check out coverage for [**Traffic**](traffic-coverage.md). -- Check out coverage for [**Render**](render-coverage.md).
+## Routing information supported
+
+In the [Azure Maps routing coverage tables](#azure-maps-routing-coverage-tables), the following information is available.
+
+### Calculate Route
+
+The Calculate Route service calculates a route between an origin and a destination, passing through waypoints if they're specified. For more information, see [Get Route Directions](/rest/api/maps/route/get-route-directions) in the REST API documentation.
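As a sketch, a Get Route Directions call can be made directly over REST; the subscription key is a placeholder, and the `query` parameter takes `lat,lon` pairs separated by a colon:

```bash
# Placeholder key; request a route between two coordinate pairs (lat,lon:lat,lon).
curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=52.50931,13.42936:52.50274,13.43872&subscription-key=<your-subscription-key>"
```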
+
+### Calculate Reachable Range
+
+The Calculate Reachable Range service calculates a set of locations that can be reached from the origin point. For more information, see [Get Route Range](/rest/api/maps/route/get-route-range) in the REST API documentation.
+
+### Matrix Routing
+
+The Matrix Routing service calculates travel time and distance between all possible pairs in a list of origins and destinations. It does not provide any detailed information about the routes. You can get one-to-many, many-to-one, or many-to-many route options simply by varying the number of origins and/or destinations. For more information, see [Matrix Routing service](/rest/api/maps/route/post-route-matrix) in the REST API documentation.
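A minimal sketch of a synchronous Matrix Routing request, assuming the `sync` endpoint and GeoJSON `MultiPoint` bodies; the key and coordinates are placeholders, and coordinates are in `[longitude, latitude]` order:

```bash
# Placeholder key/coordinates; one origin, two destinations.
curl -X POST "https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key=<your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "origins":      { "type": "MultiPoint", "coordinates": [[4.85106, 52.36006]] },
        "destinations": { "type": "MultiPoint", "coordinates": [[4.85056, 52.36187], [4.85003, 52.36241]] }
      }'
```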
+
+### Real-time Traffic
+
+Delivers real-time information about traffic jams, road closures, and a detailed view of the current speed and travel times across the entire road network. For more information, see [Traffic](/rest/api/maps/traffic) in the REST API documentation.
+
+### Truck routes
+
+The Azure Maps Truck Routing API provides travel routes that take truck attributes into consideration. Truck attributes include things such as width, height, weight, turning radius, and type of cargo. This is important because not all trucks can travel the same routes as other vehicles. Here are some examples:
+
+- Bridges have heights and weight limits.
+- Tunnels often have restrictions on flammable or hazardous materials.
+- Longer trucks have difficulty making tight turns.
+- Highways often have a separate speed limit for trucks.
+- Certain trucks may want to avoid roads that have steep gradients.
+
+Azure Maps supports truck routing in the countries/regions indicated in the tables below.
<! ### Legend
This article provides coverage information for Azure Maps routing. Upon a search
| ◑ | Region has partial routing data. | -->
-The following tables provides coverage information for Azure Maps routing.
+## Azure Maps routing coverage tables
+
+The following tables provide coverage information for Azure Maps routing.
-## Americas
+### Americas
-| Country/Region | Calculate Route & Reachable Range | Real-time Traffic | Truck Route |
+| Country/Region | Calculate Route, Reachable Range & Matrix Routing | Real-time Traffic | Truck Route |
|-|::|:--:|:--:| | Anguilla | ✓ | | | | Antigua & Barbuda | ✓ | | |
The following tables provides coverage information for Azure Maps routing.
| Uruguay | ✓ | ✓ | ✓ | | Venezuela | ✓ | | |
-## Asia Pacific
+### Asia Pacific
-| Country/Region | Calculate Route & Reachable Range | Real-time Traffic | Truck Route |
+| Country/Region | Calculate Route, Reachable Range & Matrix Routing | Real-time Traffic | Truck Route |
|-|::|:--:|:--:| | American Samoa | ✓ | | | | Australia | ✓ | ✓ | ✓ |
The following tables provides coverage information for Azure Maps routing.
| Vietnam | ✓ | ✓ | ✓ | | Wallis & Futuna | ✓ | | |
-## Europe
+### Europe
-| Country/Region | Calculate Route & Reachable Range | Real-time Traffic | Truck Route |
+| Country/Region | Calculate Route, Reachable Range & Matrix Routing | Real-time Traffic | Truck Route |
|-|::|:--:|:--:| | Albania | ✓ | | ✓ | | Andorra | ✓ | ✓ | ✓ |
The following tables provides coverage information for Azure Maps routing.
| Uzbekistan | ✓ | | | | Vatican City | ✓ | ✓ | ✓ |
-## Middle East & Africa
+### Middle East & Africa
-| Country/Region | Calculate Route & Reachable Range | Real-time Traffic | Truck Route |
+| Country/Region | Calculate Route, Reachable Range & Matrix Routing | Real-time Traffic | Truck Route |
|-|::|:--:|:--:| | Afghanistan | ✓ | | | | Algeria | ✓ | | |
The following tables provides coverage information for Azure Maps routing.
## Next steps For more information about Azure Maps routing, see the [Routing](/rest/api/maps/route) reference pages.+
+For more coverage tables, see:
+
+- Check out coverage for [**Geocoding**](geocoding-coverage.md).
+- Check out coverage for [**Traffic**](traffic-coverage.md).
+- Check out coverage for [**Render**](render-coverage.md).
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/16/2022 Last updated : 3/21/2022
The methods for defining data collection for the existing agents are distinctly
The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md). ## Should I switch to the Azure Monitor agent?
-The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-overview.md). To start transitioning your VMs off the current agents to the new agent, consider the following factors:
+To start transitioning your VMs off the current agents to the new agent, consider the following factors:
-- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will most likely be provided in this new agent.
-
- Assess whether your environment is supported by the Azure Monitor agent. If not, you might need to stay with the current agent. If the Azure Monitor agent supports your current environment, consider transitioning to it.
-- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality, such as custom log collection and integration with all solutions. ([View supported features and services](#supported-services-and-features).)
+- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environments, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.
+
+- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
- Most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Over time, more functionality will be available only in the new agent. Consider whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
+ That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
- If the Azure Monitor agent has all the core capabilities you require, consider transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.
+ If the Azure Monitor agent has all the core capabilities you require, start transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.
- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available. Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.
+## Coexistence with other agents
+The Azure Monitor agent can coexist (run side by side on the same machine) with the existing agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin the transition despite the limitations, review the following points carefully:
+- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As such, ensure you're not collecting the same data from both agents. If you are, ensure they're **collecting from different machines** or **going to separate destinations**. A query sketch for spotting duplicate collection follows this list.
+- Besides data duplication, this would also generate more charges for data ingestion and retention.
+- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
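One way to check which agents are reporting from each machine is to compare heartbeat categories. A minimal sketch using the Azure CLI `az monitor log-analytics query` command (available through the `log-analytics` CLI extension; the workspace GUID is a placeholder):

```azurecli
# Placeholder workspace GUID; lists which agent category reports from each computer.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Heartbeat | summarize count() by Computer, Category | order by Computer asc"
```

If a computer shows more than one category, confirm the agents aren't sending the same events or performance counters to the same destination.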
+ ## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.
The following table shows the current support for the Azure Monitor agent with A
| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud private preview. | [Sign-up link](https://aka.ms/AMAgent) | | [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (private preview) that doesn't require an agent. | [Sign-up link](https://www.yammer.com/azureadvisors/threads/1064001355087872) |
-## Coexistence with other agents
-The Azure Monitor agent can coexist with the existing agents so that you can continue to use their existing functionality during evaluation or migration. This capability is important because of the limitations that support existing solutions. Be careful in collecting duplicate data because it could skew query results and generate more charges for data ingestion and retention.
-
-For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data.
-
-As such, ensure you're not collecting the same data from both agents. If you are, ensure they're going to separate destinations.
- ## Costs There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For details on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
See this alert stateless evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. | 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert state remains ACTIVE.
-Stateful alerts fire once per incident and resolve. The alert rule resolves when the alert condition isn't met for 30 minutes for a specific evaluation period (to account for log ingestion delay), and for three consecutive evaluations to reduce noise if there is flapping conditions. For example, with a frequency of 5 minutes, the alert resolve after 40 minutes or with a frequency of 1 minute, the alert resolve after 32 minutes. The resolved notification is sent out via web-hooks or email, the status of the alert instance (called monitor state) in Azure portal is also set to resolved.
+Stateful alerts fire once per incident and resolve. The alert rule resolves when the alert condition isn't met for 30 minutes for a specific evaluation period (to account for [log ingestion delay](../alerts/alerts-troubleshoot-log.md#data-ingestion-time-for-logs)) and for three consecutive evaluations, to reduce noise from flapping conditions. For example, with a frequency of 5 minutes the alert resolves after 40 minutes, and with a frequency of 1 minute it resolves after 32 minutes (the 30-minute window plus two more evaluation cycles). The resolved notification is sent out via webhooks or email, and the status of the alert instance (called monitor state) in the Azure portal is also set to resolved.
The stateful alerts feature is currently in preview. You can set this using **Automatically resolve alerts** in the alert details section.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Connection strings provide a single configuration setting and eliminate the need
## Supported SDK Versions -- .NET and .NET Core v2.12.0+
+- .NET and .NET Core [LTS](https://dotnet.microsoft.com/download/visual-studio-sdks)
- Java v2.5.1 and Java 3.0+ - JavaScript v2.3.0+ - NodeJS v1.5.0+
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
By default no sampling is enabled in the Java auto-instrumentation and SDK. Curr
#### Configuring Java auto-instrumentation
-* To configure sampling overrides that override the defaulr sampling rate and apply different sampling rates to selected requests and dependencies, use the [sampling override guide](./java-standalone-sampling-overrides.md#getting-started).
-* To configure fixed-rate samping that applies to all of your telemetry, use the [fixed rate sampling guide](./java-standalone-config.md#sampling).
+* To configure sampling overrides that override the default sampling rate and apply different sampling rates to selected requests and dependencies, use the [sampling override guide](./java-standalone-sampling-overrides.md#getting-started).
+* To configure fixed-rate sampling that applies to all of your telemetry, use the [fixed rate sampling guide](./java-standalone-config.md#sampling).
#### Configuring Java 2.x SDK
When presenting telemetry back to you, the Application Insights service adjusts
The accuracy of the approximation largely depends on the configured sampling percentage. Also, the accuracy increases for applications that handle a large volume of generally similar requests from lots of users. On the other hand, for applications that don't work with a significant load, sampling is not needed as these applications can usually send all their telemetry while staying within the quota, without causing data loss from throttling.
+## Log query accuracy and high sample rates
+
+As the application is scaled up, it may be processing dozens, hundreds, or thousands of work items per second. Logging an event for each of them is neither resource efficient nor cost effective. Application Insights uses sampling to adapt to growing telemetry volume in a flexible manner and to control resource usage and cost.
+
+However, sampling can affect the accuracy of query results gained from sampled telemetry. For example, suppose 25 different users each made a single request to a web application, and each of those requests generated 1 Request telemetry record, 1 Dependency telemetry record, 1 Trace Message telemetry record, and 1 Exception telemetry record. This adds up to a total of 100 raw telemetry records, displayed in the image below.
+
+![Sample rate at 0 percent and the itemCount is 1](./media/sampling/records-with-legend-0-sampled.png) **Sample Rate 0% 25 Requests (itemCount=1) 25 Dependencies (itemCount=1) 25 Traces (itemCount=1) 25 Exceptions (itemCount=1)**
+
+If the Application Insights SDK did **not** need to throttle the telemetry, the application would send all 100 records to the ingestion endpoint. This is the equivalent of a sample rate of 0%. The SDK would package all of the telemetry records into JSON payloads and send them to the ingestion service. Every one of those 100 telemetry records would have the `itemCount` field set to 1, because no records need to be dropped for sampling and each single telemetry record represents a count of 1. Running a query of `sum(itemCount)` for the requests telemetry would return 25, which matches the 25 requests and is 25% of the 100 telemetry records produced by the web application.
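A minimal sketch of that `sum(itemCount)` check, run through the Azure CLI with the Application Insights extension (the app ID is a placeholder):

```azurecli
# Placeholder app ID; sums itemCount across request telemetry.
az monitor app-insights query \
  --app "00000000-0000-0000-0000-000000000000" \
  --analytics-query "requests | summarize sum(itemCount)"
```

With no sampling, this returns 25 for the example above. Under heavy sampling, the same query can return inflated counts, as described next.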
+
+When the SDK **does** throttle the telemetry through sampling, the `itemCount` no longer matches the number of telemetry records stored. For example, suppose the decision was made to keep 1% of all the records, a sample rate of 99% for the 100 telemetry records in the above example. That would mean only a single record out of all the items would be stored. To illustrate this, if the SDK picks one of the request telemetry records, it has to drop all of the other 99 records (24 requests, 25 dependencies, 25 traces, 25 exceptions). Although only 1 record is stored, the SDK sets the `itemCount` field for the request to 100, because the single ingested record represents 100 total telemetry records that executed within the web application.
+
+![Sample rate at 99 percent and the itemCount is 100 visualized](./media/sampling/sampling-with-legend-99-sampled.png)
+![Sample rate at 99 percent and the itemCount is 100 in percentages](./media/sampling/sample-rate-99.png) **Sample Rate 99% 1 Request (itemCount=100) 0 Dependencies 0 Traces 0 Exceptions**
++
+One caveat for this example is that the Application Insights SDK samples based on operation ID, meaning that an `operation_Id` is selected and **all** of the telemetry for that single operation is ingested and saved (not random individual records). This can also result in fluctuations based on per-operation telemetry counts: if one operation has a higher number of records and that operation is sampled, it shows up as a spike in adjusted sample rates. For example, one operation might produce 4,000 telemetry records while the other operations produce only 1 to 3 telemetry records each. The sampling based on `operation_Id` is done to enable an end-to-end view for failing operations. All telemetry for an operation can be reviewed, including exception details, to precisely diagnose application code errors.
+
+As sampling rates increase, the accuracy of log-based queries decreases, and results are usually inflated. This only impacts the accuracy of log-based queries when sampling is enabled and the sample rates are in a higher range (~60%). The impact varies based on telemetry types, telemetry counts per operation, and other factors.
+
+To address the problems introduced by sampling, pre-aggregated metrics are used in the SDKs. Additional details about these metrics, log-based and pre-aggregated, are in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics count the 25 requests from the example and send a metric to the MDM account reporting "this web app processed 25 requests", but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the results of log-based queries. They can be viewed on the Metrics blade of the Application Insights portal.
+ ## Frequently asked questions *What is the default sampling behavior in the ASP.NET and ASP.NET Core SDKs?*
Prior to v2.5.0-beta2 of the ASP.NET SDK, and v2.2.0-beta3 of ASP.NET Core SDK,
## Next steps * [Filtering](./api-filtering-sampling.md) can provide more strict control of what your SDK sends.
-* Read the Developer Network article [Optimize Telemetry with Application Insights](/archive/msdn-magazine/2017/may/devops-optimize-telemetry-with-application-insights).
+* Read the Developer Network article [Optimize Telemetry with Application Insights](/archive/msdn-magazine/2017/may/devops-optimize-telemetry-with-application-insights).
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
description: Explains the different factors that affect latency in collecting lo
Previously updated : 07/18/2019+ Last updated : 03/21/2022
Heartbeat
``` ## Next steps
-* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/monitor/v1_3/) for Azure Monitor.
+* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/monitor/v1_3/) for Azure Monitor.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
description: Learn about how Azure Monitor Logs protects your privacy and secure
Previously updated : 11/11/2020+ Last updated : 03/21/2022
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
Soon after the daily limit is reached, the collection of billable data types sto
> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. For a query that is helpful in studying the daily cap behavior, see the [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) section in this article. > [!WARNING]
-> The daily cap doesn't stop the collection of data types **WindowsEvent**, **SecurityAlert**, **SecurityBaseline**, **SecurityBaselineSummary**, **SecurityDetection**, **SecurityEvent**, **WindowsFirewall**, **MaliciousIPCommunication**, **LinuxAuditLog**, **SysmonEvent**, **ProtectionStatus**, **Update**, and **UpdateSummary**, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017.
+> For workspaces with Microsoft Defender for Cloud, the daily cap doesn't stop the collection of data types **WindowsEvent**, **SecurityAlert**, **SecurityBaseline**, **SecurityBaselineSummary**, **SecurityDetection**, **SecurityEvent**, **WindowsFirewall**, **MaliciousIPCommunication**, **LinuxAuditLog**, **SysmonEvent**, **ProtectionStatus**, **Update**, and **UpdateSummary**, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017.
### Identify what daily data limit to define
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md
Title: Monitor health of Log Analytics workspace in Azure Monitor description: Describes how to monitor the health of your Log Analytics workspace using data in the Operation table. Previously updated : 10/20/2020+ Last updated : 03/21/2022
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
After you've enabled VM insights on your virtual machines, data will be availabl
## Single machine experience Access the single machine analysis experience from the **Monitoring** section of the menu in the Azure portal for each Azure virtual machine and Azure Arc-enabled server. These options either limit the data that you're viewing to that machine or at least set an initial filter for it. In this way, you can focus on a particular machine, view its current performance and its trending over time, and help to identify any issues it might be experiencing. -- **Overview page**: Select the **Monitoring** tab to display [platform metrics](../essentials/data-platform-metrics.md) for the virtual machine host. You get a quick view of the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations, and add more counters for analysis.
+- **Overview page**: Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:
+  - **Alerts:** the alerts fired in the last 24 hours, with important statistics about those alerts. If you don't have any alerts set up for this VM, there's a link to help you quickly create new ones.
+  - **Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics, counters from the guest operating system, such as memory, aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md), where you can perform different aggregations and add more counters for analysis.
- **Activity log**: See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started. - **Insights**: Open [VM insights](../vm/vminsights-overview.md) with the map for the current virtual machine selected. The map shows you running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
Previously updated : 03/17/2022 Last updated : 03/21/2022 # Link feature for Azure SQL Managed Instance (preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
To use the link feature, you'll need:
- Azure SQL Managed Instance provisioned on any service tier. > [!NOTE]
-> SQL Managed Instance link feature is now available in all Azure regions.
+> SQL Managed Instance link feature is available in all public Azure regions.
+> National clouds are currently not supported.
## Overview
azure-sql Sql Server Distributed Availability Group Migrate Ag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-ag.md
To create the availability group on the source instances, run this script on the
CREATE AVAILABILITY GROUP [OnPremAG] WITH ( AUTOMATED_BACKUP_PREFERENCE = PRIMARY, DB_FAILOVER = OFF,
- DTC_SUPPORT = NONE,
+ DTC_SUPPORT = NONE )
FOR DATABASE [Adventureworks] REPLICA ON N'OnPremNode1' WITH (ENDPOINT_URL = N'TCP://OnPremNode1.contoso.com:5022',
azure-sql Sql Server Distributed Availability Group Migrate Standalone Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-standalone-instance.md
CREATE AVAILABILITY GROUP [OnPremAG]
WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY, DB_FAILOVER = OFF, DTC_SUPPORT = NONE,
- CLUSTER_TYPE=NONE,
+ CLUSTER_TYPE=NONE )
FOR DATABASE [Adventureworks] REPLICA ON N'OnPremNode'
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
+
+ Title: Create reliable Websocket clients
+description: How to create reliable Websocket clients
++++ Last updated : 12/15/2021++
+# Create reliable WebSocket clients with subprotocols
+
+WebSocket client connections can drop due to intermittent network issues, and messages can be lost when connections drop. In a pubsub system, publishers are decoupled from subscribers, so it's hard for publishers to detect a subscriber's dropped connection or message loss. It's crucial for clients to overcome intermittent network issues and keep message delivery reliable. To achieve that, you can create a reliable WebSocket client with the help of the reliable subprotocols.
+
+> [!NOTE]
+> Reliable protocols are still in preview. Some changes are expected in the future.
+
+## Reliable Protocol
+
+The service supports two reliable subprotocols, `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1`. To achieve reliability, clients must follow the protocol, mainly the reconnection, publisher, and subscriber parts; otherwise message delivery may not work as expected, or the service may terminate the client for violating the protocol spec.
+
+## Initialization
+
+To use the reliable subprotocols, you must set the subprotocol when constructing the WebSocket connection. In JavaScript, you can do it as follows:
+
+- Use the JSON reliable subprotocol
+ ```js
+ var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
+ ```
+
+- Use the protobuf reliable subprotocol
+ ```js
+ var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1');
+ ```
+
+## Reconnection
+
+WebSocket connections rely on TCP, so if the connection doesn't drop, all messages are lossless and delivered in order. When network issues cause the connection to drop, all state such as group and message info is kept by the service, waiting for the reconnection to recover it. A WebSocket connection owns a session in the service, identified by `connectionId`. Reconnection is the basis of achieving reliability and must be implemented. When a new connection connects to the service using a reliable subprotocol, it receives a `Connected` message containing the `connectionId` and `reconnectionToken`.
+
+```json
+{
+ "type":"system",
+ "event":"connected",
+ "connectionId": "<connection_id>",
+ "reconnectionToken": "<reconnection_token>"
+}
+```
+
+Once the WebSocket connection drops, the client should first try to reconnect with the same `connectionId` to keep the session. Clients don't need to negotiate with the server and obtain an `access_token`. Instead, to reconnect, the client should make a WebSocket connect request directly to the service with the `connection_id` and `reconnection_token`, using the following URI:
+
+```
+wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connection_id>&awps_reconnection_token=<reconnection_token>
+```
+
+Reconnection may fail if the network issue hasn't recovered yet. The client should keep retrying to reconnect (a minimal loop is sketched below) until either:
+1. The WebSocket connection is closed with status code 1008. The status code means the `connectionId` has been removed from the service.
+2. Reconnection has kept failing for more than 1 minute.
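+
+The following is a minimal sketch of such a reconnection loop in JavaScript. Only the reconnect URI, the 1008 close code, and the 1-minute window come from the rules above; the helper structure and the fixed retry delay are illustrative assumptions.
+
+```js
+const endpoint = 'wss://test.webpubsub.azure.com/client/hubs/hub1';
+const subprotocol = 'json.reliable.webpubsub.azure.v1';
+
+let connectionId;
+let reconnectionToken;
+let firstFailureAt; // set when reconnect attempts start failing
+
+function connect(url) {
+  const ws = new WebSocket(url, subprotocol);
+
+  ws.onmessage = (e) => {
+    const msg = JSON.parse(e.data);
+    if (msg.type === 'system' && msg.event === 'connected') {
+      // Save the session identifiers and reset the failure clock.
+      connectionId = msg.connectionId;
+      reconnectionToken = msg.reconnectionToken;
+      firstFailureAt = undefined;
+    }
+    // ...handle data messages here
+  };
+
+  ws.onclose = (e) => {
+    // 1008 means the connectionId was removed; the session can't be recovered.
+    if (e.code === 1008 || !connectionId) return;
+    if (firstFailureAt === undefined) firstFailureAt = Date.now();
+    // Give up once reconnection has kept failing for more than 1 minute.
+    if (Date.now() - firstFailureAt > 60 * 1000) return;
+    const url = `${endpoint}?awps_connection_id=${connectionId}` +
+      `&awps_reconnection_token=${reconnectionToken}`;
+    setTimeout(() => connect(url), 1000); // retry delay is an assumption
+  };
+
+  return ws;
+}
+
+connect(endpoint);
+```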
+
+## Publisher
+
+Clients that send events to the event handler or publish messages to other clients are called publishers in this document. Publishers should set an `ackId` on the message to get an acknowledgment from the service about whether the message was published successfully. The `ackId` is the identifier of the message, so different messages should use different `ackId`s, while a resent message should keep the same `ackId` so the service can deduplicate it.
+
+A sample group send message:
+```json
+{
+ "type": "sendToGroup",
+ "group": "group1",
+ "dataType" : "text",
+ "data": "text data",
+ "ackId": 1
+}
+```
+
+A sample ack response:
+```json
+{
+ "type": "ack",
+ "ackId": 1,
+ "success": true
+}
+```
+
+If the service returns ack with `success: true`, the message has been processed by the service and the client can expect the message will be delivered to all subscribers.
+
+However, in some cases the service encounters a transient internal error and the message can't be sent to subscribers. In that case, the publisher receives an ack like the following and should resend the message with the same `ackId`, if resending is necessary based on the business logic.
+
+```json
+{
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "InternalServerError",
+ "message": "Internal server error"
+ }
+}
+```
+
+![Message Failure](./media/howto-develop-reliable-clients/message-failed.png)
+
+The service's ack may be lost because the WebSocket connection dropped. So publishers should get notified when the WebSocket connection drops, and resend the message with the same `ackId` after reconnection. If the message was actually processed by the service, it responds with a `Duplicate` ack and the publisher should stop resending this message (see the publisher sketch after the diagrams below).
+
+```json
+{
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "Duplicate",
+ "message": "Message with ack-id: 1 has been processed"
+ }
+}
+```
+
+![Message duplicated](./media/howto-develop-reliable-clients/message-duplicated.png)
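+
+Below is a minimal publisher sketch in JavaScript covering these ack cases. The `pending` map and the function names are illustrative assumptions; resending with the same `ackId` and stopping on `Duplicate` are what the protocol above requires.
+
+```js
+// Track in-flight messages by ackId so they can be resent after a reconnect.
+let nextAckId = 1;
+const pending = new Map(); // ackId -> message awaiting an ack
+
+function sendToGroup(ws, group, text) {
+  const ackId = nextAckId++;
+  const message = { type: 'sendToGroup', group, dataType: 'text', data: text, ackId };
+  pending.set(ackId, message);
+  ws.send(JSON.stringify(message));
+}
+
+// Call from the onmessage handler for messages with type 'ack'.
+function handleAck(ws, ack) {
+  const message = pending.get(ack.ackId);
+  if (!message) return;
+  if (ack.success || (ack.error && ack.error.name === 'Duplicate')) {
+    pending.delete(ack.ackId); // delivered, or already processed earlier
+  } else if (ack.error && ack.error.name === 'InternalServerError') {
+    ws.send(JSON.stringify(message)); // transient failure: resend with the same ackId
+  }
+}
+
+// Call after a successful reconnection: resend everything not yet acked.
+function resendPending(ws) {
+  for (const message of pending.values()) {
+    ws.send(JSON.stringify(message));
+  }
+}
+```
+
+For example, call `resendPending(ws)` right after the `connected` event arrives on a reconnected socket.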
+
+## Subscriber
+
+Clients that receive messages from event handlers or publishers are called subscribers in this document. When connections drop because of network issues, the service doesn't know how many messages were actually delivered to subscribers. So subscribers should tell the service which messages have been received. Data messages contain a `sequenceId`, and subscribers must ack it with a sequence ack message:
+
+A sample sequence ack:
+```json
+{
+ "type": "sequenceAck",
+ "sequenceId": 1
+}
+```
+
+The sequence-id is an incremental uint64 number within a connection-id session. Subscribers should record the largest sequence-id they have ever received, accept only messages with a larger sequence-id, and drop messages with a smaller or equal sequence-id. The sequence ack is cumulative: acking `sequence-id=5` means the service treats all messages with a sequence-id up to 5 as received by the subscriber. Subscribers should ack with the largest recorded sequence-id, so that the service can skip redelivering messages that subscribers have already received.
+
+All messages are delivered to subscribers in order until the WebSocket connection drops. With the sequence-id, the service knows how many messages subscribers have actually received across WebSocket connections within a connection-id session. After a WebSocket connection drops, the service redelivers the messages that were delivered but not acked by the subscriber. The service holds unacked messages up to a limit; if messages exceed the limit, the service closes the WebSocket connection and removes the connection-id session. Thus, subscribers should ack the sequence-id as soon as possible. A minimal subscriber sketch follows.
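+
+Here's that logic as a minimal JavaScript sketch; the handler name is an illustrative assumption, while the duplicate-dropping and cumulative-ack behavior mirror the rules above.
+
+```js
+// Largest sequenceId seen in this connection-id session.
+let latestSequenceId = 0;
+
+// Call from the onmessage handler for messages with type 'message'.
+function handleDataMessage(ws, msg) {
+  if (typeof msg.sequenceId === 'number') {
+    // Drop messages already processed before a reconnect.
+    if (msg.sequenceId <= latestSequenceId) return;
+    latestSequenceId = msg.sequenceId;
+    // Cumulative ack: acking N tells the service everything up to N arrived.
+    ws.send(JSON.stringify({ type: 'sequenceAck', sequenceId: latestSequenceId }));
+  }
+  // ...process msg.data here
+}
+```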
azure-web-pubsub Reference Json Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-reliable-webpubsub-subprotocol.md
+
+ Title: Reference - Azure Web PubSub supported JSON WebSocket subprotocol `json.reliable.webpubsub.azure.v1`
+description: The reference describes Azure Web PubSub supported WebSocket subprotocol `json.reliable.webpubsub.azure.v1`
++++ Last updated : 11/06/2021++
+# Azure Web PubSub supported Reliable JSON WebSocket subprotocol
+
+This document describes the subprotocol `json.reliable.webpubsub.azure.v1`.
+
+When the client is using this subprotocol, both the outgoing and incoming data frames are expected to be **JSON** payloads.
+
+> [!NOTE]
+> Reliable protocols are still in preview. Some changes are expected in the future.
+
+## Overview
+
+Subprotocol `json.reliable.webpubsub.azure.v1` empowers the client to have a highly reliable message delivery experience under network issues, and to publish-subscribe (PubSub) directly instead of doing a round trip to the upstream server. The WebSocket connection with the `json.reliable.webpubsub.azure.v1` subprotocol is called a Reliable PubSub WebSocket client.
+
+For example, in JS, a Reliable PubSub WebSocket client can be created using:
+```js
+var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
+```
+
+When using the `json.reliable.webpubsub.azure.v1` subprotocol, the client must follow [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection and the publisher and subscriber parts.
++
+## Requests
++
+### Sequence Ack
+
+Format:
+
+```json
+{
+ "type": "sequenceAck",
+    "sequenceId": "<sequenceId>"
+}
+```
+
+A Reliable PubSub WebSocket client must send a sequence ack message once it receives a message from the service. Find more details in [How to create reliable clients](./howto-develop-reliable-clients.md#subscriber).
+
+* `sequenceId` is an incremental uint64 number carried by the received message.
+
+## Responses
+
+Messages received by the client can be of several types: `ack`, `message`, and `system`. Messages with type `message` have a `sequenceId` property. The client must send a [Sequence Ack](#sequence-ack) to the service once it receives one.
+
+### Ack response
+
+If the request contains `ackId`, the service returns an ack response for this request. The client implementation should handle this ack mechanism, including waiting for the ack response in an `async`/`await` operation, and adding a timeout check for when the ack response isn't received within a certain period.
+
+Format:
+```json
+{
+ "type": "ack",
+ "ackId": 1, // The ack id for the request to ack
+ "success": false, // true or false
+ "error": {
+ "name": "Forbidden|InternalServerError|Duplicate",
+ "message": "<error_detail>"
+ }
+}
+```
+
+The client implementation SHOULD always check whether `success` is `true` or `false` first. Only when `success` is `false` should the client read from `error`.
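+
+As an illustration, the ack mechanism can be handled with one promise per `ackId`, as in the following sketch. The `waiting` map, the helper names, and the 5-second timeout are assumptions for the sketch, not part of the subprotocol.
+
+```js
+const waiting = new Map(); // ackId -> { resolve, reject, timer }
+
+// Send a request that carries an ackId and await its ack response.
+function sendWithAck(ws, request, timeoutMs = 5000) {
+  return new Promise((resolve, reject) => {
+    const timer = setTimeout(() => {
+      waiting.delete(request.ackId);
+      reject(new Error(`ack ${request.ackId} timed out`));
+    }, timeoutMs);
+    waiting.set(request.ackId, { resolve, reject, timer });
+    ws.send(JSON.stringify(request));
+  });
+}
+
+// Call from the onmessage handler for messages with type 'ack'.
+function onAck(ack) {
+  const entry = waiting.get(ack.ackId);
+  if (!entry) return;
+  clearTimeout(entry.timer);
+  waiting.delete(ack.ackId);
+  // Check success first; read error details only when success is false.
+  ack.success ? entry.resolve(ack) : entry.reject(ack.error);
+}
+```
+
+A join request could then be awaited with `await sendWithAck(ws, { type: 'joinGroup', group: 'group1', ackId: 1 })`.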
+
+### Message response
+
+Clients can receive messages published to a group that the client has joined, or from the server management role, where the server sends messages to a specific client or a specific user.
+
+1. When the message is from a group
+
+ ```json
+ {
+ "sequenceId": 1,
+ "type": "message",
+ "from": "group",
+ "group": "<group_name>",
+ "dataType": "json|text|binary",
+        "data" : {}, // The data format is based on the dataType
+ "fromUserId": "abc"
+ }
+ ```
+
+1. When the message is from the server.
+
+ ```json
+ {
+ "sequenceId": 1,
+ "type": "message",
+ "from": "server",
+ "dataType": "json|text|binary",
+ "data" : {} // The data format is based on the dataType
+ }
+ ```
+
+#### Case 1: Sending data `Hello World` to the connection through REST API with `Content-Type`=`text/plain`
+* What a simple WebSocket client receives is a text WebSocket frame with data: `Hello World`;
+* What a PubSub WebSocket client receives is as follows:
+ ```json
+ {
+ "sequenceId": 1,
+ "type": "message",
+ "from": "server",
+ "dataType" : "text",
+      "data": "Hello World"
+ }
+ ```
+
+#### Case 2: Sending data `{ "Hello" : "World"}` to the connection through REST API with `Content-Type`=`application/json`
+* What a simple WebSocket client receives is a text WebSocket frame with stringified data: `{ "Hello" : "World"}`;
+* What a PubSub WebSocket client receives is as follows:
+ ```json
+ {
+ "sequenceId": 1,
+ "type": "message",
+ "from": "server",
+ "dataType" : "json",
+ "data": {
+ "Hello": "World"
+ }
+ }
+ ```
+
+If the REST API sends the string `Hello World` using the `application/json` content type, the simple WebSocket client receives the JSON string `"Hello World"`, that is, the string wrapped in `"` characters.
+
+#### Case 3: Sending binary data to the connection through REST API with `Content-Type`=`application/octet-stream`
+* What a simple WebSocket client receives is a binary WebSocket frame with the binary data.
+* What a PubSub WebSocket client receives is as follows:
+ ```json
+ {
+ "sequenceId": 1,
+ "type": "message",
+ "from": "server",
+ "dataType" : "binary",
+ "data": "<base64_binary>"
+ }
+ ```
+
+### System response
+
+The Web PubSub service can also send system-related responses to the client.
+
+#### Connected
+
+When the connection connects to the service.
+
+```json
+{
+ "type": "system",
+ "event": "connected",
+ "userId": "user1",
+ "connectionId": "abcdefghijklmnop",
+ "reconnectionToken": "<token>"
+}
+```
+
+`connectionId` and `reconnectionToken` are used for reconnection. To reconnect, make a connect request with the following URI:
+
+```
+wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connectionId>&awps_reconnection_token=<reconnectionToken>
+```
+
+Find more details in [Reconnection](./howto-develop-reliable-clients.md#reconnection).
+
+#### Disconnected
+
+When the server closes the connection, or when the service declines the client.
+
+```json
+{
+ "type": "system",
+ "event": "disconnected",
+ "message": "reason"
+}
+```
+
+## Next steps
+
azure-web-pubsub Reference Json Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-webpubsub-subprotocol.md
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'j
``` For a simple WebSocket client, the *server* is a MUST HAVE role to handle the events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group using [join requests](#join-groups) and publish messages to a group using [publish requests](#publish-messages) directly. It can also route messages to different upstream (event handlers) by customizing the *event* the message belongs using [event requests](#send-custom-events).
-## Permissions
-
-You may have noticed that when we describe the PubSub WebSocket clients, a client can publish to other clients only when it's *authorized* to. The `role`s of the client determines the *initial* permissions the client have:
-
-| Role | Permission |
-|||
-| Not specified | The client can send event requests.
-| `webpubsub.joinLeaveGroup` | The client can join/leave any group.
-| `webpubsub.sendToGroup` | The client can publish messages to any group.
-| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`.
-| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`.
-
-The server-side can also grant or revoke permissions of the client dynamically through REST APIs or server SDKs.
## Requests
-### Join groups
-
-Format:
-
-```json
-{
- "type": "joinGroup",
- "group": "<group_name>",
- "ackId" : 1
-}
-```
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-### Leave groups
-
-Format:
-
-```json
-{
- "type": "leaveGroup",
- "group": "<group_name>",
- "ackId" : 1
-}
-```
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-### Publish messages
-
-Format:
-
-```json
-{
- "type": "sendToGroup",
- "group": "<group_name>",
- "ackId" : 1,
- "noEcho": true|false,
- "dataType" : "json|text|binary",
- "data": {}, // data can be string or valid json token depending on the dataType
-}
-```
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-* `noEcho` is optional. If set to true, this message is not echoed back to the same connection. If not set, the default value is false.
-* `dataType` can be one of `json`, `text`, or `binary`:
- * `json`: `data` can be any type that JSON supports and will be published as what it is; If `dataType` isn't specified, it defaults to `json`.
- * `text`: `data` should be in string format, and the string data will be published;
- * `binary`: `data` should be in base64 format, and the binary data will be published;
-
-#### Case 1: publish text data:
-```json
-{
- "type": "sendToGroup",
- "group": "<group_name>",
- "dataType" : "text",
- "data": "text data",
- "ackId": 1
-}
-```
-
-* What subprotocol client in this group `<group_name>` receives:
-```json
-{
- "type": "message",
- "from": "group",
- "group": "<group_name>",
- "dataType" : "text",
- "data" : "text data"
-}
-```
-* What the raw client in this group `<group_name>` receives is string data `text data`.
-
-#### Case 2: publish JSON data:
-```json
-{
- "type": "sendToGroup",
- "group": "<group_name>",
- "dataType" : "json",
- "data": {
- "hello": "world"
- }
-}
-```
-
-* What subprotocol client in this group `<group_name>` receives:
-```json
-{
- "type": "message",
- "from": "group",
- "group": "<group_name>",
- "dataType" : "json",
- "data" : {
- "hello": "world"
- }
-}
-```
-* What the raw client in this group `<group_name>` receives is serialized string data `{"hello": "world"}`.
--
-#### Case 3: publish binary data:
-```json
-{
- "type": "sendToGroup",
- "group": "<group_name>",
- "dataType" : "binary",
- "data": "<base64_binary>",
- "ackId": 1
-}
-```
-
-* What subprotocol client in this group `<group_name>` receives:
-```json
-{
- "type": "message",
- "from": "group",
- "group": "<group_name>",
- "dataType" : "binary",
- "data" : "<base64_binary>",
-}
-```
-* What the raw client in this group `<group_name>` receives is the **binary** data in the binary frame.
-
-### Send custom events
-
-Format:
-
-```json
-{
- "type": "event",
- "event": "<event_name>",
- "ackId": 1,
- "dataType" : "json|text|binary",
- "data": {}, // data can be string or valid json token depending on the dataType
-}
-```
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-`dataType` can be one of `text`, `binary`, or `json`:
-* `json`: data can be any type json supports and will be published as what it is; If `dataType` is not specified, it defaults to `json`.
-* `text`: data should be in string format, and the string data will be published;
-* `binary`: data should be in base64 format, and the binary data will be published;
-
-#### Case 1: send event with text data:
-```json
-{
- "type": "event",
- "event": "<event_name>",
- "ackId": 1,
- "dataType" : "text",
- "data": "text data",
-}
-```
-
-What the upstream event handler receives like below, the `Content-Type` for the CloudEvents HTTP request is `text/plain` for `dataType`=`text`
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: text/plain
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-text data
-
-```
-
-#### Case 2: send event with JSON data:
-```json
-{
- "type": "event",
- "event": "<event_name>",
- "ackId": 1,
- "dataType" : "json",
- "data": {
- "hello": "world"
- },
-}
-```
-
-What the upstream event handler receives like below, the `Content-Type` for the CloudEvents HTTP request is `application/json` for `dataType`=`json`
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: application/json
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-{
- "hello": "world"
-}
-
-```
-
-#### Case 3: send event with binary data:
-```json
-{
- "type": "event",
- "event": "<event_name>",
- "ackId": 1,
- "dataType" : "binary",
- "data": "base64_binary",
-}
-```
-
-What the upstream event handler receives like below, the `Content-Type` for the CloudEvents HTTP request is `application/octet-stream` for `dataType`=`binary`
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: application/octet-stream
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-binary
-
-```
-
-The WebSocket frame can be `text` format for text message frames or UTF8 encoded binaries for `binary` message frames.
-
-Service declines the client if the message does not match the described format.
## Responses
azure-web-pubsub Reference Protobuf Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-protobuf-reliable-webpubsub-subprotocol.md
+
+ Title: Reference - Azure Web PubSub-supported protobuf WebSocket subprotocol `protobuf.reliable.webpubsub.azure.v1`
+description: The reference describes the Azure Web PubSub-supported WebSocket subprotocol `protobuf.reliable.webpubsub.azure.v1`.
++++ Last updated : 11/08/2021++
+# The Azure Web PubSub-supported reliable protobuf WebSocket subprotocol
+
+This document describes the subprotocol `protobuf.reliable.webpubsub.azure.v1`.
+
+When a client is using this subprotocol, both the outgoing and incoming data frames are expected to be protocol buffers (protobuf) payloads.
+
+> [!NOTE]
+> Reliable protocols are still in preview. Some changes are expected in the future.
+
+## Overview
+
+Subprotocol `protobuf.reliable.webpubsub.azure.v1` empowers the client to have a highly reliable message delivery experience under network issues, and to publish-subscribe (PubSub) directly instead of doing a round trip to the upstream server. The WebSocket connection with the `protobuf.reliable.webpubsub.azure.v1` subprotocol is called a Reliable PubSub WebSocket client.
+
+For example, in JavaScript, you can create a Reliable PubSub WebSocket client with the protobuf subprotocol by using:
+
+```js
+// PubSub WebSocket client
+var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1');
+```
+
+When using the `protobuf.reliable.webpubsub.azure.v1` subprotocol, the client must follow [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection and the publisher and subscriber parts.
+
+> [!NOTE]
+> Currently, the Web PubSub service supports only [proto3](https://developers.google.com/protocol-buffers/docs/proto3).
++
+## Requests
+
+All request messages adhere to the following protobuf format:
+
+```protobuf
+syntax = "proto3";
+
+import "google/protobuf/any.proto";
+
+message UpstreamMessage {
+ oneof message {
+ SendToGroupMessage send_to_group_message = 1;
+ EventMessage event_message = 5;
+ JoinGroupMessage join_group_message = 6;
+ LeaveGroupMessage leave_group_message = 7;
+    SequenceAckMessage sequence_ack_message = 8; // assumed field number; SequenceAckMessage must be in the oneof to be sendable
+  }
+
+ message SendToGroupMessage {
+ string group = 1;
+ optional uint64 ack_id = 2;
+ MessageData data = 3;
+ optional bool no_echo = 4;
+ }
+
+ message EventMessage {
+ string event = 1;
+ MessageData data = 2;
+ optional uint64 ack_id = 3;
+ }
+
+ message JoinGroupMessage {
+ string group = 1;
+ optional uint64 ack_id = 2;
+ }
+
+ message LeaveGroupMessage {
+ string group = 1;
+ optional uint64 ack_id = 2;
+ }
+
+ message SequenceAckMessage {
+ uint64 sequence_id = 1;
+ }
+}
+
+message MessageData {
+ oneof data {
+ string text_data = 1;
+ bytes binary_data = 2;
+ google.protobuf.Any protobuf_data = 3;
+ }
+}
+```
++
+### Sequence Ack
+
+A Reliable PubSub WebSocket client must send a `SequenceAckMessage` once it receives a message from the service (a minimal encoding sketch follows). Find more details in [How to create reliable clients](./howto-develop-reliable-clients.md#subscriber).
+
+* `sequence_id` is an incremental uint64 number carried by the received message.
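+
+As an illustration, the following sketch encodes a sequence ack with the [protobufjs](https://www.npmjs.com/package/protobufjs) library. It parses a simplified subset of the contract above, and the `sequence_ack_message = 8` oneof field number is an assumption (see the note in the contract).
+
+```js
+const protobuf = require('protobufjs');
+
+// Simplified subset of the service contract: just the sequence ack path.
+const { root } = protobuf.parse(`
+  syntax = "proto3";
+  message UpstreamMessage {
+    oneof message {
+      SequenceAckMessage sequence_ack_message = 8; // assumed field number
+    }
+    message SequenceAckMessage {
+      uint64 sequence_id = 1;
+    }
+  }
+`);
+
+const UpstreamMessage = root.lookupType('UpstreamMessage');
+
+// Build the binary frame acknowledging everything up to sequenceId.
+function encodeSequenceAck(sequenceId) {
+  const msg = UpstreamMessage.create({
+    sequenceAckMessage: { sequenceId }, // protobufjs maps snake_case fields to camelCase
+  });
+  return UpstreamMessage.encode(msg).finish(); // Uint8Array for a binary WebSocket frame
+}
+```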
+
+## Responses
+
+All response messages adhere to the following protobuf format:
+
+```protobuf
+message DownstreamMessage {
+ oneof message {
+ AckMessage ack_message = 1;
+ DataMessage data_message = 2;
+ SystemMessage system_message = 3;
+ }
+
+ message AckMessage {
+ uint64 ack_id = 1;
+ bool success = 2;
+ optional ErrorMessage error = 3;
+
+ message ErrorMessage {
+ string name = 1;
+ string message = 2;
+ }
+ }
+
+ message DataMessage {
+ string from = 1;
+ optional string group = 2;
+ MessageData data = 3;
+ uint64 sequence_id = 4;
+ }
+
+ message SystemMessage {
+ oneof message {
+ ConnectedMessage connected_message = 1;
+ DisconnectedMessage disconnected_message = 2;
+ }
+
+ message ConnectedMessage {
+ string connection_id = 1;
+ string user_id = 2;
+ string reconnection_token = 3;
+ }
+
+ message DisconnectedMessage {
+ string reason = 2;
+ }
+ }
+}
+```
+
+### Ack response
+
+If the request contains `ackId`, the service returns an ack response for this request. The client implementation should handle this ack mechanism, including:
+* Waiting for the ack response for an `async` `await` operation.
+* Having a timeout check when the ack response isn't received during a certain period.
+
+The client implementation should always check first to see whether the `success` status is `true` or `false`. When the `success` status is `false`, the client can read from the `error` property for error details.
+
+### Message response
+
+Clients can receive messages published from a group that the client has joined. Or they can receive messages from the server management role when the server sends messages to a specific client or a specific user.
+
+You always get a `DownstreamMessage.DataMessage`, with its fields set as follows:
+
+- When the message is from a group, `from` is `group`. When the message is from the server, `from` is `server`.
+- When the message is from a group, `group` is the group name.
+
+The sender's `dataType` determines which payload field to read, as sketched below:
+* If `dataType` is `text`, use `message_response_message.data.text_data`.
+* If `dataType` is `binary`, use `message_response_message.data.binary_data`.
+* If `dataType` is `protobuf`, use `message_response_message.data.protobuf_data`.
+* If `dataType` is `json`, use `message_response_message.data.text_data`, and the content is a serialized JSON string.
+
+`DownstreamMessage.DataMessage` has a `sequence_id` property. The client must send a [Sequence Ack](#sequence-ack) to the service once it receives a message.
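+
+For illustration, here's a decoding sketch with [protobufjs](https://www.npmjs.com/package/protobufjs) that dispatches on the payload field. It parses a simplified subset of the response contract (the `google.protobuf.Any`-typed `protobuf_data` field is omitted), so treat the structure as an assumption for the sketch.
+
+```js
+const protobuf = require('protobufjs');
+
+// Simplified subset of the response contract, without the Any-typed field.
+const { root } = protobuf.parse(`
+  syntax = "proto3";
+  message DownstreamMessage {
+    oneof message {
+      DataMessage data_message = 2;
+    }
+    message DataMessage {
+      string from = 1;
+      string group = 2;
+      MessageData data = 3;
+      uint64 sequence_id = 4;
+    }
+  }
+  message MessageData {
+    oneof data {
+      string text_data = 1;
+      bytes binary_data = 2;
+    }
+  }
+`);
+
+const DownstreamMessage = root.lookupType('DownstreamMessage');
+
+// Decode a binary frame and dispatch on which payload field is set.
+function handleFrame(buffer) {
+  const msg = DownstreamMessage.decode(new Uint8Array(buffer));
+  if (msg.message !== 'dataMessage') return; // only data messages in this subset
+  const payload = msg.dataMessage.data;
+  switch (payload.data) { // oneof virtual: name of the field that is set
+    case 'textData':
+      console.log('text payload:', payload.textData); // also used for json (serialized string)
+      break;
+    case 'binaryData':
+      console.log('binary payload bytes:', payload.binaryData.length);
+      break;
+  }
+  // Remember to sequence-ack msg.dataMessage.sequenceId (see Sequence Ack above).
+}
+```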
+
+### System response
+
+The Web PubSub service can also send system-related responses to the client.
+
+#### Connected
+
+When the client connects to the service, you receive a `DownstreamMessage.SystemMessage.ConnectedMessage` message.
+`connection_id` and `reconnection_token` are used for reconnection. To reconnect, make a connect request with the following URI:
+
+```
+wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connectionId>&awps_reconnection_token=<reconnectionToken>
+```
+
+Find more details in [Reconnection](./howto-develop-reliable-clients.md#reconnection).
+
+#### Disconnected
+
+When the server closes the connection or the service declines the client, you receive a `DownstreamMessage.SystemMessage.DisconnectedMessage` message.
+
+## Next steps
+
azure-web-pubsub Reference Protobuf Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-protobuf-webpubsub-subprotocol.md
For a simple WebSocket client, the server has the *necessary* role of handling e
> [!NOTE] > Currently, the Web PubSub service supports only [proto3](https://developers.google.com/protocol-buffers/docs/proto3).
-## Permissions
-
-In the earlier description of the PubSub WebSocket client, you might have noticed that a client can publish to other clients only when it's *authorized* to do so. The client's roles determine its *initial* permissions, as listed in the following table:
-
-| Role | Permission |
-|||
-| Not specified | The client can send event requests. |
-| `webpubsub.joinLeaveGroup` | The client can join or leave any group. |
-| `webpubsub.sendToGroup` | The client can publish messages to any group. |
-| `webpubsub.joinLeaveGroup.<group>` | The client can join or leave group `<group>`. |
-| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. |
-| | |
-
-The server side can also grant or revoke a client's permissions dynamically through REST APIs or server SDKs.
## Requests
message MessageData {
} ```
-### Join groups
-
-Format:
-
-Set `join_group_message.group` to the group name.
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-### Leave groups
-
-Format:
-
-Set `leave_group_message.group` to the group name.
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-### Publish messages
-
-Format:
-
-* `ackId` is the identity of each request and should be unique. The service sends a [ack response message](#ack-response) to notify the process result of the request. More details can be found at [AckId and Ack Response](./concept-client-protocols.md#ackid-and-ack-response)
-
-There's an implicit `dataType`, which can be `protobuf`, `text`, or `binary`, depending on the `data` in `MessageData` you set. The receiver clients can use `dataType` to handle the content correctly.
-
-* `protobuf`: If you set `send_to_group_message.data.protobuf_data`, the implicit `dataType` is `protobuf`. `protobuf_data` can be of the [Any](https://developers.google.com/protocol-buffers/docs/proto3#any) message type. All other clients receive a protobuf-encoded binary, which can be deserialized by the protobuf SDK. Clients that support only text-based content (for example, `json.webpubsub.azure.v1`) receive a Base64-encoded binary.
-
-* `text`: If you set `send_to_group_message.data.text_data`, the implicit `dataType` is `text`. `text_data` should be a string. All clients with other protocols receive a UTF-8-encoded string.
-
-* `binary`: If you set `send_to_group_message.data.binary_data`, the implicit `dataType` is `binary`. `binary_data` should be a byte array. All clients with other protocols receive a raw binary without protobuf encoding. Clients that support only text-based content (for example, `json.webpubsub.azure.v1`) receive a Base64-encoded binary.
-
-#### Case 1: Publish text data
-
-Set `send_to_group_message.group` to `group`, and set `send_to_group_message.data.text_data` to `"text data"`.
-
-* The protobuf subprotocol client in group `group` receives the binary frame and can use [DownstreamMessage](#responses) to deserialize it.
-
-* The JSON subprotocol client in group `group` receives:
-
- ```json
- {
- "type": "message",
- "from": "group",
- "group": "group",
- "dataType" : "text",
- "data" : "text data"
- }
- ```
-
-* The raw client in group `group` receives string `text data`.
-
-#### Case 2: Publish protobuf data
-
-Let's assume that you have a customer message:
-
-```
-message MyMessage {
- int32 value = 1;
-}
-```
-
-Set `send_to_group_message.group` to `group` and `send_to_group_message.data.protobuf_data` to `Any.pack(MyMessage)` with `value = 1`.
-
-* The protobuf subprotocol client in group `group` receives the binary frame and can use [DownstreamMessage](#responses) to deserialize it.
-
-* The subprotocol client in group `group` receives:
-
- ```json
- {
- "type": "message",
- "from": "group",
- "group": "G",
- "dataType" : "protobuf",
- "data" : "Ci90eXBlLmdvb2dsZWFwaXMuY29tL2F6dXJlLndlYnB1YnN1Yi5UZXN0TWVzc2FnZRICCAE=" // Base64-encoded bytes
- }
- ```
-
- > [!NOTE]
- > The data is a Base64-encoded, deserializeable protobuf binary.
-
-You can use the following protobuf definition and use `Any.unpack()` to deserialize it:
-
-```protobuf
-syntax = "proto3";
-
-message MyMessage {
- int32 value = 1;
-}
-```
-
-* The raw client in group `group` receives the binary frame:
-
- ```
- # Show in hexadecimal
- 0A 2F 74 79 70 65 2E 67 6F 6F 67 6C 65 61 70 69 73 2E 63 6F 6D 2F 61 7A 75 72 65 2E 77 65 62 70 75 62 73 75 62 2E 54 65 73 74 4D 65 73 73 61 67 65 12 02 08 01
- ```
-
-#### Case 3: Publish binary data
-
-Set `send_to_group_message.group` to `group`, and set `send_to_group_message.data.binary_data` to `[1, 2, 3]`.
-
-* The protobuf subprotocol client in group `group` receives the binary frame and can use [DownstreamMessage](#responses) to deserialize it.
-
-* The JSON subprotocol client in group `group` receives:
-
- ```json
- {
- "type": "message",
- "from": "group",
- "group": "group",
- "dataType" : "binary",
- "data" : "AQID", // Base64-encoded [1,2,3]
- }
- ```
-
- Because the JSON subprotocol client supports only text-based messaging, the binary is always Base64-encoded.
-
-* The raw client in group `group` receives the binary data in the binary frame:
-
- ```
- # Show in hexadecimal
- 01 02 03
- ```
-
-### Send custom events
-
-There's an implicit `dataType`, which can be `protobuf`, `text`, or `binary`, depending on the `dataType` you set. The receiver clients can use `dataType` to handle the content correctly.
-
-* `protobuf`: If you set `event_message.data.protobuf_data`, the implicit `dataType` is `protobuf`. `protobuf_data` can be any supported protobuf type. The event handler receives the protobuf-encoded binary, which can be deserialized by any protobuf SDK.
-
-* `text`: If you set `event_message.data.text_data`, the implicit is `text`. `text_data` should be a string. The event handler receives a UTF-8-encoded string;
-
-* `binary`: If you set `event_message.data.binary_data`, the implicit is `binary`. `binary_data` should be a byte array. The event handler receives the raw binary frame.
-
-#### Case 1: Send an event with text data
-
-Set `event_message.data.text_data` to `"text data"`.
-
-The upstream event handler receives a request that's similar to the following. Note that `Content-Type` for the CloudEvents HTTP request is `text/plain`, where `dataType`=`text`.
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: text/plain
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-text data
-
-```
-
-#### Case 2: Send an event with protobuf data
-
-Assume that you've received the following customer message:
-
-```
-message MyMessage {
- int32 value = 1;
-}
-```
-
-Set `event_message.data.protobuf_data` to `any.pack(MyMessage)` with `value = 1`
-
-The upstream event handler receives a request that's similar to the following. Note that the `Content-Type` for the CloudEvents HTTP request is `application/x-protobuf`, where `dataType`=`protobuf`.
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: application/json
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-// Just show in hexadecimal; read it as binary
-0A 2F 74 79 70 65 2E 67 6F 6F 67 6C 65 61 70 69 73 2E 63 6F 6D 2F 61 7A 75 72 65 2E 77 65 62 70 75 62 73 75 62 2E 54 65 73 74 4D 65 73 73 61 67 65 12 02 08 01
-```
-
-The data is a valid protobuf binary. You can use the following `proto` and `any.unpack()` to deserialize it:
-
-```protobuf
-syntax = "proto3";
-
-message MyMessage {
- int32 value = 1;
-}
-```
-
-#### Case 3: Send an event with binary data
-
-Set `send_to_group_message.binary_data` to `[1, 2, 3]`.
-
-The upstream event handler receives a request similar to the following. For `dataType`=`binary`, the `Content-Type` for the CloudEvents HTTP request is `application/octet-stream`.
-
-```HTTP
-POST /upstream HTTP/1.1
-Host: xxxxxx
-WebHook-Request-Origin: xxx.webpubsub.azure.com
-Content-Type: application/octet-stream
-Content-Length: nnnn
-ce-specversion: 1.0
-ce-type: azure.webpubsub.user.<event_name>
-ce-source: /client/{connectionId}
-ce-id: {eventId}
-ce-time: 2021-01-01T00:00:00Z
-ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
-ce-userId: {userId}
-ce-connectionId: {connectionId}
-ce-hub: {hub_name}
-ce-eventName: <event_name>
-
-// Just show in hexadecimal; you need to read it as binary
-01 02 03
-```
-
-The WebSocket frame can be in `text` format for text message frames or UTF-8-encoded binaries for `binary` message frames.
-
-The service declines the client if the message doesn't match the prescribed format.
## Responses
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive Tier overview
-description: Learn about Archive Tier Support for Azure Backup.
+ Title: Azure Backup - Archive tier overview
+description: Learn about Archive tier support for Azure Backup.
Previously updated : 02/28/2022 Last updated : 03/21/2022
-# Overview of Archive Tier in Azure Backup
+# Overview of Archive tier in Azure Backup
Customers rely on Azure Backup to store backup data including their Long-Term Retention (LTR) backup data as per the retention needs defined by the organization's compliance rules. In most cases, the older backup data is rarely accessed and is only stored for compliance needs. Azure Backup supports backup of Long-Term Retention points in the archive tier, in addition to Snapshots and the Standard tier.
-## Supported workloads
+## Support matrix
+
+### Supported workloads
Archive tier supports the following workloads: | Workloads | Operations | | | |
-| Azure virtual machines | <ul><li>Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. </li><li>Age >= 3 months in Vault-Standard Tier </li><li>Retention left >= 6 months </li><li>No active daily and weekly dependencies. </li></ul> |
-| SQL Server in Azure virtual machines/ SAP HANA in Azure Virtual Machines | <ul><li>Only full recovery points. Logs and differentials aren't supported. </li><li>Age >= 45 days in Vault-Standard Tier. </li><li>Retention left >= 6 months </li><li>No dependencies </li></ul> |
+| Azure Virtual Machines | Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. <br><br> Age >= 3 months in Vault-standard tier. <br><br> Retention left >= 6 months. <br><br> No active daily and weekly dependencies. |
+| SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | Only full recovery points. Logs and differentials aren't supported. <br><br> Age >= 45 days in Vault-standard tier. <br><br> Retention left >= 6 months. <br><br> No dependencies. |
>[!Note]
->- Archive Tier support for SQL Servers in Azure VMs and SAP HANA in Azure VM is now generally available in multiple regions. For the detailed list of supported regions, see the [support matrix](#support-matrix).
->- Archive Tier support for Azure Virtual Machines is also in limited public preview. To sign up for limited public preview, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
+>- Archive tier support for Azure Virtual Machines, SQL Servers in Azure VMs and SAP HANA in Azure VM is now generally available in multiple regions. For the detailed list of supported regions, see the [support matrix](#support-matrix).
+>- Archive tier support for Azure Virtual Machines for the remaining regions is in limited public preview. To sign up for limited public preview, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
-## Supported clients
+### Supported clients
Archive tier supports the following clients:
+- [Azure portal](./use-archive-tier-support.md?pivots=client-portaltier)
- [PowerShell](./use-archive-tier-support.md?pivots=client-powershelltier) - [CLI](./use-archive-tier-support.md?pivots=client-clitier)-- [Azure portal](./use-archive-tier-support.md?pivots=client-portaltier)
-## How Azure Backup moves recovery points to the vault-archive tier?
+### Supported regions
+
+| Workloads | Preview | Generally available |
+| | | |
+| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | All regions, except West US 3, West India, UAE North, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. |
+| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South, UAE North, Germany West Central, Norway East. | Australia East, South Central US, West Central US, Southeast Asia, Central India. |
+
+## How does Azure Backup move recovery points to the Vault-archive tier?
> [!VIDEO https://www.youtube.com/embed/nQnH5mpiz60?start=416]
-## Archive recommendations (Only for Azure Virtual Machines)
+## Archive recommendations (only for Azure Virtual Machines)
-The recovery points for Azure Virtual Machines are incremental. When you move recovery points to archive tier, they are converted to full recovery points (to ensure that all recovery points in the archive tier are independent and isolated from each other). Thus, overall backup storage (Vault-Standard + Vault-Archive) may increase.
+The recovery points for Azure Virtual Machines are incremental. When you move recovery points to the Archive tier, they're converted to full recovery points (to ensure that all recovery points in the Archive tier are independent and isolated from each other). Thus, overall backup storage (Vault-standard + Vault-archive) may increase.
The amount of storage increase depends on the churn pattern of the Virtual Machines. - The higher the churn in the Virtual Machines, the lower the overall backup storage is when a recovery point is moved to the Archive tier.-
+- If the churn in the Virtual Machine is low, moving to the Archive tier may lead to an increase in backup storage. This may offset the price difference between the Vault-standard tier and the Vault-archive tier, and therefore might increase the overall cost.
-To resolve this, Azure Backup provides recommendation set. The recommendation set returns a list of recovery points, which if moved together to archive tier ensures cost savings.
+To resolve this, Azure Backup provides a recommendation set. The recommendation set returns a list of recovery points that, if moved together to the Archive tier, ensure cost savings.
>[!Note] >The cost savings depends on various reasons and might differ for every instance.
Stop protection and delete data deletes all recovery points. For recovery points
## Archive Tier pricing
-You can view the archive tier pricing from our [pricing page](azure-backup-pricing.md).
-
-## Support matrix
-
-| Workloads | Preview | Generally available |
-| | | |
-| SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | None | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, UK South, Central US, East US 2, West US, West US 2, West Central US, East US, South Central US, North Central US, West Europe, US Gov Virginia, US Gov Texas, US Gov Arizona, UAE North, Germany West Central, China East 2, China North 2, Norway East. |
-| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South, UAE North, Germany West Central, Norway East. | None |
-
+You can view the Archive tier pricing from our [pricing page](azure-backup-pricing.md).
## Frequently asked questions
The recovery point will remain in archive forever. For more information, see [Im
When you move your data in GRS vaults from standard tier to archive tier, the data moves into GRS archive. This is true even when Cross region restore is enabled. Once the backup data moves into archive tier, you can't restore the data into the paired region. However, during region failures, the backup data in secondary region will become available for restore.
-While restoring from recovery point in archive tier in primary region, the recovery point is copied to the Standard tier and is retained according to the rehydration duration, both in primary and secondary region. You can perform Cross region restore from these rehydrated recovery points.
+While restoring from recovery point in Archive tier in primary region, the recovery point is copied to the Standard tier and is retained according to the rehydration duration, both in primary and secondary region. You can perform Cross region restore from these rehydrated recovery points.
### I can see eligible recovery points for my Virtual Machine, but I can't see any recommendations. What can be the reason?
The recovery points for Virtual Machines meet the eligibility criteria. So, ther
No. Once protection is stopped for a particular workload, the corresponding recovery points can't be moved to the archive tier. To move recovery points to archive tier, you need to resume the protection on the data source.
+### How do I ensure that all recovery points are moved to Archive tier, if moved via Azure portal?
+
+To ensure that all recovery points are moved to the Archive tier:
+
+1. Select the required workload.
+1. Go to **Move Recovery Points** by following [these steps](./use-archive-tier-support.md?pivots=client-portaltier#move-recommended-recovery-points-for-a-particular-azure-virtual-machine).
+
+If the list of recovery points is blank, then all the eligible/recommended recovery points have been moved to the Vault-archive tier.
+ ## Next steps -- [Use archive tier](use-archive-tier-support.md)
+- [Use Archive tier](use-archive-tier-support.md)
- [Azure Backup pricing](azure-backup-pricing.md)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
As one of the [restore options](#restore-options), you can create a disk from a
- [Attach restored disks](../virtual-machines/windows/attach-managed-disk-portal.md) to an existing VM. - [Create a new VM](./backup-azure-vms-automation.md#create-a-vm-from-restored-disks) from the restored disks using PowerShell.
-1. In **Restore configuration** > **Create new** > **Restore Type**, select **Create new virtual machine**.
+1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name. 1. In **Staging location**, specify the storage account to which to copy the VHDs. [Learn more](#storage-accounts).
backup Use Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-archive-tier-support.md
Title: Use Archive Tier
-description: Learn about using Archive Tier Support for Azure Backup.
+ Title: Use Archive tier
+description: Learn about using Archive tier Support for Azure Backup.
Previously updated : 10/23/2021 Last updated : 03/21/2022
-zone_pivot_groups: backup-client-powershelltier-clitier-portaltier
+zone_pivot_groups: backup-client-portaltier-powershelltier-clitier
-# Use Archive Tier support
+# Use Archive tier support
++
+This article describes how to back up long-term retention points in the Archive tier, in addition to snapshots and the Standard tier, using the Azure portal.
+
+## Supported workloads
+
+| Workloads | Operations |
+| | |
+| Azure Virtual Machine | View archived recovery points. <br><br> Move all recommended recovery points to archive. <br><br> Restore from archived recovery points. <br><br> View archive move and restore jobs. |
+| SQL Server in Azure Virtual Machine <br><br> SAP HANA in Azure Virtual Machines | View archived recovery points. <br><br> Move all archivable recovery points to archive. <br><br> Restore from archived recovery points. <br><br> View archive move and restore jobs. |
+
+## View archived recovery points
+
+You can now view all the recovery points that have been moved to archive.
++
+## Move archivable recovery points
+
+### Move archivable recovery points for a particular SQL/SAP HANA database
+
+You can move all recovery points for a particular SQL/SAP HANA database in one go.
+
+Follow these steps:
+
+1. Select the backup item (database in SQL Server or SAP HANA in Azure VM) whose recovery points you want to move to the Vault-archive tier.
+
+1. Select **click here** to view the list of all eligible archivable recovery points.
+
+ :::image type="content" source="./media/use-archive-tier-support/view-old-recovery-points-inline.png" alt-text="Screenshot showing the process to view recovery points that are older than 7 days." lightbox="./media/use-archive-tier-support/view-old-recovery-points-expanded.png":::
+
+1. Click **Move recovery points to archive** to move all recovery points to the Vault-archive tier.
+
+ :::image type="content" source="./media/use-archive-tier-support/move-all-recovery-points-to-vault-inline.png" alt-text="Screenshot showing the option to start the move process of all recovery points to the Vault-archive tier." lightbox="./media/use-archive-tier-support/move-all-recovery-points-to-vault-expanded.png":::
+
+ >[!Note]
+ >This option moves all the archivable recovery points to the Vault-archive tier.
+
+You can monitor the progress in backup jobs.
+
+### Move recommended recovery points for a particular Azure Virtual Machine
+
+You can move all recommended recovery points for selected Azure Virtual Machines to the Vault-archive tier. [Learn more](archive-tier-support.md#archive-recommendations-only-for-azure-virtual-machines) about the recommendation set for Azure Virtual Machines.
+
+Follow these steps:
+
+1. Select the Virtual Machine whose recovery points you want to move to the Vault-archive tier.
+
+1. Select **click here** to view recommended recovery points.
+
+ :::image type="content" source="./media/use-archive-tier-support/view-old-virtual-machine-recovery-points-inline.png" alt-text="Screenshot showing the process to view recovery points for virtual machines that are older than 7 days." lightbox="./media/use-archive-tier-support/view-old-virtual-machine-recovery-points-expanded.png":::
+
+1. Click **Move recovery points to archive** to move all the recommended recovery points to the Archive tier.
+
+ :::image type="content" source="./media/use-archive-tier-support/move-all-virtual-machine-recovery-points-to-vault-inline.png" alt-text="Screenshot showing the option to start the move process of all recovery points for virtual machines to the Vault-archive tier." lightbox="./media/use-archive-tier-support/move-all-virtual-machine-recovery-points-to-vault-expanded.png":::
+
+>[!Note]
+>To ensure cost savings, you need to move all the recommended recovery points to the Vault-archive tier. To verify, follow steps 1 and 2 again; if the list of recovery points is empty, all the recommended recovery points have been moved to the Vault-archive tier.
+## Restore
+
+To restore the recovery points that are moved to archive, you need to add the required parameters for rehydration duration and rehydration priority.
++
+## View jobs
++
+## View Archive usage in the vault dashboard
+
+You can also view the archive usage in the vault dashboard.
++
+## Next steps
+
+- Use Archive tier support via [PowerShell](?pivots=client-powershelltier)/[CLI](?pivots=client-clitier).
+- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md)
+ ::: zone pivot="client-powershelltier"
-This article provides the procedure to backup of long-term retention points in the archive tier, and snapshots and the Standard tier using PowerShell.
+This article describes how to back up long-term retention points in the Archive tier, and snapshots in the Standard tier, using PowerShell.
## Supported workloads | Workloads | Operations | | | |
-| Azure Virtual Machines (Preview) <br><br> SQL Server in Azure Virtual Machines | <ul><li>View Archivable Recovery Points </li><li>View Recommended Recovery Points (Only for Virtual Machines) </li><li>Move Archivable Recovery Points </li><li>Move Recommended Recovery Points (Only for Azure Virtual Machines) </li><li>View Archived Recovery points </li><li>Restore from archived recovery points </li></ul> |
+| Azure Virtual Machines <br><br> SQL Server in Azure Virtual Machines | View archivable recovery points. <br><br> View recommended recovery points (only for Virtual Machines). <br><br> Move archivable recovery points. <br><br> Move recommended recovery points (only for Azure Virtual Machines). <br><br> View archived recovery points. <br><br> Restore from archived recovery points. |
## Get started
Therefore, Azure Backup provides a recommended set of recovery points that might
>[!NOTE] >- The cost savings depend on various factors and might not be the same for every instance.
->- Cost savings are ensured only when you move all recovery points contained in the recommendation set to the vault-archive tier.
+>- Cost savings are ensured only when you move all recovery points contained in the recommendation set to the Vault-archive tier.
```azurepowershell $RecommendedRecoveryPointList = Get-AzRecoveryServicesBackupRecommendedArchivableRPGroup -Item $bckItm -VaultId $vault.ID
For recovery points in archive, Azure Backup provides an integrated restore meth
The integrated restore is a two-step process. 1. Rehydrate the recovery points stored in archive.
-2. Temporarily store it in the vault-standard tier for a duration (also known as the rehydration duration) ranging from a period of 10 to 30 days. The default is 15 days. There are two different priorities of rehydration ΓÇô Standard and High priority. Learn more about [rehydration priority](../storage/blobs/archive-rehydrate-overview.md#rehydration-priority).
+1. Temporarily store them in the Vault-standard tier for a duration (known as the rehydration duration) of 10 to 30 days. The default is 15 days. There are two rehydration priorities: Standard and High. Learn more about [rehydration priority](../storage/blobs/archive-rehydrate-overview.md#rehydration-priority).
>[!NOTE] > >- The rehydration duration, once selected, can't be changed, and the rehydrated recovery points stay in the Standard tier for that duration. >- The additional rehydration step incurs cost.
-For more information about the various restore methods for Azure virtual machines, see [Restore an Azure VM with PowerShell](backup-azure-vms-automation.md#restore-an-azure-vm).
+For more information about various restore methods for Azure Virtual Machines, see [Restore an Azure VM with PowerShell](backup-azure-vms-automation.md#restore-an-azure-vm).
```azurepowershell Restore-AzRecoveryServicesBackupItem -VaultLocation $vault.Location -RehydratePriority "Standard" -RehydrateDuration 15 -RecoveryPoint $rp -StorageAccountName "SampleSA" -StorageAccountResourceGroupName "SArgName" -TargetResourceGroupName $vault.ResourceGroupName -VaultId $vault.ID
To view the move and restore jobs, use the following PowerShell cmdlet:
Get-AzRecoveryServicesBackupJob -VaultId $vault.ID ```
-## Move recovery points to archive tier at scale
+## Move recovery points to Archive tier at scale
You can now use sample scripts to perform at-scale operations. [Learn more](https://github.com/hiaga/Az.RecoveryServices) about how to run the sample scripts. You can download the scripts from [here](https://github.com/hiaga/Az.RecoveryServices). You can perform the following operations using the sample scripts provided by Azure Backup: -- Move all eligible recovery points for a particular database/all databases for a SQL server in Azure VM to the archive tier.-- Move all recommended recovery points for a particular Azure Virtual Machine to the archive tier.
+- Move all eligible recovery points for a particular database/all databases for a SQL server in Azure VM to Archive tier.
+- Move all recommended recovery points for a particular Azure Virtual Machine to Archive tier.
You can also write a script as per your requirements or modify the above sample scripts to fetch the required backup items.
+## Next steps
+
+- Use Archive tier support via [Azure portal](?pivots=client-portaltier)/[CLI](?pivots=client-clitier).
+- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md)
+ ::: zone-end ::: zone pivot="client-clitier"
-This article provides the procedure to backup of long-term retention points in the archive tier, and snapshots and the Standard tier using command-line interface (CLI).
+This article describes how to back up long-term retention points in the Archive tier, and snapshots in the Standard tier, using the Azure command-line interface (CLI).
## Supported workloads | Workloads | Operations | | | |
-| Azure Virtual Machines (Preview) <br><br> SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | <ul><li> View Archivable Recovery Points </li><li>View Recommended Recovery Points (Only for Virtual Machines) </li><li>Move Archivable Recovery Points </li><li>Move Recommended Recovery Points (Only for Azure Virtual Machines) </li><li>View Archived Recovery points </li><li>Restore from archived recovery points </li></ul> |
+| Azure Virtual Machines <br><br> SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | View archivable recovery points. <br><br> View recommended recovery points (only for Virtual Machines). <br><br> Move archivable recovery points. <br><br> Move recommended recovery points (only for Azure Virtual Machines). <br><br> View archived recovery points. <br><br> Restore from archived recovery points. |
## Get started
This article provides the procedure to backup of long-term retention points in t
``` ## View archivable recovery points
-You can move the archivable recovery points to the vault-archive tier using the following commands. [Learn more](archive-tier-support.md#supported-workloads) about the eligibility criteria.
+You can move the archivable recovery points to the Vault-archive tier using the following commands. [Learn more](archive-tier-support.md#supported-workloads) about the eligibility criteria.
- **For Azure Virtual Machines**
az backup recoverypoint list -g {rg} -v {vault} -c {container} -i {item} --backu
>[!Note] >- Cost savings depend on various factors and might not be the same for every instance.
->- You can ensure cost savings only when all the recovery points contained in the recommendation set is moved to the vault-archive tier.
+>- You can ensure cost savings only when all the recovery points contained in the recommendation set are moved to the Vault-archive tier.
## Move to archive
-You can move archivable recovery points to the vault-archive tier using the following commands. The name parameter in the command should contain the name of an archivable recovery point.
+You can move archivable recovery points to the Vault-archive tier using the following commands. The name parameter in the command should contain the name of an archivable recovery point.
- **For Azure Virtual Machine**
Run the following commands:
az backup restore restore-azurewl -g {rg} -v {vault} --recovery-config {recov_config} --rehydration-priority {Standard / High} --rehydration-duration {rehyd_dur} ``` ----
-This article provides the procedure to backup of long-term retention points in the archive tier, and snapshots and the Standard tier using Azure portal.
-
-## Supported workloads
-
-| Workloads | Operations |
-| | |
-| Azure Virtual Machine | <ul><li>View Archived Recovery Points </li><li>Restore for Archived Recovery points </li><li>View Archive move and Restore Jobs </li></ul> |
-| SQL Server in Azure Virtual Machine/ <br> SAP HANA in Azure Virtual Machines | <ul><li>View Archived Recovery Points </li><li>Move all archivable recovery to archive </li><li>Restore from Archived recovery points </li><li>View Archive move and Restore jobs</li></ul> |
-
-## View archived recovery points
-
-You can now view all the recovery points that have are moved to archive.
---
-## Move archivable recovery points for a particular SQL/SAP HANA database
-
-You can now move all recovery points for a particular SQL/SAP HANA database at one go.
-
-Follow these steps:
-
-1. Select the backup Item (database in SQL Server or SAP HANA in Azure VM) whose recovery points you want to move to the vault-archive tier.
-
-2. Select **click here** to view recovery points that are older than 7 days.
-
- :::image type="content" source="./media/use-archive-tier-support/view-old-recovery-points-inline.png" alt-text="Screenshot showing the process to view recovery points that are older than 7 days." lightbox="./media/use-archive-tier-support/view-old-recovery-points-expanded.png":::
-
-3. To view all eligible archivable points to be moved to archive, select _Long term retention points can be moved to archive. To move all 'eligible recovery points' to archive tier, click here_.
-
- :::image type="content" source="./media/use-archive-tier-support/view-all-eligible-archivable-points-for-move-inline.png" alt-text="Screenshot showing the process to view all eligible archivable points to be moved to archive." lightbox="./media/use-archive-tier-support/view-all-eligible-archivable-points-for-move-expanded.png":::
-
- All archivable recovery points appear.
--
- [Learn more](archive-tier-support.md#supported-workloads) about eligibility criteria.
-
-3. Click **Move Recovery Points to archive** to move all recovery points to the vault-archive tier.
-
- :::image type="content" source="./media/use-archive-tier-support/move-all-recovery-points-to-vault-inline.png" alt-text="Screenshot showing the option to start the move process of all recovery points to the vault-archive tier." lightbox="./media/use-archive-tier-support/move-all-recovery-points-to-vault-expanded.png":::
-
- >[!Note]
- >This option moves all the archivable recovery points to vault-archive.
-
-You can monitor the progress in backup jobs.
-
-## Restore
-
-To restore the recovery points that are moved to archive, you need to add the required parameters for rehydration duration and rehydration priority.
--
-## View jobs
--
-## View Archive Usage in Vault Dashboard
-
-You can also view the archive usage in the vault dashboard.
-
+## Next steps
+- Use Archive tier support via [Azure portal](?pivots=client-portaltier)/[PowerShell](?pivots=client-powershelltier).
+- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md)
::: zone-end
-## Next steps
-- [Troubleshoot Archive tier errors](troubleshoot-archive-tier.md)
cognitive-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/intents.md
Previously updated : 01/06/2022 Last updated : 03/21/2022 # Intents
The **None** intent is not included in the balance. That intent should contain
### Intent limits
-Review the [limits](../luis-limits.md#model-boundaries) to understand how many intents you can add to a model.
+Review the [limits](../luis-limits.md) to understand how many intents you can add to a model.
> [!Tip] > If you need more than the maximum number of intents, consider whether your system is using too many intents and determine whether multiple intents can be combined into a single intent with entities.
cognitive-services Luis Concept Data Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-conversion.md
Previously updated : 07/29/2019 Last updated : 03/21/2022 # Convert data format of utterances
-LUIS provides the following conversions of a user utterance before prediction"
+LUIS provides the following conversions of a user utterance before prediction.
* Speech to text using [Cognitive Services Speech](../Speech-Service/overview.md) service.
Conversion of speech to text in LUIS allows you to send spoken utterances to an
You do not need to create a **Bing Speech API** key for this integration. A **Language Understanding** key created in the Azure portal works for this integration. Do not use the LUIS starter key. ### Pricing Tier
-This integration uses a different [pricing](luis-limits.md#key-limits) model than the usual Language Understanding pricing tiers.
+This integration uses a different [pricing](luis-limits.md#resource-usage-and-limits) model than the usual Language Understanding pricing tiers.
### Quota usage
-See [Key limits](luis-limits.md#key-limits) for information.
+See [Key limits](luis-limits.md#resource-usage-and-limits) for information.
## Next steps
cognitive-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-extraction.md
Previously updated : 10/28/2021 Last updated : 03/21/2022 # Extract data from utterance text with intents and entities
LUIS extracts data from the user's utterance at the published [endpoint](luis-gl
`https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/<appID>/slots/<slot-type>/predict?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&query=book 2 tickets to paris`
-The `appID` is available on the **Settings** page of your LUIS app as well as part of the URL (after `/apps/`) when you're editing that LUIS app. The `subscription-key` is the endpoint key used for querying your app. While you can use your free authoring/starter key while you're learning LUIS, it is important to change the endpoint key to a key that supports your [expected LUIS usage](luis-limits.md#key-limits). The `timezoneOffset` unit is minutes.
+The `appID` is available on the **Settings** page of your LUIS app as well as part of the URL (after `/apps/`) when you're editing that LUIS app. The `subscription-key` is the endpoint key used for querying your app. While you can use your free authoring/starter key while you're learning LUIS, it is important to change the endpoint key to a key that supports your [expected LUIS usage](luis-limits.md#resource-usage-and-limits). The `timezoneOffset` unit is minutes.
The **HTTPS response** contains all the intent and entity information LUIS can determine based on the current published model of either the staging or production endpoint. The endpoint URL is found on the [LUIS](luis-reference-regions.md) website, in the **Manage** section, on the **Keys and endpoints** page.
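As a minimal sketch of the query described above (assuming an app published to the production slot in the `westus` region, and the third-party `requests` package), the prediction endpoint can be called like this; the app ID and endpoint key are placeholders:

```python
import requests

# Placeholders: substitute your own app ID and endpoint key.
app_id = "<YOUR-APP-ID>"
endpoint_key = "<YOUR-ENDPOINT-KEY>"

# URL pattern from the example above (v3.0-preview, production slot, westus).
url = f"https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/{app_id}/slots/production/predict"

params = {
    "subscription-key": endpoint_key,
    "verbose": "true",
    "timezoneOffset": "0",  # unit is minutes
    "query": "book 2 tickets to paris",
}

response = requests.get(url, params=params)
response.raise_for_status()

prediction = response.json()["prediction"]
print(prediction["topIntent"])  # highest-scoring intent
print(prediction["entities"])   # extracted entity data
```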
cognitive-services Luis Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-glossary.md
description: The glossary explains terms that you might encounter as you work wi
Previously updated : 10/28/2021 Last updated : 03/21/2022 # Language understanding glossary of common vocabulary and concepts
Authoring is the ability to create, manage and deploy a LUIS app, either using t
### Authoring Key
-The [authoring key](luis-how-to-azure-subscription.md) is used to author the app. Not used for production-level endpoint queries. For more information, see [Key limits](luis-limits.md#key-limits).
+The [authoring key](luis-how-to-azure-subscription.md) is used to author the app. It is not used for production-level endpoint queries. For more information, see [resource limits](luis-limits.md#resource-usage-and-limits).
### Authoring Resource
cognitive-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md
Previously updated : 07/12/2021 Last updated : 03/21/2022 ms.devlang: azurecli
Use the [Azure CLI](/cli/azure/install-azure-cli) to create each resource indivi
az cognitiveservices account create -n my-luis-authoring-resource -g my-resource-group --kind LUIS.Authoring --sku F0 -l westus --yes ```
-3. Create a LUIS prediction endpoint resource of kind `LUIS`, named `my-luis-prediction-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. If you want higher throughput than the free tier provides, change `F0` to `S0`. [Learn more about pricing tiers and throughput.](luis-limits.md#key-limits)
+3. Create a LUIS prediction endpoint resource of kind `LUIS`, named `my-luis-prediction-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. If you want higher throughput than the free tier provides, change `F0` to `S0`. [Learn more about pricing tiers and throughput.](luis-limits.md#resource-usage-and-limits)
```azurecli az cognitiveservices account create -n my-luis-prediction-resource -g my-resource-group --kind LUIS --sku F0 -l westus --yes
To change the ownership of a resource, you can take one of these actions:
You can create as many as 10 authoring keys per region, per subscription. Publishing regions are different from authoring regions. Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located. For information on how authoring regions map to publishing regions, see [Authoring and publishing regions](luis-reference-regions.md).
-For more information on key limits, see [key limits](luis-limits.md#key-limits).
+See [resource limits](luis-limits.md#resource-usage-and-limits) for more information.
### Errors for key usage limits
cognitive-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-limits.md
description: This article contains the known limits of Azure Cognitive Services
Previously updated : 01/07/2022 Last updated : 03/21/2022 + # Limits for your LUIS model and keys
-LUIS has several limit areas. The first is the [model limit](#model-limits), which controls intents, entities, and features in LUIS. The second area is [quota limits](#key-limits) based on key type. A third area of limits is the [keyboard combination](#keyboard-controls) for controlling the LUIS website. A fourth area is the [world region mapping](luis-reference-regions.md) between the LUIS authoring website and the LUIS [endpoint](luis-glossary.md#endpoint) APIs.
-<a name="model-boundaries"></a>
+LUIS has several limit areas. The first is the [model limit](#model-limits), which controls intents, entities, and features in LUIS. The second area is [quota limits](#resource-usage-and-limits) based on resource type. A third area of limits is the [keyboard combination](#keyboard-controls) for controlling the LUIS website. A fourth area is the [world region mapping](luis-reference-regions.md) between the LUIS authoring website and the LUIS [endpoint](luis-glossary.md#endpoint) APIs.
## Model limits If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](luis-concept-enterprise.md#dispatch-tool-and-model) app or using a [LUIS container](luis-container-howto.md).
-|Area|Limit|
-|--|:--|
-| [App name][luis-get-started-create-app] | *Default character max |
-| Applications| 500 applications per Azure authoring resource |
-| [Batch testing][batch-testing]| 10 datasets, 1000 utterances per dataset|
-| Explicit list | 50 per application|
+| Area | Limit |
+| |: |
+| [App name][luis-get-started-create-app] | \*Default character max |
+| Applications | 500 applications per Azure authoring resource |
+| [Batch testing][batch-testing] | 10 datasets, 1000 utterances per dataset |
+| Explicit list | 50 per application |
| External entities | no limits |
-| [Intents][intents]|500 per application: 499 custom intents, and the required _None_ intent.<br>[Dispatch-based](https://aka.ms/dispatch-tool) application has corresponding 500 dispatch sources.|
-| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is *default character max. Synonym values have no length restriction. |
-| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role|A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. An example is a composite with a simple entity, which has 2 roles is: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level.|
-|Model as a feature| Maximum number of models that can be used as a feature to a specific model to be 10 models. The maximum number of phrase lists used as a feature for a specific model to be 10 phrase lists.|
-| [Preview - Dynamic list entities](./luis-migration-api-v3.md)|2 lists of ~1k per query prediction endpoint request|
-| [Patterns](luis-concept-patterns.md)|500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern|
-| [Pattern.any](concepts/entities.md)|100 per application, 3 pattern.any entities per pattern |
-| [Phrase list][phrase-list]|500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases.|
-| [Prebuilt entities](./howto-add-prebuilt-models.md) | no limit|
-| [Regular expression entities](concepts/entities.md)|20 entities<br>500 character max. per regular expression entity pattern|
-| [Roles](concepts/entities.md)|300 roles per application. 10 roles per entity|
-| [Utterance][utterances] | 500 characters<br><br>If you have text longer than this character limit, you need to segment the utterance prior to input to LUIS and you will receive individual intent responses per segment. There are obvious breaks you can work with, such as punctuation marks and long pauses in speech.|
+| [Intents][intents] | 500 per application: 499 custom intents, and the required _None_ intent.<br>[Dispatch-based](https://aka.ms/dispatch-tool) application has corresponding 500 dispatch sources. |
+| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is \*default character max. Synonym values have no length restriction. |
+| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role | A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite entity that contains one simple entity with 2 roles counts as: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level. |
+| Model as a feature | The maximum number of models that can be used as a feature for a specific model is 10. The maximum number of phrase lists used as a feature for a specific model is 10. |
+| [Preview - Dynamic list entities](./luis-migration-api-v3.md) | 2 lists of \~1k per query prediction endpoint request |
+| [Patterns](luis-concept-patterns.md) | 500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern |
+| [Pattern.any](concepts/entities.md) | 100 per application, 3 pattern.any entities per pattern |
+| [Phrase list][phrase-list] | 500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases. |
+| [Prebuilt entities](./howto-add-prebuilt-models.md) | no limit |
+| [Regular expression entities](concepts/entities.md) | 20 entities<br>500 character max. per regular expression entity pattern |
+| [Roles](concepts/entities.md) | 300 roles per application. 10 roles per entity |
+| [Utterance][utterances] | 500 characters<br><br>If you have text longer than this character limit, you need to segment the utterance prior to input to LUIS and you will receive individual intent responses per segment (see the sketch after this table). There are obvious breaks you can work with, such as punctuation marks and long pauses in speech. |
| [Utterance examples][utterances] | 15,000 per application - there is no limit on the number of utterances per intent<br><br>If you need to train the application with more examples, use a [dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) model approach. You train individual LUIS apps (known as child apps to the parent dispatch app) with one or more intents and then train a dispatch app that samples from each child LUIS app's utterances to direct the prediction request to the correct child app. |
-| [Versions](./luis-concept-app-iteration.md)| 100 versions per application |
+| [Versions](./luis-concept-app-iteration.md) | 100 versions per application |
| [Version name][luis-how-to-manage-versions] | 128 characters |
-*Default character max is 50 characters.
-
-<a name="intent-and-entity-naming"></a>
+\*Default character max is 50 characters.
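As a minimal sketch of the segmentation mentioned in the utterance row above (the punctuation-based splitting heuristic and the helper name `segment_utterance` are illustrative assumptions, not part of LUIS), long text can be broken into pieces that fit the 500-character limit before each piece is sent for prediction:

```python
import re

def segment_utterance(text, limit=500):
    """Split text at sentence-ending punctuation so each segment fits the limit."""
    segments, current = [], ""
    for piece in re.split(r"(?<=[.!?])\s+", text.strip()):
        # Hard-split any single sentence that exceeds the limit on its own.
        while len(piece) > limit:
            segments.append(piece[:limit])
            piece = piece[limit:]
        if len(current) + len(piece) + 1 <= limit:
            current = (current + " " + piece).strip()
        else:
            segments.append(current)
            current = piece
    if current:
        segments.append(current)
    return segments

# Each segment is then sent as its own prediction request,
# and each yields its own intent response.
```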
## Name uniqueness Object names must be unique when compared to other objects of the same level.
-|Objects|Restrictions|
-|--|--|
-|Intent, entity|All intent and entity names must be unique in a version of an app.|
-|ML entity components|All machine-learning entity components (child entities) must be unique, within that entity for components at the same level.|
-|Features | All named features, such as phrase lists, must be unique within a version of an app.|
-|Entity roles|All roles on an entity or entity component must be unique when they are at the same entity level (parent, child, grandchild, etc.).|
+| Objects | Restrictions |
+| | |
+| Intent, entity | All intent and entity names must be unique in a version of an app. |
+| ML entity components | All machine-learning entity components (child entities) must be unique, within that entity for components at the same level. |
+| Features | All named features, such as phrase lists, must be unique within a version of an app. |
+| Entity roles | All roles on an entity or entity component must be unique when they are at the same entity level (parent, child, grandchild, etc.). |
## Object naming Do not use the following characters in the following names.
-|Object|Exclude characters|
-|--|--|
-|Intent, entity, and role names|`:`<br>`$` <br> `&`|
-|Version name|`\`<br> `/`<br> `:`<br> `?`<br> `&`<br> `=`<br> `*`<br> `+`<br> `(`<br> `)`<br> `%`<br> `@`<br> `$`<br> `~`<br> `!`<br> `#`|
+| Object | Exclude characters |
+| | |
+| Intent, entity, and role names | `:`, `$`, `&`, `%`, `*`, `(`, `)`, `+`, `?`, `~` |
+| Version name | `\`, `/`, `:`, `?`, `&`, `=`, `*`, `+`, `(`, `)`, `%`, `@`, `$`, `~`, `!`, `#` |
## Resource usage and limits Language Understanding has separate resources: one type for authoring, and one type for querying the prediction endpoint. To learn more about the differences between key types, see [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md).
-<a name="key-limits"></a>
- ### Authoring resource limits Use the _kind_, `LUIS.Authoring`, when filtering resources in the Azure portal. LUIS limits 500 applications per Azure authoring resource.
-|Authoring resource|Authoring TPS|
-|--|--|
-|F0 - Free tier |1 million/month, 5/second|
+| Authoring resource | Authoring TPS |
+| | |
+| F0 - Free tier | 1 million/month, 5/second |
* TPS = Transactions per second
Use the _kind_, `LUIS.Authoring`, when filtering resources in the Azure portal.
Use the _kind_, `LUIS`, when filtering resources in the Azure portal. The LUIS query prediction endpoint resource, used at runtime, is only valid for endpoint queries.
-|Query Prediction resource|Query TPS|
-|--|--|
-|F0 - Free tier |10 thousand/month, 5/second|
-|S0 - Standard tier|50/second|
+| Query Prediction resource | Query TPS |
+| | |
+| F0 - Free tier | 10 thousand/month, 5/second |
+| S0 - Standard tier | 50/second |
### Sentiment analysis
Use the _kind_, `LUIS`, when filtering resources in the Azure portal.The LUIS qu
## Keyboard controls
-|Keyboard input | Description |
-|--|--|
-|Control+E|switches between tokens and entities on utterances list|
+| Keyboard input | Description |
+| | |
+| Control+E | switches between tokens and entities on utterances list |
## Website sign-in time period Your sign-in access is for **60 minutes**. After this time period, you will get an error and need to sign in again.
-[luis-get-started-create-app]: ./luis-get-started-create-app.md
-[batch-testing]: ./luis-interactive-test.md#batch-testing
-[intents]: ./luis-concept-intent.md
-[phrase-list]: ./concepts/patterns-features.md
-[utterances]: ./concepts/utterances.md
-[luis-how-to-manage-versions]: ./luis-how-to-manage-versions.md
-[pricing]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/
<!-- TBD: fix this link -->
-[speech-to-intent-pricing]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/
+[BATCH-TESTING]: ./luis-interactive-test.md#batch-testing
+[INTENTS]: ./luis-concept-intent.md
+[LUIS-GET-STARTED-CREATE-APP]: ./luis-get-started-create-app.md
+[LUIS-HOW-TO-MANAGE-VERSIONS]: ./luis-how-to-manage-versions.md
+[PHRASE-LIST]: ./concepts/patterns-features.md
+[PRICING]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/
+[UTTERANCES]: ./concepts/utterances.md
cognitive-services Luis Migration Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-migration-authoring.md
Previously updated : 05/28/2021 Last updated : 03/21/2022 # Migrate to an Azure resource authoring key
Migration has to be done from the [LUIS portal](https://www.luis.ai). If you cre
> [!Note] > * If you need to create a prediction runtime resource, there's [a separate process](luis-how-to-azure-subscription.md#create-luis-resources) to create it. > * See the [migration notes](#migration-notes) section below for information on how your applications and contributors will be affected.
-> * Authoring your LUIS app is free, as indicated by the F0 tier. Learn [more about pricing tiers](luis-limits.md#key-limits).
+> * Authoring your LUIS app is free, as indicated by the F0 tier. Learn [more about pricing tiers](luis-limits.md#resource-usage-and-limits).
## Migration prerequisites
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
All Cognitive Services containers require three primary elements:
> [!IMPORTANT] >
-> * Subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+> * Keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
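As a hedged illustration of the Key Vault guidance above, the sketch below retrieves the key at runtime instead of hard-coding it. It assumes the `azure-identity` and `azure-keyvault-secrets` packages; the vault URL and the secret name `translator-key` are placeholders for this example:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; the secret name "translator-key" is an assumption.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<YOUR-VAULT-NAME>.vault.azure.net",
    credential=credential,
)

# Fetch the key at runtime rather than storing it in code or config files.
translator_key = client.get_secret("translator-key").value
```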
## Host computer
cognitive-services How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-manage-settings.md
Title: How to manage settings? - Custom Translator
-description: How to manage settings, create workspace, share workspace, and manage subscription key in Custom Translator.
+description: How to manage settings, create workspace, share workspace, and manage key in Custom Translator.
Last updated 12/06/2021
-#Customer intent: As a Custom Translator user, I want to understand how to manage settings, so that I can create workspace, share workspace, and manage subscription key in Custom Translator.
+#Customer intent: As a Custom Translator user, I want to understand how to manage settings, so that I can create workspace, share workspace, and manage key in Custom Translator.
# How to manage settings
-Within the Custom Translator settings page, you can share your workspace, modify your Translator subscription key, and delete workspace.
+Within the Custom Translator settings page, you can share your workspace, modify your Translator key, and delete the workspace.
To access the settings page:
To access the settings page:
## Associating Translator Subscription
-You need to have a Translator subscription key associated with your workspace to train or deploy models.
+You need to have a Translator key associated with your workspace to train or deploy models.
If you don't have a subscription, follow the steps below:
If you don't have a subscription, follow the steps below:
![Create new workspace dialog](media/how-to/create-new-workspace-dialog.png) >[!Note]
->Custom Translator does not support creating workspace for Translator Text API resource (a.k.a. Azure subscription key) that was created inside [Enabled VNET](../../../api-management/api-management-using-with-vnet.md).
+>Custom Translator does not support creating a workspace for a Translator Text API resource (also known as an Azure key) that was created inside an [Enabled VNET](../../../api-management/api-management-using-with-vnet.md).
### Modify existing key 1. Navigate to the "Settings" page for your workspace. 2. Select **Change Key**.
- ![How to add subscription key](media/how-to/how-to-add-subscription-key.png)
+ ![How to add key](media/how-to/how-to-add-subscription-key.png)
3. In the dialog, enter the key for your Translator subscription, then select the **Save** button.
- ![How to add subscription key dialog](media/how-to/how-to-add-subscription-key-dialog.png)
+ ![How to add key dialog](media/how-to/how-to-add-subscription-key-dialog.png)
## Manage your workspace
In Custom Translator you can share your workspace with others, if different part
1. **Reader:** A reader in the workspace will be able to view all information in the workspace.
-2. **Editor:** An editor in the workspace will be able to add documents, train models, and delete documents and projects. They can add a subscription key, but can't modify who the workspace is shared with, delete the workspace, or change the workspace name.
+2. **Editor:** An editor in the workspace will be able to add documents, train models, and delete documents and projects. They can add a key, but can't modify who the workspace is shared with, delete the workspace, or change the workspace name.
3. **Owner:** An owner has full permissions to the workspace.
cognitive-services How To View System Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md
To update deployment settings:
## Next steps - Start using your deployed custom translation model via [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl).-- Learn [how to manage settings](how-to-manage-settings.md) to share your workspace, manage subscription key.
+- Learn [how to manage settings](how-to-manage-settings.md) to share your workspace and manage your key.
cognitive-services Quickstart Build Deploy Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart-build-deploy-custom-model.md
This article provides step-by-step instructions to build a translation system wi
Portal, you will need a [Microsoft account](https://signup.live.com) or [Azure AD account](../../../active-directory/fundamentals/active-directory-whatis.md) (organization account hosted on Azure) to sign in.
-2. A subscription to the Translator Text API via the Azure portal. You will need the Translator Text API subscription key to associate with your workspace in Custom Translator. See [how to sign up for the Translator Text API](../translator-how-to-signup.md).
+2. A subscription to the Translator Text API via the Azure portal. You will need the Translator Text API key to associate with your workspace in Custom Translator. See [how to sign up for the Translator Text API](../translator-how-to-signup.md).
3. When you have both of the above, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai) portal to create workspaces, projects, upload files and create/deploy models.
If you are first-time user, you will be asked to agree to the Terms of Service t
![Create workspace image 5](media/quickstart/create-workspace-5.png) ![Create workspace image 6](media/quickstart/create-workspace-6.png)
-On subsequent visits to the Custom Translator portal, navigate to the Settings page where you can manage your workspace, create more workspaces, associate your Microsoft Translator Text API subscription key with your workspaces, add co-owners, and change a subscription key.
+On subsequent visits to the Custom Translator portal, navigate to the Settings page where you can manage your workspace, create more workspaces, associate your Microsoft Translator Text API key with your workspaces, add co-owners, and change your key.
## Create a project
cognitive-services Client Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/client-sdks.md
using System;
using System.Threading; ```
-In the application's **Program** class, create variable for your subscription key and custom endpoint. For details, *see* [Custom domain name and subscription key](get-started-with-document-translation.md#custom-domain-name-and-subscription-key)
+In the application's **Program** class, create variables for your key and custom endpoint. For details, *see* [Custom domain name and key](get-started-with-document-translation.md#your-custom-domain-name-and-key)
```csharp private static readonly string endpoint = "<your custom endpoint>";
-private static readonly string subscriptionKey = "<your subscription key>";
+private static readonly string key = "<your key>";
``` ### Translate a document or batch files
public void StartTranslation() {
Uri sourceUri = new Uri("<sourceUrl>"); Uri targetUri = new Uri("<targetUrl>");
- DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(subscriptionKey));
+ DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(key));
DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, "es")
Create a new Python application in your preferred editor or IDE. Then import the
from azure.ai.translation.document import DocumentTranslationClient ```
-Create variables for your resource subscription key, custom endpoint, sourceUrl, and targetUrl. For
-more information, *see* [Custom domain name and subscription key](get-started-with-document-translation.md#custom-domain-name-and-subscription-key)
+Create variables for your resource key, custom endpoint, sourceUrl, and targetUrl. For
+more information, *see* [Custom domain name and key](get-started-with-document-translation.md#your-custom-domain-name-and-key)
```python
- subscriptionKey = "<your-subscription-key>"
+ key = "<your-key>"
endpoint = "<your-custom-endpoint>" sourceUrl = "<your-container-sourceUrl>" targetUrl = "<your-container-targetUrl>"
more information, *see* [Custom domain name and subscription key](get-started-w
### Translate a document or batch files ```python
-client = DocumentTranslationClient(endpoint, AzureKeyCredential(subscriptionKey))
+client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
poller = client.begin_translation(sourceUrl, targetUrl, "fr") result = poller.result()
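# A minimal follow-up sketch (an illustrative addition, assuming the `poller` and
# `result` objects from the call above): each item in `result` is a DocumentStatus,
# so per-document outcomes can be inspected once the operation completes.
print(f"Overall status: {poller.status()}")
for document in result:
    print(f"Document {document.id}: {document.status}")
    if document.status == "Succeeded":
        print(f"  Translated document: {document.translated_document_url}")
    elif document.error:
        print(f"  Error {document.error.code}: {document.error.message}")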
That's it! You've created a program to translate documents in a blob container u
[python-dt-product-docs]: overview.md [python-dt-samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-document/samples -+
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
> [!NOTE] >
-> * Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service subscription key or a single-service subscription key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+> * Generally, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service key or a single-service key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
> > * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing – Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). >
To get started, you'll need:
1. After your resource has successfully deployed, select **Go to resource**.
-## Custom domain name and subscription key
+## Your custom domain name and key
> [!IMPORTANT] >
The **NAME-OF-YOUR-RESOURCE** (also called *custom domain name*) parameter is th
:::image type="content" source="../media/instance-details.png" alt-text="Image of the Azure portal, create resource, instant details, name field.":::
-### Get your subscription key
+### Get your key
Requests to the Translator service require a read-only key for authenticating access. 1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page. 1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your subscription key in a convenient location, such as *Microsoft Notepad*.
+1. Copy and paste your key in a convenient location, such as *Microsoft Notepad*.
1. You'll paste it into the code below to authenticate your request to the Document Translation service. ## Create Azure blob storage containers
The following headers are included with each Document Translator API request:
|HTTP header|Description| ||--|
-|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Cognitive Services resource.|
|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are application/json or charset=UTF-8.| ### POST request body properties
The following headers are included with each Document Translator API request:
* Create a new project. * Replace Program.cs with the C# code shown below.
-* Set your endpoint, subscription key, and container URL values in Program.cs.
+* Set your endpoint, key, and container URL values in Program.cs.
* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/). * Run the program from the project directory.
The following headers are included with each Document Translator API request:
* Create a new Node.js project. * Install the Axios library with `npm i axios`. * Copy/paste the code below into your project.
-* Set your endpoint, subscription key, and container URL values.
+* Set your endpoint, key, and container URL values.
* Run the program. ### [Python](#tab/python) * Create a new project. * Copy and paste the code from one of the samples into your project.
-* Set your endpoint, subscription key, and container URL values.
+* Set your endpoint, key, and container URL values.
* Run the program. For example: `python translate.py`. ### [Java](#tab/java)
gradle init --type basic
} ```
-* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your subscription key and endpoint.
+* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your key and endpoint.
* **Build and run the sample from the root directory**:
gradle run
* Create a new Go project. * Add the code provided below.
-* Set your endpoint, subscription key, and container URL values.
+* Set your endpoint, key, and container URL values.
* Save the file with a '.go' extension. * Open a command prompt on a computer with Go installed. * Build the file. For example: 'go build example-code.go'.
gradle run
> You may need to update the following fields, depending upon the operation: >>> >> * `endpoint`
->> * `subscriptionKey`
+>> * `key`
>> * `sourceURL` >> * `targetURL` >> * `glossaryURL`
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ private static readonly string key = "<YOUR-KEY>";
static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\",\"filter\":{\"prefix\": \"Demo_1/\"} }, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Content = content; HttpResponseMessage response = await client.SendAsync(request);
const axios = require('axios').default;
let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0'; let route = '/batches';
-let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let key = '<YOUR-KEY>';
let data = JSON.stringify({"inputs": [ {
let config = {
baseURL: endpoint, url: route, headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Content-Type': 'application/json' }, data: data
axios(config)
import requests endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+key = '<YOUR-KEY>'
path = '/batches' constructed_url = endpoint + path
payload= {
] } headers = {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Content-Type': 'application/json' }
import java.util.*;
import com.squareup.okhttp.*; public class DocumentTranslation {
- String subscriptionKey = "'<YOUR-SUBSCRIPTION-KEY>'";
+ String key = "<YOUR-KEY>";
String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"; String path = endpoint + "/batches";
public class DocumentTranslation {
RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"filter\": {\n \"prefix\": \"Demo_1\"\n },\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}"); Request request = new Request.Builder() .url(path).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Content-type", "application/json") .build(); Response response = client.newCall(request).execute();
import (
func main() { endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+key := "<YOUR-KEY>"
uri := endpoint + "/batches" method := "POST" var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en","filter":{"prefix":"Demo_1/"}},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`) req, err := http.NewRequest(method, endpoint, bytes.NewBuffer(jsonStr))
-req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Content-Type", "application/json") client := &http.Client{}
class Program
static readonly string route = "/documents/formats";
- private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ private static readonly string key = "<YOUR-KEY>";
static async Task Main(string[] args) {
class Program
{ request.Method = HttpMethod.Get; request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
HttpResponseMessage response = await client.SendAsync(request);
class Program
const axios = require('axios'); let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let key = '<YOUR-KEY>';
let route = '/documents/formats'; let config = { method: 'get', url: endpoint + route, headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} };
import com.squareup.okhttp.*;
public class GetFileFormats {
- String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String key = "<YOUR-KEY>";
String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"; String url = endpoint + "/documents/formats"; OkHttpClient client = new OkHttpClient(); public void get() throws IOException { Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
Response response = client.newCall(request).execute(); System.out.println(response.body().string()); }
import http.client
host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com' parameters = '//translator/text/batch/v1.0/documents/formats'
-subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+key = '<YOUR-KEY>'
conn = http.client.HTTPSConnection(host) payload = '' headers = {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} conn.request("GET", parameters , payload, headers) res = conn.getresponse()
import (
func main() { endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ key := "<YOUR-KEY>"
uri := endpoint + "/documents/formats" method := "GET"
func main() {
fmt.Println(err) return }
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
res, err := client.Do(req) if err != nil {
class Program
static readonly string route = "/batches/{id}";
- private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ private static readonly string key = "<YOUR-KEY>";
static async Task Main(string[] args) {
class Program
{ request.Method = HttpMethod.Get; request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
HttpResponseMessage response = await client.SendAsync(request);
class Program
const axios = require('axios'); let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let key = '<YOUR-KEY>';
let route = '/batches/{id}'; let config = { method: 'get', url: endpoint + route, headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} };
import com.squareup.okhttp.*;
public class GetJobStatus {
- String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String key = "<YOUR-KEY>";
String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"; String url = endpoint + "/batches/{id}"; OkHttpClient client = new OkHttpClient(); public void get() throws IOException { Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
Response response = client.newCall(request).execute(); System.out.println(response.body().string()); }
import http.client
host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com' parameters = '//translator/text/batch/v1.0/batches/{id}'
-subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+key = '<YOUR-KEY>'
conn = http.client.HTTPSConnection(host) payload = '' headers = {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} conn.request("GET", parameters , payload, headers) res = conn.getresponse()
import (
func main() { endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ key := "<YOUR-KEY>"
uri := endpoint + "/batches/{id}" method := "GET"
func main() {
fmt.Println(err) return }
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
res, err := client.Do(req) if err != nil {
class Program
static readonly string route = "/{id}/document/{documentId}";
- private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ private static readonly string key = "<YOUR-KEY>";
static async Task Main(string[] args) {
class Program
{ request.Method = HttpMethod.Get; request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
HttpResponseMessage response = await client.SendAsync(request);
class Program
const axios = require('axios'); let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let key = '<YOUR-KEY>';
let route = '/{id}/document/{documentId}'; let config = { method: 'get', url: endpoint + route, headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} };
import com.squareup.okhttp.*;
public class GetDocumentStatus {
- String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String key = "<YOUR-KEY>";
String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"; String url = endpoint + "/{id}/document/{documentId}"; OkHttpClient client = new OkHttpClient(); public void get() throws IOException { Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
Response response = client.newCall(request).execute(); System.out.println(response.body().string()); }
import http.client
host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com' parameters = '//translator/text/batch/v1.0/{id}/document/{documentId}'
-subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+key = '<YOUR-KEY>'
conn = http.client.HTTPSConnection(host) payload = '' headers = {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} conn.request("GET", parameters , payload, headers) res = conn.getresponse()
import (
func main() { endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ key := "<YOUR-KEY>"
uri := endpoint + "/{id}/document/{documentId}" method := "GET"
func main() {
fmt.Println(err) return }
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
res, err := client.Do(req) if err != nil {
class Program
static readonly string route = "/batches/{id}";
- private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ private static readonly string key = "<YOUR-KEY>";
static async Task Main(string[] args) {
class Program
{ request.Method = HttpMethod.Delete; request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
HttpResponseMessage response = await client.SendAsync(request);
class Program
const axios = require('axios'); let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>';
+let key = '<YOUR-KEY>';
let route = '/batches/{id}'; let config = { method: 'delete', url: endpoint + route, headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} };
import com.squareup.okhttp.*;
public class DeleteJob {
- String subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>";
+ String key = "<YOUR-KEY>";
String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"; String url = endpoint + "/batches/{id}"; OkHttpClient client = new OkHttpClient(); public void get() throws IOException { Request request = new Request.Builder().url(
- url).method("DELETE", null).addHeader("Ocp-Apim-Subscription-Key", subscriptionKey).build();
+ url).method("DELETE", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
Response response = client.newCall(request).execute(); System.out.println(response.body().string()); }
import http.client
host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com' parameters = '//translator/text/batch/v1.0/batches/{id}'
-subscriptionKey = '<YOUR-SUBSCRIPTION-KEY>'
+key = '<YOUR-KEY>'
conn = http.client.HTTPSConnection(host) payload = '' headers = {
- 'Ocp-Apim-Subscription-Key': subscriptionKey
+ 'Ocp-Apim-Subscription-Key': key
} conn.request("DELETE", parameters , payload, headers) res = conn.getresponse()
import (
func main() { endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- subscriptionKey := "<YOUR-SUBSCRIPTION-KEY>"
+ key := "<YOUR-KEY>"
uri := endpoint + "/batches/{id}" method := "DELETE"
func main() {
fmt.Println(err) return }
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
res, err := client.Do(req) if err != nil {
Document Translation can't be used to translate secured documents such as those
||-|--| | 200 | OK | The request was successful. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource _not_ the **Cognitive Services** multi-service resource.
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource, _not_ the **Cognitive Services** multi-service resource.
| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. | | 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
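As a minimal sketch of handling these codes in client code (the resource name and key below are placeholders, mirroring the samples above):

```python
import requests

# Placeholders, following the samples above: substitute your resource name and key.
endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "<YOUR-KEY>"

response = requests.get(endpoint + "/documents/formats",
                        headers={"Ocp-Apim-Subscription-Key": key})

if response.status_code == 401:
    # Invalid key/token, wrong region, or a multi-service resource where a
    # Translator single-service resource is required.
    raise RuntimeError("Unauthorized: check your key, region, and resource type.")
if response.status_code == 429:
    # Quota or request rate for the subscription has been exceeded.
    raise RuntimeError("Too many requests: reduce the request rate or review your quota.")
response.raise_for_status()  # Raises on the remaining error codes (400, 502, ...).
print(response.json())
```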
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/managed-identity.md
The following headers are included with each Document Translator API request:
|HTTP header|Description| ||--|
-|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Cognitive Services resource.|
|Content-Type|**Required**: Specifies the content type of the payload. The accepted value is application/json (optionally with charset=UTF-8).|

### POST request body
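As a hedged sketch of a batch request body (the storage account name and target language below are placeholders; with managed identity enabled, the container URLs need no SAS tokens):

```python
import requests

endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
key = "<YOUR-KEY>"

# Translate every document in the source container to Spanish.
body = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://<YOUR-STORAGE>.blob.core.windows.net/source"},
            "targets": [
                {"targetUrl": "https://<YOUR-STORAGE>.blob.core.windows.net/target",
                 "language": "es"}
            ],
        }
    ]
}

response = requests.post(endpoint + "/batches",
                         headers={"Ocp-Apim-Subscription-Key": key},
                         json=body)  # json= sets the Content-Type header shown above.
print(response.status_code)  # 202 indicates the translation job was accepted.
```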
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
In this quickstart, you learn to use the Translator service via REST. You start
* Create a new project: `dotnet new console -o your_project_name` * Replace Program.cs with the C# code shown below.
-* Set the subscription key and endpoint values in Program.cs.
+* Set the key and endpoint values in Program.cs.
* [Add Newtonsoft.Json using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/). * Run the program from the project directory: ``dotnet run``
In this quickstart, you learn to use the Translator service via REST. You start
* Create a new Go project in your favorite code editor. * Add the code provided below.
-* Replace the `subscriptionKey` value with an access key valid for your subscription.
+* Replace the `key` value with an access key valid for your subscription.
* Save the file with a '.go' extension. * Open a command prompt on a computer with Go installed. * Build the file, for example: 'go build example-code.go'.
In this quickstart, you learn to use the Translator service via REST. You start
compile("com.google.code.gson:gson:2.8.5") } ```
-* Create a Java file and copy in the code from the provided sample. Don't forget to add your subscription key.
+* Create a Java file and copy in the code from the provided sample. Don't forget to add your key.
* Run the sample: `gradle run`.
In this quickstart, you learn to use the Translator service via REST. You start
* Create a new project in your favorite IDE or editor. * Copy the code from one of the samples into your project.
-* Set your subscription key.
+* Set your key.
* Run the program. For example: `node Translate.js`.
In this quickstart, you learn to use the Translator service via REST. You start
* Create a new project in your favorite IDE or editor. * Copy the code from one of the samples into your project.
-* Set your subscription key.
+* Set your key.
* Run the program. For example: `python translate.py`.
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Translate {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Translate {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Hello World!\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Translate {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/translate', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Translate {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Translate {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Hello World!\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Translate {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/translate', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Detect {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Detect {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Ich w├╝rde wirklich gern Ihr Auto um den Block fahren ein paar Mal.\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Detect {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/detect', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Translate {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Translate {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Hello\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Translate {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/translate', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```Python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Transliterate {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Transliterate {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"สวัสดี\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Transliterate {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/transliterate', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class Translate {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class Translate {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class Translate {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/translate', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class BreakSentence {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class BreakSentence {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class BreakSentence {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/breaksentence', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class DictionaryLookup {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class DictionaryLookup {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Shark\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class DictionaryLookup {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/dictionary/lookup', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
class Program {
- private static readonly string subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+ private static readonly string key = "YOUR-KEY";
private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/"; // Add your location, also known as region. The default is global.
class Program
request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", location); // Send the request and get response.
import (
) func main() {
- subscriptionKey := "YOUR-SUBSCRIPTION-KEY"
+ key := "YOUR-KEY"
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource. location := "YOUR_RESOURCE_LOCATION";
func main() {
log.Fatal(err) } // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", subscriptionKey)
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
req.Header.Add("Ocp-Apim-Subscription-Region", location) req.Header.Add("Content-Type", "application/json")
import com.google.gson.*;
import com.squareup.okhttp.*; public class DictionaryExamples {
- private static String subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+ private static String key = "YOUR_KEY";
// Add your location, also known as region. The default is global. // This is required if using a Cognitive Services resource.
public class DictionaryExamples {
RequestBody body = RequestBody.create(mediaType, "[{\"Text\": \"Shark\", \"Translation\": \"tibur├│n\"}]"); Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
.addHeader("Ocp-Apim-Subscription-Region", location) .addHeader("Content-type", "application/json") .build();
public class DictionaryExamples {
const axios = require('axios').default; const { v4: uuidv4 } = require('uuid');
-var subscriptionKey = "YOUR_SUBSCRIPTION_KEY";
+var key = "YOUR_KEY";
var endpoint = "https://api.cognitive.microsofttranslator.com"; // Add your location, also known as region. The default is global.
axios({
url: '/dictionary/examples', method: 'post', headers: {
- 'Ocp-Apim-Subscription-Key': subscriptionKey,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': uuidv4().toString()
axios({
```python import requests, uuid, json
-# Add your subscription key and endpoint
-subscription_key = "YOUR_SUBSCRIPTION_KEY"
+# Add your key and endpoint
+key = "YOUR_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com" # Add your location, also known as region. The default is global.
params = {
} headers = {
- 'Ocp-Apim-Subscription-Key': subscription_key,
+ 'Ocp-Apim-Subscription-Key': key,
'Ocp-Apim-Subscription-Region': location, 'Content-type': 'application/json', 'X-ClientTraceId': str(uuid.uuid4())
After a successful call, you should see the following response. For more informa
||-|--| | 200 | OK | The request was successful. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
+| 401 | Unauthorized | The request is not authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. | | 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
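One way to act on these codes in application code is to retry only the 429 case; the sketch below is illustrative (the backoff policy is an arbitrary choice, not a documented requirement):

```python
import time
import requests

def post_with_retry(url, headers, body, attempts=3):
    """POST to the Translator endpoint, retrying with backoff on 429 responses."""
    for attempt in range(attempts):
        response = requests.post(url, headers=headers, json=body)
        if response.status_code != 429:
            response.raise_for_status()  # Raise on 400, 401, 502, and so on.
            return response.json()
        # 429: quota or rate limit hit. Honor Retry-After if the service sent one.
        retry_after = response.headers.get("Retry-After")
        time.sleep(int(retry_after) if retry_after and retry_after.isdigit() else 2 ** attempt)
    raise RuntimeError("Still rate limited after retries.")
```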
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
curl -X POST " my-ch-n.cognitiveservices.azure.com/translator/text/v3.0/translat
## Authentication
-Subscribe to Translator or [Cognitive Services multi-service](https://azure.microsoft.com/pricing/details/cognitive-services/) in Azure Cognitive Services, and use your subscription key (available in the Azure portal) to authenticate.
+Subscribe to Translator or [Cognitive Services multi-service](https://azure.microsoft.com/pricing/details/cognitive-services/) in Azure Cognitive Services, and use your key (available in the Azure portal) to authenticate.
There are three headers that you can use to authenticate your subscription. This table describes how each is used:
When you use a multi-service secret key, you must include two authentication hea
|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your multi-service resource.| |Ocp-Apim-Subscription-Region| The value is the region of the multi-service resource. |
-Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service subscription key, and must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
+Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service key, and must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
If you pass the secret key in the query string with the parameter `Subscription-Key`, then you must specify the region with query parameter `Subscription-Region`.
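Putting the two options side by side, here's a hedged sketch (the key and region values are placeholders) of authenticating with headers versus the `Subscription-Key`/`Subscription-Region` query parameters:

```python
import requests

url = "https://api.cognitive.microsofttranslator.com/translate"
body = [{"Text": "Hello"}]

# Option 1: headers. A multi-service resource must also send its region.
requests.post(url,
              params={"api-version": "3.0", "to": "es"},
              headers={"Ocp-Apim-Subscription-Key": "<YOUR-KEY>",
                       "Ocp-Apim-Subscription-Region": "<YOUR-RESOURCE-LOCATION>"},
              json=body)

# Option 2: query string. Passing Subscription-Key requires Subscription-Region.
requests.post(url,
              params={"api-version": "3.0", "to": "es",
                      "Subscription-Key": "<YOUR-KEY>",
                      "Subscription-Region": "<YOUR-RESOURCE-LOCATION>"},
              json=body)
```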
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
Translate a single sentence from English to Simplified Chinese.
**Request** ```curl
-curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version=3.0?&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <subscription key>" -H "Ocp-Apim-Subscription-Region: chinanorth" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'你好, 你叫什么名字?'}]"
+curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version=3.0?&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <key>" -H "Ocp-Apim-Subscription-Region: chinanorth" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'你好, 你叫什么名字?'}]"
``` **Response body**
cognitive-services Translator How To Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-how-to-signup.md
Title: Create a Translator resource
-description: This article will show you how to create an Azure Cognitive Services Translator resource and get a subscription key and endpoint URL.
+description: This article will show you how to create an Azure Cognitive Services Translator resource and get a key and endpoint URL.
Last updated 02/24/2022
# Create a Translator resource
-In this article, you'll learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **subscription key** for authenticating access.
+In this article, you'll learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **key** for authenticating access.
## Prerequisites
All Cognitive Services API requests require an endpoint URL and a read-only key
1. After your new resource deploys, select **Go to resource** or navigate directly to your resource page. 1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your subscription keys and endpoint URL in a convenient location, such as *Microsoft Notepad*.
+1. Copy and paste your keys and endpoint URL in a convenient location, such as *Microsoft Notepad*.
:::image type="content" source="../media/cognitive-services-apis-create-account/get-cog-serv-keys.png" alt-text="Get key and endpoint.":::
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
To create a new intent, click on *+Add* button and start by giving your intent a
> The list of projects you can connect to are only projects that are owned by the same Language resource you are using to create the orchestration project. In Orchestration Workflow projects, the data used to train connected intents isn't provided within the project. Instead, the project pulls the data from the connected service (such as connected LUIS applications, Conversational Language Understanding projects, or Custom Question Answering knowledge bases) during training. However, if you create intents that are not connected to any service, you still need to add utterances to those intents.
To import a project, select the arrow button on the projects page next to **Crea
## Next Steps
-[Build schema](./train-model.md)
+[Build schema](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md
Previously updated : 03/03/2022 Last updated : 03/21/2022
Select **Train model** on the left of the screen. Select **Start a training job*
Enter a new model name or select an existing model from the **Model Name** dropdown. Click the **Train** button and wait for training to complete. You will see the training status of your model in the view model details page. Only successfully completed jobs will generate models.
Click the **Train** button and wait for training to complete. You will see the t
After model training is completed, you can view your model details and see how well it performs against the test set in the training step. Observing how well your model performed is called evaluation. The test set is composed of 20% of your utterances, and this split is done at random before training. The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 utterances in your training set.
-In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
-
+In the **view model details** page, you'll be able to see all your models, with their scores. Scores are only available if you have enabled evaluation beforehand.
* Click on the model name for more details. A model name is only clickable if you've enabled evaluation beforehand. * In the **Overview** section you can find the macro precision, recall, and F1 score for the collective intents, as sketched below.
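For reference, macro averaging computes precision, recall, and F1 per intent and then takes their unweighted mean; the sketch below shows that arithmetic with illustrative counts (it is not the service's internal implementation):

```python
def macro_scores(per_intent):
    """per_intent: list of (true_positives, false_positives, false_negatives)."""
    ps, rs, f1s = [], [], []
    for tp, fp, fn in per_intent:
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        ps.append(p)
        rs.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    n = len(per_intent)
    return sum(ps) / n, sum(rs) / n, sum(f1s) / n

# Illustrative counts for three intents: (TP, FP, FN).
print(macro_scores([(8, 2, 1), (5, 0, 3), (9, 1, 1)]))
```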
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
const call = callAgent.startCall([teamsCallee]);
[Communication Services voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) are raised for calls between a Communication Services user and Teams users. **Limitations and known issues**
+- This functionality is not currently available in the .NET Calling SDK.
- Teams users must be in "TeamsOnly" mode. Skype for Business users can't receive 1:1 calls from Communication Services users. - Escalation to a group call isn't supported. - Communication Services call recording isn't available for 1:1 calls.
While in private preview, a Communication Services user can do various actions u
## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
-Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
This framework extends the confidentiality, integrity, and verifiability propert
Marblerun supports confidential containers created with Graphene, Occlum, and EGo, with [examples for each SDK](https://docs.edgeless.systems/marblerun/#/examples?id=examples). The framework runs on Kubernetes alongside your existing cloud-native tooling. There's a CLI and helm charts. Marblerun also supports confidential computing nodes on AKS. Follow Marblerun's [guide to deploy Marblerun on AKS](https://docs.edgeless.systems/marblerun/#/deployment/cloud?id=cloud-deployment).
-## Confidential Containers demo
-
-For a sample application, see the [healthcare demo with confidential containers](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md).
-
-> [!VIDEO https://www.youtube.com/embed/PiYCQmOh0EI]
+## Confidential Containers reference architectures
+- [Confidential data messaging for healthcare reference architecture and sample with Intel SGX confidential containers](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md).
+- [Confidential big-data processing with Apache Spark on AKS with Intel SGX confidential containers](https://docs.microsoft.com/azure/architecture/example-scenario/confidential/data-analytics-containers-spark-kubernetes-azure-sql).
## Get in touch
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
This article shows two ways to set up the workflow:
![Screenshot of the Fork button (highlighted) in GitHub](../container-registry/media/container-registry-tutorial-quick-build/quick-build-01-fork.png)
-* Ensure Actions is enabled for your repository. Navigate to your forked repository and select **Settings** > **Actions**. In **Actions permissions**, ensure that **Enable local and third party Actions for this repository** is selected.
+* Ensure Actions is enabled for your repository. Navigate to your forked repository and select **Settings** > **Actions**. In **Actions permissions**, ensure that **Allow all actions** is selected.
## Configure GitHub workflow
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command: ```azurecli
-groupId=$(az group show \
+$groupId=$(az group show \
--name <resource-group-name> \ --query id --output tsv) ```
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command: ```azurecli
-registryId=$(az acr show \
+$registryId=$(az acr show \
--name <registry-name> \ --query id --output tsv) ```
az role assignment create \
### Save credentials to GitHub repo
-1. In the GitHub UI, navigate to your forked repository and select **Settings** > **Secrets**.
+1. In the GitHub UI, navigate to your forked repository and select **Settings** > **Secrets** > **Actions**.
-1. Select **Add a new secret** to add the following secrets:
+1. Select **New repository secret** to add the following secrets:
|Secret |Value | |||
az role assignment create \
### Create workflow file
-1. In the GitHub UI, select **Actions** > **New workflow**.
-1. Select **Set up a workflow yourself**.
+1. In the GitHub UI, select **Actions**.
+1. Select **set up a workflow yourself**.
1. In **Edit new file**, paste the following YAML contents to overwrite the sample code. Accept the default filename `main.yml`, or provide a filename you choose. 1. Select **Start commit**, optionally provide short and extended descriptions of your commit, and select **Commit new file**.
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
az container exec \
Run the following commands in the bash shell in the container. To get an access token from Azure Active Directory to authenticate to Key Vault, run the following command: ```bash
-curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true -s
+client_id="CLIENT ID (xxxxxxxx-5523-45fc-9f49-xxxxxxxxxxxx)"
+curl "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net&client_id=$client_id" -H Metadata:true -s
``` Output:
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | | France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y| | Germany West Central | 4 | 16 | N/A | N/A | 50 | N/A | Y |
-| Japan East | 2 | 8 | 4 | 16 | 50 | N/A | Y |
+| Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y |
| Japan West | 4 | 16 | N/A | N/A | 50 | N/A | N | | Korea Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | North Central US | 2 | 3.5 | 4 | 16 | 50 | K80, P100, V100 | N |
For information on troubleshooting container instance deployment, see [Troublesh
[azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
-[az-region-support]: ../availability-zones/az-overview.md#regions
+[az-region-support]: ../availability-zones/az-overview.md#regions
container-instances Container Instances Tutorial Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-app.md
az acr show --name <acrName> --query loginServer
Now, use the [az container create][az-container-create] command to deploy the container. Replace `<acrLoginServer>` with the value you obtained from the previous command. Replace `<service-principal-ID>` and `<service-principal-password>` with the service principal ID and password that you created to access the registry. Replace `<aciDnsLabel>` with a desired DNS name. ```azurecli
-az container create --resource-group myResourceGroup --name aci-tutorial-app --image <acrLoginServer>/aci-tutorial-app:v1 --cpu 1 --memory 1 --registry-login-server <acrLoginServer> --registry-username <service-principal-ID> --registry-password <service-principal-password> --dns-name-label <aciDnsLabel> --ports 80
+az container create --resource-group myResourceGroup --name aci-tutorial-app --image <acrLoginServer>/aci-tutorial-app:v1 --cpu 1 --memory 1 --registry-login-server <acrLoginServer> --registry-username <service-principal-ID> --registry-password <service-principal-password> --ip-address Public --dns-name-label <aciDnsLabel> --ports 80
``` Within a few seconds, you should receive an initial response from Azure. The `--dns-name-label` value must be unique within the Azure region you create the container instance. Modify the value in the preceding command if you receive a **DNS name label** error message when you execute the command.
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails.
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Free|
+|Pricing:|Email notifications are free; for security alerts, enable the enhanced security plans ([plan pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/)) |
|Required roles and permissions:|**Security Admin**<br>**Subscription Owner** | |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)| |||
defender-for-cloud Quickstart Enable Database Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-database-protections.md
Title: Enable database protection for your subscription
description: Learn how to enable Microsoft Defender for Cloud for all of your database types for your entire subscription. Previously updated : 02/28/2022 Last updated : 03/21/2022 # Quickstart: Microsoft Defender for Cloud database protection
The types of protected databases are:
- Azure SQL Databases
- SQL servers on machines
- Open-source relational databases (OSS RDB)
-- Microsoft Defender for Azure Cosmos DB
+- Azure Cosmos DB
Each database type protects engines and data types with a different attack surface and different security risks. Security detections are made for the specific attack surface of each DB type.
You can enable database protection on your subscription, or exclude specific dat
## Enable database protection on your subscription
-**To enable Defender for Storage for individual storage accounts under a specific subscription**:
+**To enable Defender for Databases on a specific subscription**:
1. Sign in to the [Azure portal](https://ms.portal.azure.com).
-1. Navigate to **Microsoft Defender fo Cloud** > **Environment settings**.
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
1. Select the relevant subscription.

1. If you want to enable specific plans, set the plans toggle to **On**.
-1. (Optional) Select **Select types** to and enable specific resource types.
+1. (Optional) Use **Select types** to enable protections for specific resource types.
    :::image type="content" source="media/quickstart-enable-database-protections/select-type.png" alt-text="Screenshot showing the toggles to enable specific resource types.":::

1. Toggle each desired resource type to **On**.
- :::image type="content" source="media/quickstart-enable-database-protections/resource-type.png" alt-text="Screenshot showing the types of resources available.":::
+ :::image type="content" source="media/quickstart-enable-database-protections/resource-type.png" alt-text="Screenshot showing the types of resources available.":::
1. Select **Continue**.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/16/2022 Last updated : 03/20/2022 # What's new in Microsoft Defender for Cloud?
Updates in March include:
- [Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moved-the-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices)
- [Deprecated the recommendation to use service principals to protect your subscriptions](#deprecated-the-recommendation-to-use-service-principals-to-protect-your-subscriptions)
- [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013)
-
+- [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations)
+- [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts)
### Deprecated the recommendations to install the network traffic data collection agent

Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
The legacy implementation of ISO 27001 has been removed from Defender for Cloud'
:::image type="content" source="media/upcoming-changes/removing-iso-27001-legacy-implementation.png" alt-text="Defender for Cloud's regulatory compliance dashboard showing the message about the removal of the legacy implementation of ISO 27001." lightbox="media/upcoming-changes/removing-iso-27001-legacy-implementation.png":::
+### Deprecated Microsoft Defender for IoT device recommendations
+
+Microsoft Defender for IoT device recommendations are no longer visible in Microsoft Defender for Cloud. These recommendations are still available on Microsoft Defender for IoT's Recommendations page.
+
+The following recommendations are deprecated:
+
+| Assessment key | Recommendations |
+|--|--|
+| 1a36f14a-8bd8-45f5-abe5-eef88d76ab5b: IoT Devices | Open Ports On Device |
+| ba975338-f956-41e7-a9f2-7614832d382d: IoT Devices | Permissive firewall rule in the input chain was found |
+| beb62be3-5e78-49bd-ac5f-099250ef3c7c: IoT Devices | Permissive firewall policy in one of the chains was found |
+| d5a8d84a-9ad0-42e2-80e0-d38e3d46028a: IoT Devices | Permissive firewall rule in the output chain was found |
+| 5f65e47f-7a00-4bf3-acae-90ee441ee876: IoT Devices | Operating system baseline validation failure |
+| a9a59ebb-5d6f-42f5-92a1-036fd0fd1879: IoT Devices | Agent sending underutilized messages |
+| 2acc27c6-5fdb-405e-9080-cb66b850c8f5: IoT Devices | TLS cipher suite upgrade needed |
+| d74d2738-2485-4103-9919-69c7e63776ec: IoT Devices | Auditd process stopped sending events |
+
+### Deprecated Microsoft Defender for IoT device alerts
+
+All Microsoft Defender for IoT device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
+
## February 2022

Updates in February include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |
| [AWS and GCP recommendations to GA](#aws-and-gcp-recommendations-to-ga) | March 2022 |
| [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 |
-| [Deprecating Microsoft Defender for IoT device recommendations](#deprecating-microsoft-defender-for-iot-device-recommendations)| March 2022 |
-| [Deprecating Microsoft Defender for IoT device alerts](#deprecating-microsoft-defender-for-iot-device-alerts) | March 2022 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2022 |

### Changes to recommendations for managing endpoint protection solutions
When the move occurs, the custom recommendations will be found via a new "recomm
Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
-### Deprecating Microsoft Defender for IoT device recommendations
-
-**Estimated date for change:** March 2022
-
-Microsoft Defender for IoT device recommendations will no longer be visible in Microsoft Defender for Cloud. These recommendations will still be available on Microsoft Defender for IoT's Recommendations page, and in Microsoft Sentinel.
-
-The following recommendations will be deprecated:
-
-| Assessment key | Recommendations |
-|--|--|
-| 1a36f14a-8bd8-45f5-abe5-eef88d76ab5b: IoT Devices | Open Ports On Device |
-| ba975338-f956-41e7-a9f2-7614832d382d: IoT Devices | Permissive firewall rule in the input chain was found |
-| beb62be3-5e78-49bd-ac5f-099250ef3c7c: IoT Devices | Permissive firewall policy in one of the chains was found |
-| d5a8d84a-9ad0-42e2-80e0-d38e3d46028a: IoT Devices | Permissive firewall rule in the output chain was found |
-| 5f65e47f-7a00-4bf3-acae-90ee441ee876: IoT Devices | Operating system baseline validation failure |
-|a9a59ebb-5d6f-42f5-92a1-036fd0fd1879: IoT Devices | Agent sending underutilized messages |
-| 2acc27c6-5fdb-405e-9080-cb66b850c8f5: IoT Devices | TLS cipher suite upgrade needed |
-|d74d2738-2485-4103-9919-69c7e63776ec: IoT Devices | Auditd process stopped sending events |
-
-### Deprecating Microsoft Defender for IoT device alerts
-
-**Estimated date for change:** March 2022
-
-All Microsoft Defender for IoT device alerts will no longer be visible in Microsoft Defender for Cloud. These alerts will still be available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
-
### Multiple changes to identity recommendations

**Estimated date for change:** May 2022
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
+
+ Title: Microsoft Defender for IoT sensor connection methods
+description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT.
+ Last updated : 03/08/2022++
+# Sensor connection methods
+
+This article describes the architectures and connection methods supported for connecting your sensors to Microsoft Defender for IoT in the Azure portal.
+
+All supported connection methods provide:
+
+- **Simple deployment**, requiring no additional installations in your private Azure environment, such as for an IoT Hub
+
+- **Improved security**, without needing to configure or lock down any resource security settings in the Azure VNET
+
+- **Scalability** for new features supported only in the cloud
+
+- **Flexible connectivity** using any of the connection methods described in this article
+
+For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method).
++
+> [!IMPORTANT]
+> To ensure that your network is ready, we recommend that you first run the migration in a lab or testing environment so that you can safely validate your Azure service configurations.
+>
+
+## Proxy connections with an Azure proxy
+
+The following image shows how you can connect your sensors to the Defender for IoT portal in Azure through a proxy in the Azure VNET, ensuring confidentiality for all communications between your sensor and Azure.
++
+Depending on your network configuration, you can access the VNET via a VPN connection or an ExpressRoute connection.
+
+This method uses a proxy server hosted within Azure. To handle load balancing and failover, the proxy is configured to scale automatically behind a load balancer.
+
+For more information, see [Connect via an Azure proxy](connect-sensors.md#connect-via-an-azure-proxy).
+
+## Proxy connections with proxy chaining
+
+The following image shows how you can connect your sensors to the Defender for IoT portal in Azure through multiple proxies, using different levels of the Purdue model and the enterprise network hierarchy.
++
+This method supports connecting your sensors without direct internet access, using an SSL-encrypted tunnel to transfer data from the sensor to the service endpoint via proxy servers. The proxy server does not perform any data inspection, analysis, or caching.
+
+With a proxy chaining method, Defender for IoT does not support your proxy service. It's the customer's responsibility to set up and maintain the proxy service.
+
+For more information, see [Connect via proxy chaining](connect-sensors.md#connect-via-proxy-chaining).
+
+## Direct connections
+
+The following image shows how you can connect your sensors to the Defender for IoT portal in Azure directly over the internet from remote sites, without traversing the enterprise network.
++
+With direct connections:
+
+- Any sensors connected to Azure data centers directly over the internet have a secure and encrypted connection to the Azure data centers. Transport Layer Security (TLS) provides *always-on* communication between the sensor and Azure resources.
+
+- The sensor initiates all connections to the Azure portal. Initiating connections only from the sensor protects internal network devices from unsolicited inbound connections, and also means that you don't need to configure any inbound firewall rules.
+
+For more information, see [Connect directly](connect-sensors.md#connect-directly).
+
+## Multi-cloud connections
+
+You can connect your sensors to the Defender for IoT portal in Azure from other public clouds for OT/IoT management process monitoring.
+
+Depending on your environment configuration, you might connect using one of the following methods:
+
+- ExpressRoute with customer-managed routing
+
+- ExpressRoute with a cloud exchange provider
+
+- A site-to-site VPN over the internet.
+
+For more information, see [Connect via multi-cloud vendors](connect-sensors.md#connect-via-multi-cloud-vendors).
+
+## Working with a mixture of sensor software versions
+
+If you are a customer with an existing production deployment, we recommend that you upgrade any legacy sensor versions to version 22.1.x.
+
+While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versions-and-support-dates), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
+
+After migrating, you can remove any relevant IoT Hubs from your subscription as they'll no longer be required for your sensor connections.
+
+For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version) and [Migration for existing customers](connect-sensors.md#migration-for-existing-customers).
+
+## Next steps
+
+For more information, see [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
defender-for-iot Concept Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-key-concepts.md
The platform provides an intuitive data-mining interface for granular searching
The Sensor Cloud Management mode determines where device, alert, and other information that the sensor detects is displayed.
-For **cloud-connected sensors**, information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Microsoft Sentinel.
+For **cloud-connected sensors**, information that the sensor detects is displayed in the sensor console. Alert information is delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel.
For **locally connected sensors**, information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console if the sensor is connected to it.
Defender for IoT enables the effective management of multiple deployments and a
The on-premises management console is a web-based administrative platform that lets you monitor and control the activities of global sensor installations. In addition to managing the data received from deployed sensors, the on-premises management console seamlessly integrates data from various business resources: CMDBs, DNS, firewalls, Web APIs, and more.

We recommend that you familiarize yourself with the concepts, capabilities, and features available to sensors before working with the on-premises management console.

## Integrations
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
+
+ Title: Connect sensors to Microsoft Defender for IoT
+description: Learn how to connect your sensors to Microsoft Defender for IoT on Azure
+ Last updated : 03/13/2022++
+# Connect your sensors to Microsoft Defender for IoT
+
+This article describes how to connect your sensors to the Defender for IoT portal in Azure.
+
+For more information about each connection method, see [Sensor connection methods](architecture-connections.md).
++
+## Choose a sensor connection method
+
+Use this section to help determine which connection method is right for your organization.
+
+|If ... |... Then |
+|||
+|- You require private connectivity between your sensor and Azure, <br>- Your site is connected to Azure via ExpressRoute, or <br>- Your site is connected to Azure over a VPN | **[Connect via an Azure proxy](#connect-via-an-azure-proxy)** |
+|- Your sensor needs a proxy to reach from the OT network to the cloud, or <br>- You want multiple sensors to connect to Azure through a single point | **[Connect via proxy chaining](#connect-via-proxy-chaining)** |
+|- You want to connect your sensor to Azure directly | **[Connect directly](#connect-directly)** |
+|- You have sensors hosted in multiple public clouds | **[Connect via multi-cloud vendors](#connect-via-multi-cloud-vendors)** |
++
+## Connect via an Azure proxy
+
+This section describes how to connect your sensor to Defender for IoT in Azure using an Azure proxy. Use this procedure in the following situations:
+
+- You require private connectivity between your sensor and Azure
+- Your site is connected to Azure via ExpressRoute
+- Your site is connected to Azure over a VPN
+
+For more information, see [Proxy connections with an Azure proxy](architecture-connections.md#proxy-connections-with-an-azure-proxy).
+
+### Prerequisites
+
+Before you start, make sure that you have:
+
+- An Azure Subscription and an account with **Contributor** permissions to the subscription
+
+- A Log Analytics workspace for monitoring logs
+
+- Remote site connectivity to the Azure VNET
+
+- A proxy server resource, with firewall permissions to access Microsoft cloud services. The procedure described in this article uses a Squid server hosted in Azure.
+
+- Outbound HTTPS traffic on port 443 to the following hostnames:
+
+ - **IoT Hub**: `*.azure-devices.net`
+ - **Threat Intelligence**: `*.blob.core.windows.net`
+ - **EventHub**: `*.servicebus.windows.net`
+
+> [!IMPORTANT]
+> Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service.
+>
+
+### Configure sensor proxy settings
+
+If you already have a proxy set up in your Azure VNET, you can start working with a proxy by defining the proxy settings on your sensor console.
+
+1. On your sensor console, go to **System settings > Sensor Network Settings**.
+
+1. Toggle on the **Enable Proxy** option and define your proxy host, port, username, and password.
+
+If you do not yet have a proxy configured in your Azure VNET, use the following procedures to configure your proxy:
+
+1. [Define a storage account for NSG logs](#step-1-define-a-storage-account-for-nsg-logs)
+
+1. [Define virtual networks and subnets](#step-2-define-virtual-networks-and-subnets)
+1. [Define a virtual or local network gateway](#step-3-define-a-virtual-or-local-network-gateway)
+1. [Define network security groups](#step-4-define-network-security-groups)
+1. [Define an Azure virtual machine scale set](#step-5-define-an-azure-virtual-machine-scale-set)
+1. [Create an Azure load balancer](#step-6-create-an-azure-load-balancer)
+1. [Configure a NAT gateway](#step-7-configure-a-nat-gateway)
+
+### Step 1: Define a storage account for NSG logs
+
+In the Azure portal, create a new storage account with the following settings:
+
+|Area |Settings |
+|||
+|**Basics** |**Performance**: Standard <br>**Account kind**: Blob storage <br>**Replication**: LRS |
+|**Network** | **Connectivity method**: Public endpoint (selected network) <br>**In Virtual Networks**: None <br>**Routing Preference**: Microsoft network routing |
+|**Data Protection** | Keep all options cleared |
+|**Advanced** | Keep all default values |
++
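+The equivalent storage account might be created with the Azure CLI roughly as follows (the account name, resource group, and region are placeholders):
+
+```azurecli
+# BlobStorage accounts require an access tier; LRS matches the settings above
+az storage account create --resource-group <rg> --name <storageaccount> --kind BlobStorage --sku Standard_LRS --access-tier Hot --location <region>
+```
+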
+### Step 2: Define virtual networks and subnets
+
+Create the following VNET and contained Subnets:
+
+|Name |Recommended size |
+|||
+|`MD4IoT-VNET` | /26 or /25 with Bastion |
+|**Subnets**: | |
+|- `GatewaySubnet` | /27 |
+|- `ProxyserverSubnet` |/27 |
+|- `AzureBastionSubnet` (optional) | /26 |
+| | |
+
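+As a sketch, the VNET and subnets might be created with the Azure CLI as follows (the 10.1.0.0/26 address space is an assumption; substitute your own ranges):
+
+```azurecli
+# /26 VNET with the GatewaySubnet carved out at creation time
+az network vnet create --resource-group <rg> --name MD4IoT-VNET --address-prefixes 10.1.0.0/26 --subnet-name GatewaySubnet --subnet-prefixes 10.1.0.0/27
+
+# Add the proxy server subnet in the remaining half of the address space
+az network vnet subnet create --resource-group <rg> --vnet-name MD4IoT-VNET --name ProxyserverSubnet --address-prefixes 10.1.0.32/27
+```
+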
+### Step 3: Define a virtual or local network gateway
+
+Create a VPN or ExpressRoute Gateway for virtual gateways, or create a local gateway, depending on how you connect your on-premises network to Azure.
+
+Attach the gateway to the `GatewaySubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets).
+
+For more information, see:
+
+- [About VPN gateways](/azure/vpn-gateway/vpn-gateway-about-vpngateways)
+- [Connect a virtual network to an ExpressRoute circuit using the portal](/azure/expressroute/expressroute-howto-linkvnet-portal-resource-manager)
+- [Modify local network gateway settings using the Azure portal](/azure/vpn-gateway/vpn-gateway-modify-local-network-gateway-portal)
+
+### Step 4: Define network security groups
+
+1. Create an NSG and define the following inbound rules:
+
+    - Create rule `100` to allow traffic from your sensors (the sources) to the load balancer's private IP address (the destination). Use TCP port `3128`.
+
+ - Create rule `4095` as a duplicate of the `65001` system rule. This is because rule `65001` will get overwritten by rule `4096`.
+
+ - Create rule `4096` to deny all traffic for micro-segmentation.
+
+ - Optional. If you're using Bastion, create rule `4094` to allow Bastion SSH to the servers. Use the Bastion subnet as the source.
+
+1. Assign the NSG to the `ProxyserverSubnet` you created [earlier](#step-2-define-virtual-networks-and-subnets).
+
+1. Define your NSG logging:
+
+ 1. Select your new NSG and then select **Diagnostic setting > Add diagnostic setting**.
+
+    1. Enter a name for your diagnostic setting. Under **Category**, select **allLogs**.
+
+ 1. Select **Sent to Log Analytics workspace**, and then select the Log Analytics workspace you want to use.
+
+ 1. Select to send **NSG flow logs** and then define the following values:
+
+ **On the Basics tab**:
+
+ - Enter a meaningful name
+ - Select the storage account you'd created [earlier](#step-1-define-a-storage-account-for-nsg-logs)
+ - Define your required retention days
+
+ **On the Configuration tab**:
+
+ - Select **Version 2**
+ - Select **Enable Traffic Analytics**
+ - Select your Log Analytics workspace
+
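+For reference, the NSG and the first and last rules from step 4 might be created with the Azure CLI roughly as follows (names, the sensor CIDR, and the load balancer IP are placeholders; only two of the four rules are sketched):
+
+```azurecli
+az network nsg create --resource-group <rg> --name md4iot-proxy-nsg
+
+# Rule 100: allow sensor traffic to the load balancer's private IP on TCP 3128
+az network nsg rule create --resource-group <rg> --nsg-name md4iot-proxy-nsg --name AllowSensorToProxy --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes <sensor-cidr> --destination-address-prefixes <lb-private-ip> --destination-port-ranges 3128
+
+# Rule 4096: deny all remaining traffic for micro-segmentation
+az network nsg rule create --resource-group <rg> --nsg-name md4iot-proxy-nsg --name DenyAll --priority 4096 --direction Inbound --access Deny --protocol '*' --source-address-prefixes '*' --destination-address-prefixes '*' --destination-port-ranges '*'
+```
+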
+### Step 5: Define an Azure virtual machine scale set
+
+Define an Azure virtual machine scale set to create and manage a group of load-balanced virtual machines, where you can automatically increase or decrease the number of virtual machines as needed.
+
+Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are Virtual Machine scale sets?](/azure/virtual-machine-scale-sets/overview)
+
+1. Create a scale set with the following parameter definitions:
+
+ - **Orchestration Mode**: Uniform
+ - **Security Type**: standard
+    - **Image**: Ubuntu server 18.04 LTS - Gen1
+ - **Size**: Standard_DS1_V2
+ - **Authentication**: Based on your corporate standard
+
+ Keep the default value for **Disks** settings.
+
+1. Create a network interface in the `ProxyserverSubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), but do not yet define a load balancer.
+
+1. Define your scaling settings as follows:
+
+ - Define the initial instance count as **1**
+ - Define the scaling policy as **Manual**
+
+1. Define the following management settings:
+
+ - For the upgrade mode, select **Automatic - instance will start upgrading**
+ - Disable boot diagnostics
+ - Clear the settings for **Identity** and **Azure AD**
+ - Select **Overprovisioning**
+    - Select **Enable automatic OS upgrades**
+
+1. Define the following health settings:
+
+ - Select **Enable application health monitoring**
+ - Select the **TCP** protocol and port **3128**
+
+1. Under advanced settings, define the **Spreading algorithm** as **Max Spreading**.
+
+1. For the custom data script, do the following:
+
+ 1. Create the following configuration script, depending on the port and services you are using:
+
+ ```txt
+ # Recommended minimum configuration:
+ # Squid listening port
+ http_port 3128
+ # Do not allow caching
+ cache deny all
+ # allowlist sites allowed
+ acl allowed_http_sites dstdomain .azure-devices.net
+ acl allowed_http_sites dstdomain .blob.core.windows.net
+ acl allowed_http_sites dstdomain .servicebus.windows.net
+ http_access allow allowed_http_sites
+ # allowlisting
+ acl SSL_ports port 443
+ acl CONNECT method CONNECT
+ # Deny CONNECT to other unsecure ports
+ http_access deny CONNECT !SSL_ports
+ # default network rules
+ http_access allow localhost
+ http_access deny all
+ ```
+
+ 1. Encode the contents of your script file in [base-64](https://www.base64encode.org/).
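+
+    For example, with GNU coreutils on Linux (a minimal sketch, assuming the configuration from the previous step is saved as `squid.conf`):
+
+    ```bash
+    # -w0 disables line wrapping so the output is a single base64 string
+    base64 -w0 squid.conf
+    ```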
+
+ 1. Copy the contents of the encoded file, and then create the following configuration script:
+
+    ```txt
+    #cloud-config
+    # Update packages on first boot
+    package_upgrade: true
+    # Install the Squid package
+    packages:
+      - squid
+    # Write the base64-encoded Squid configuration to a staging path
+    # (staging avoids overwriting the file before the factory copy is saved)
+    write_files:
+      - encoding: b64
+        content: <replace with base64 encoded text>
+        path: /etc/squid/squid.conf.new
+        permissions: '0644'
+    # Back up the factory configuration, apply the new one, and restart Squid
+    runcmd:
+      - systemctl stop squid
+      - mv /etc/squid/squid.conf /etc/squid/squid.conf.factory
+      - mv /etc/squid/squid.conf.new /etc/squid/squid.conf
+      - systemctl start squid
+      - apt-get -y upgrade; [ -e /var/run/reboot-required ] && reboot
+    ```
+
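+A minimal CLI sketch of the scale set above (flags reflect the CLI of this era; `cloud-init.txt` is the custom data file you created, and passing an empty `--load-balancer` skips the default load balancer so you can attach your own in the next step):
+
+```azurecli
+az vmss create --resource-group <rg> --name md4iot-proxyset --image UbuntuLTS --vm-sku Standard_DS1_v2 --instance-count 1 --vnet-name MD4IoT-VNET --subnet ProxyserverSubnet --custom-data cloud-init.txt --upgrade-policy-mode Automatic --load-balancer ""
+```
+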
+### Step 6: Create an Azure load balancer
+
+Azure Load Balancer is a layer-4 load balancer that distributes incoming traffic among healthy virtual machine instances using a hash-based distribution algorithm.
+
+For more information, see the [Azure Load Balancer documentation](/azure/load-balancer/load-balancer-overview).
+
+To create an Azure load balancer for your sensor connection:
+
+1. Create a load balancer with a standard SKU and an **Internal** type to ensure that the load balancer isn't exposed to the internet.
+
+1. Define a dynamic frontend IP address in the `ProxyserverSubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), setting the availability to zone-redundant.
+
+1. For a backend, choose the VM scale set you created [earlier](#step-5-define-an-azure-virtual-machine-scale-set).
+
+1. On the port defined in the sensor, create a TCP load balancing rule connecting the frontend IP address with the backend pool. The default port is 3128.
+
+1. Create a new health probe, and define a TCP health probe on port 3128.
+
+1. Define your load balancer logging:
+
+ 1. In the Azure portal, go to the load balancer you've just created.
+
+ 1. Select **Diagnostic setting** > **Add diagnostic setting**.
+
+ 1. Enter a meaningful name, and define the category as **allMetrics**.
+
+ 1. Select **Sent to Log Analytics workspace**, and then select your Log Analytics workspace.
+
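+The load balancer, probe, and rule might be sketched with the Azure CLI as follows (resource names are placeholders; creating the frontend in the VNET subnet makes the load balancer internal):
+
+```azurecli
+az network lb create --resource-group <rg> --name md4iot-proxy-lb --sku Standard --vnet-name MD4IoT-VNET --subnet ProxyserverSubnet --frontend-ip-name proxyFrontEnd --backend-pool-name proxyBackEnd
+
+# TCP health probe on the Squid port
+az network lb probe create --resource-group <rg> --lb-name md4iot-proxy-lb --name squidProbe --protocol Tcp --port 3128
+
+# Load-balancing rule connecting the frontend to the backend pool on 3128
+az network lb rule create --resource-group <rg> --lb-name md4iot-proxy-lb --name squidRule --protocol Tcp --frontend-port 3128 --backend-port 3128 --frontend-ip-name proxyFrontEnd --backend-pool-name proxyBackEnd --probe-name squidProbe
+```
+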
+### Step 7: Configure a NAT gateway
+
+To configure a NAT gateway for your sensor connection:
+
+1. Create a new NAT Gateway.
+
+1. In the **Outbound IP** tab, select **Create a new public IP address**.
+
+1. In the **Subnet** tab, select the `ProxyserverSubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets).
+
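+A rough CLI equivalent (names are placeholders):
+
+```azurecli
+# Standard static public IP for the NAT gateway's outbound traffic
+az network public-ip create --resource-group <rg> --name md4iot-natgw-ip --sku Standard --allocation-method Static
+
+az network nat gateway create --resource-group <rg> --name md4iot-natgw --public-ip-addresses md4iot-natgw-ip
+
+# Attach the NAT gateway to the proxy server subnet
+az network vnet subnet update --resource-group <rg> --vnet-name MD4IoT-VNET --name ProxyserverSubnet --nat-gateway md4iot-natgw
+```
+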
+## Connect via proxy chaining
+
+This section describes how to connect your sensor to Defender for IoT in Azure using proxy chaining. Use this procedure in the following situations:
+
+- Your sensor needs a proxy to reach from the OT network to the cloud
+- You want multiple sensors to connect to Azure through a single point
+
+For more information, see [Proxy connections with proxy chaining](architecture-connections.md#proxy-connections-with-proxy-chaining).
+
+### Prerequisites
+
+Before you start, make sure that you have a host server running a proxy process within the site network. The proxy process must be accessible to both the sensor and the next proxy in the chain.
+
+We have validated this procedure using the open-source [Squid](http://www.squid-cache.org/) proxy. This proxy uses HTTP tunneling and the HTTP CONNECT command for connectivity. Any other proxy chaining connection that supports the CONNECT command can be used for this connection method.
+
+> [!IMPORTANT]
+> Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service.
+>
+
+### Configuration
+
+This procedure describes how to install and configure a connection between your sensors and Defender for IoT using the latest version of Squid on an Ubuntu server.
+
+1. Define your proxy settings on each sensor:
+
+ 1. On your sensor console, go to **System settings > Sensor Network Settings**.
+
+ 1. Toggle on the **Enable Proxy** option and define your proxy host, port, username, and password.
+
+1. Install the Squid proxy:
+
+ 1. Sign into your proxy Ubuntu machine and launch a terminal window.
+
+ 1. Update your system and install Squid. For example:
+
+ ```bash
+ sudo apt-get update
+    sudo apt-get install squid
+ ```
+
+    1. Locate the Squid configuration file, for example at `/etc/squid/squid.conf` or in `/etc/squid/conf.d/`, and open it in a text editor.
+
+ 1. In the Squid configuration file, search for the following text: `# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS`.
+
+ 1. Add `acl <sensor-name> src <sensor-ip>`, and `http_access allow <sensor-name>` into the file. For example:
+
+ ```text
+ # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
+ acl sensor1 src 10.100.100.1
+ http_access allow sensor1
+ ```
+
+      Add more sensors as needed by adding an extra `acl` and `http_access` line for each sensor.
+
+ 1. Configure the Squid service to start at launch. Run:
+
+ ```bash
+ sudo systemctl enable squid
+ ```
+
+1. Connect your proxy to Defender for IoT. Enable outbound HTTPS traffic on port 443 from the sensor to the following Azure hostnames:
+
+ - **IoT Hub**: `*.azure-devices.net`
+ - **Threat Intelligence**: `*.blob.core.windows.net`
+ - **Eventhub**: `*.servicebus.windows.net`
+
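+To sanity-check the chain, you might verify from the sensor's network segment that the proxy tunnels CONNECT requests to an allowed endpoint (the hostname below is a placeholder; expect an HTTP status line rather than a proxy denial if the ACLs are correct):
+
+```bash
+curl --proxy http://<proxy-ip>:3128 -sSI https://<your-endpoint>.azure-devices.net
+```
+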
+> [!IMPORTANT]
+> Some organizations must define firewall rules by IP addresses. If this is true for your organization, it's important to know that the Azure public IP ranges are updated weekly.
+>
+> Make sure to download the updated [Azure IP Ranges and Service Tags JSON file](https://www.microsoft.com/download/details.aspx?id=56519) each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**.
+>
+
+## Connect directly
+
+This section describes what you need to configure a direct sensor connection to Defender for IoT in Azure. For more information, see [Direct connections](architecture-connections.md#direct-connections).
+
+1. Ensure that your sensor can access the cloud using HTTPS on port 443 to the following Microsoft domains:
+
+ - **IoT Hub**: `*.azure-devices.net`
+ - **Threat Intelligence**: `*.blob.core.windows.net`
+ - **Eventhub**: `*.servicebus.windows.net`
+
+1. Azure public IP addresses are updated weekly. If you must define firewall rules based on IP addresses, make sure to download the updated [Azure IP Ranges and Service Tags JSON file](https://www.microsoft.com/download/details.aspx?id=56519) each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**.
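+
+As an alternative to the manual download, the current ranges for a given service tag can also be listed with the Azure CLI; a rough sketch (the region and output shaping are assumptions to adjust):
+
+```azurecli
+# Print the current address prefixes for the AzureIoTHub service tag
+az network list-service-tags --location westeurope --query "values[?name=='AzureIoTHub'].properties.addressPrefixes | [0]" -o json
+```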
+
+## Connect via multi-cloud vendors
+
+This section describes how to connect your sensor to Defender for IoT in Azure from sensors deployed in one or more public clouds. For more information, see [Multi-cloud connections](architecture-connections.md#multi-cloud-connections).
+
+### Prerequisites
+
+Before you start:
+
+- Make sure that you have a sensor deployed in a public cloud, such as AWS or Google Cloud, and configured to monitor SPAN traffic.
+
+- Choose the multi-cloud connectivity method that's right for your organization:
+
+ Use the following flow chart to determine which connectivity method to use:
+
+ :::image type="content" source="media/architecture-connections/multi-cloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
+
+ - **Use public IP addresses over the internet** if you do not need to exchange data using private IP addresses
+
+ - **Use site-to-site VPN over the internet** only if you do *not* require any of the following:
+
+ - Predictable throughput
+ - SLA
+ - High data volume transfers
+      - Avoiding connections over the public internet
+
+ - **Use ExpressRoute** if you require predictable throughput, SLA, high data volume transfers, or to avoid connections over the public internet.
+
+ In this case:
+
+ - If you want to own and manage the routers making the connection, use ExpressRoute with customer-managed routing.
+ - If you do not need to own and manage the routers making the connection, use ExpressRoute with a cloud exchange provider.
+
+### Configuration
+
+1. Configure your sensor to connect to the cloud using one of the Azure Cloud Adoption Framework recommended methods. For more information, see [Connectivity to other cloud providers](/azure/cloud-adoption-framework/ready/azure-best-practices/connectivity-to-other-providers).
+
+1. To enable private connectivity between your VPCs and Defender for IoT, connect your VPC to an Azure VNET over a VPN connection. For example, if you are connecting from an AWS VPC, see our TechCommunity blog: [How to create a VPN between Azure and AWS using only managed solutions](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-create-a-vpn-between-azure-and-aws-using-only-managed/ba-p/2281900).
+
+1. After your VPC and VNET are configured, connect to Defender for IoT as you would when connecting via an Azure proxy. For more information, see [Connect via an Azure proxy](#connect-via-an-azure-proxy).
+
+## Migration for existing customers
+
+If you're an existing customer with a production deployment and sensors connected using the legacy IoT Hub method, start with the following steps to ensure a full and safe migration to an updated connection method.
+
+1. **Review your existing production deployment** and how sensors are currently connected to Azure. Confirm that the sensors in production networks can reach the Azure data center resource ranges.
+
+1. **Determine which connection method is right** for each production site. For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method).
+
+1. **Configure any additional resources required** as described in the procedure in this article for your chosen connectivity method. For example, additional resources might include a proxy, VPN, or ExpressRoute.
+
+ For any connectivity resources outside of Defender for IoT, such as a VPN or proxy, consult with Microsoft solution architects to ensure correct configurations, security, and high availability.
+
+1. **If you have legacy sensor versions installed**, we recommend that you update your sensors to version 22.1.x or higher. In this case, make sure that you reactivate each sensor and update your firewall rules.
+
+ Sign in to each sensor after the update to verify that the activation file was applied successfully. Also check the Defender for IoT **Sites and sensors** page in the Azure portal to make sure that the updated sensors show as **Connected**.
+
+ For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version) and [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+
+1. **Start migrating with a test lab or reference project** where you can validate your connection and fix any issues found.
+
+1. **Create a plan of action for your migration**, including planning any maintenance windows needed.
+
+1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete aren't used by any other services. Before you delete them, verify the following:
+
+ - If you've upgraded your versions, make sure that all updated sensors indicate software version 22.1.x or higher.
+
+ - Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
+
+ - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to the following hostnames:
+
+ - **IoT Hub**: `*.azure-devices.net`
+ - **Threat Intelligence**: `*.blob.core.windows.net`
+ - **EventHub**: `*.servicebus.windows.net`
++
+While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versions-and-support-dates), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
+
+## Next steps
+
+For more information, see [Sensor connection methods](architecture-connections.md).
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Here's what you need to get started with Defender for IoT.
- Network switches that support traffic monitoring via SPAN port.
- Hardware appliances for NTA sensors.
- The Azure Subscription Contributor role. It's required only during onboarding for defining committed devices and connection to Microsoft Sentinel.
-- Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
+
+If you are using a Defender for IoT sensor version lower than 22.1.x, you must also have an Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Microsoft Defender for IoT** feature is enabled.
### Supported service regions
-Defender for IoT routes all traffic from all European regions to the West Europe regional datacenter. It routes traffic from all remaining regions to the Central US regional datacenter. Review [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
+Defender for IoT routes all traffic from all European regions to the West Europe regional datacenter. It routes traffic from all remaining regions to the Central US regional datacenter.
+
+If you are connecting your sensors using an IoT Hub (legacy), see also the [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
### Permissions: Sensors and on-premises management consoles
Research your:
For more information, see [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md).
-**Clarify which sensors and management console appliances are required to handle the network load**
+**Clarify which sensor appliances are required to handle the network load**
Microsoft Defender for IoT supports both physical and virtual deployments. For the physical deployments, you can purchase various certified appliances. For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
-We recommend that you calculate the approximate number of devices that will be monitored. Later, when you register your Azure subscription to the portal, you'll be asked to enter this number. Numbers can be added in intervals of 1,000,for example 1000, 2000, 3000. The numbers of monitored devices are called *committed devices*.
+We recommend that you calculate the approximate number of devices that will be monitored. Later, when you register your Azure subscription to the portal, you'll be asked to enter this number. Numbers can be added in intervals of 1,000, for example 1000, 2000, 3000. The numbers of monitored devices are called *committed devices*.
+If you are using a Defender for IoT sensor version lower than 22.1.x, you must also clarify your appliances for the on-premises management console.
## Register with Microsoft Defender for IoT

Registration includes:
For information on how to offboard a subscription, see [Offboard a subscription]
## Install and set up the on-premises management console
+This section is required only when you are using a Defender for IoT sensor version lower than 22.1.x.
+
After you acquire your on-premises management console appliance:

- Download the ISO package from the Azure portal.
After you acquire your on-premises management console appliance:
1. Activate and set up the management console. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
-## Onboard a sensor ##
+## Onboard a sensor
Onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file:
Onboard a sensor by registering it with Microsoft Defender for IoT and downloadi
1. Choose a sensor connection mode:
- - **Cloud connected sensors**: Information that sensors detect is displayed in the sensor console. In addition, alert information is delivered through an IoT hub and can be shared with other Azure services, such as Microsoft Sentinel. You can also choose to automatically push threat intelligence packages from Defender for IoT to your sensors. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+ - **Cloud connected sensors**: Information that sensors detect is displayed in the sensor console. In addition, alert information is delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel. You can also choose to automatically push threat intelligence packages from Defender for IoT to your sensors. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
- **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
-1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
+1. Select a site to associate your sensor to. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
1. Select **Register**.
Download the ISO package from the Azure portal, install the software, and set up
1. Activate and set up your sensor. For more information, see [Sign in and activate a sensor](how-to-activate-and-set-up-your-sensor.md).
+## Connect sensors to Defender for IoT
+
+This section is required only when you are using a Defender for IoT sensor version 22.1.x or higher.
+
+Connect your sensors to Defender for IoT to ensure that sensors send alert and device inventory information to Defender for IoT on the Azure portal.
+
+For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
+
## Connect sensors to an on-premises management console

Connect sensors to the management console to ensure that:
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Your sensor was onboarded to Microsoft Defender for IoT in a specific management
| Mode type | Description |
|--|--|
-| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered through the IoT hub and can be shared with other Azure services, such as Microsoft Sentinel. You can also enable automatic threat intelligence updates. |
+| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel. You can also enable automatic threat intelligence updates. |
| **Locally connected mode** | Information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console, if the sensor is connected to it. |

A locally connected, or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor.
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
Title: Connect sensors with a proxy
-description: Learn how to configure Microsoft Defender for IoT to communicate with a sensor through a proxy with no direct internet access.
+ Title: Connect sensors with a proxy (legacy)
+description: Learn how to configure Microsoft Defender for IoT to communicate with a sensor through a proxy with no direct internet access (legacy procedure).
Last updated 02/06/2022
-# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy
+# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (legacy)
-This article describes how to configure Microsoft Defender for IoT to communicate with a sensor through a proxy with no direct internet access. Connect the sensor with a forwarding proxy that has HTTP tunneling, and uses the HTTP CONNECT command for connectivity. The instructions here are given uses the open-source Squid proxy, any other proxy that supports CONNECT can be used.
+This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access, and is only relevant if you are using a legacy connection method via your own IoT Hub.
-The proxy uses an encrypted SSL tunnel to transfer data from the sensors to the service. The proxy doesn't inspect, analyze, or cache any data.
+Starting with sensor software versions 22.1.x, updated connection methods are supported that don't require customers to have their own IoT Hub. For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
+
+We recommend that you use the procedures in this article only if you are using a legacy sensor version lower than 22.1.x.
+
+## Overview
+
+Connect the sensor with a forwarding proxy that has HTTP tunneling and uses the HTTP CONNECT command for connectivity. The instructions here use the open-source Squid proxy; any other proxy that supports CONNECT can be used.
+
+The proxy uses an encrypted SSL tunnel to transfer data from the sensors to the service. The proxy doesn't inspect, analyze, or cache any data.
The following diagram shows data going from the Microsoft Defender for IoT sensor in the OT segment to the cloud, via a proxy located in the IT network and the industrial DMZ.
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Title: Gain insight into devices discovered by a specific sensor description: The device inventory displays an extensive range of device attributes that a sensor detects. Previously updated : 02/02/2022 Last updated : 03/09/2022
This section describes device details available from the inventory and describes
**To view the device inventory:** -- In the console left pane, select **Device inventory**. The following attributes appear in the inventory.
+In the console left pane, select **Device inventory**.
-| Parameter | Description |
+The following columns are available for each device.
+
+| Name | Description |
|--|--|
-| Name | The name of the device as the sensor discovered it, or as entered by the user. |
-| Type | The type of device as determined by the sensor, or as entered by the user. |
-| Vendor | The name of the device's vendor, as defined in the MAC address. |
-| Operating System | The OS of the device, if detected. |
-| Firmware version | The device's firmware, if detected. |
-| IP Address | The IP address of the device. |
-| VLAN | The VLAN of the device. For details about instructing the sensor to discover VLANs, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names).(how-to-define-management-console-network-settings.md#define-vlan-names). |
-| MAC Address | The MAC address of the device. |
-| Protocols | The protocols that the device uses. |
-| Unacknowledged Alerts | The number of unacknowledged alerts associated with this device. |
-| Is Authorized | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
-| Is Known as Scanner | Defined as a network scanning device by the user. |
-| Is Programming device | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. |
-| Groups | The groups that this device participates in. |
-| Last Activity | The last activity that the device performed. |
-| Discovered | When this device was first seen in the network. |
-| PLC mode (preview) | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include, Run, Program, Remote, Stop, Invalid, Programming Disabled.Possible Run. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, Offline. If both states are the same, only one state is presented. |
+| **Description** | A description of the device |
+| **Discovered** | When this device was first seen in the network. |
+| **Firmware version** | The device's firmware, if detected. |
+| **FQDN** | The device's FQDN value |
+| **FQDN lookup time** | The device's FQDN lookup time |
+| **Groups** | The groups that this device participates in. |
+| **IP Address** | The IP address of the device. |
+| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
+| **Is Known as Scanner** | Defined as a network scanning device by the user. |
+| **Is Programming device** | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. |
+| **Last Activity** | The last activity that the device performed. |
+| **MAC Address** | The MAC address of the device. |
+| **Name** | The name of the device as the sensor discovered it, or as entered by the user. |
+| **Operating System** | The OS of the device, if detected. |
+| **PLC mode** (preview) | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, or Offline. If both states are the same, only one state is presented. |
+| **Protocols** | The protocols that the device uses. |
+| **Type** | The type of device as determined by the sensor, or as entered by the user. |
+| **Unacknowledged Alerts** | The number of unacknowledged alerts associated with this device. |
+| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
+| **VLAN** | The VLAN of the device. For more information, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names). |
**To hide and display columns:**
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
Title: Manage your IoT devices with the device inventory for organizations description: Learn how to manage your IoT devices with the device inventory for organizations. Previously updated : 11/11/2021 Last updated : 03/09/2022 # Manage your IoT devices with the device inventory for organizations
+> [!NOTE]
+> The **Device inventory** page in Defender for IoT on the Azure portal is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
The device inventory can be used to view device systems and network information. The search, filter, edit columns, and export tools can be used to manage this information.

:::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-inventory-screenshot.png" alt-text="A total overview of Defender for IoT's device inventory screen." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-inventory-screenshot.png":::

Some of the benefits of the device inventory include:

-- Identify all IOT, and OT devices from different inputs. For example, allowing you to understand which devices in your environment are not communicating, and will require troubleshooting.
+- Identify all IT, IoT, and OT devices from different inputs. For example, to identify new devices detected in the last day or which devices aren't communicating and might require troubleshooting.
- Group and filter devices by site, type, or vendor.
Some of the benefits of the device inventory include:
- Export the entire device inventory to a CSV file for your reports.
-## Device inventory overview
-
-The Device inventory gives you an overview of all devices within your environment. Here you can see the individual details of each device and filter, and order your search by various options.
-
-The following table describes the different device properties in the device inventory.
-
-| Parameter | Description | Default value |
-|--|--|--|
-| **Application** | The application the exists on the device. | - |
-| **Class** | The class of the device. | IoT |
-| **Data source** | The source of the data, such as Micro Agent, OtSensor, and Mde. | MicroAgent |
-| **Description** | The description of the device. | - |
-| **Firmware vendor** | The vendor of the device's firmware. | - |
-| **Firmware version** | The version of the firmware. | - |
-| **First seen** | The date, and time the device was first seen. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. | - |
-| **Importance** | The level of importance of the device. | - |
-| **IPv4 Address** | The IPv4 address of the device. | - |
-| **IPv6 Address** | The IPv6 address of the device. | - |
-| **Last activity** | The date, and time the device last sent an event to the cloud. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. | - |
-| **Last update time** | The date, and time the device last sent a system information event to the cloud. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. | - |
-| **Location** | The physical location of the device. | - |
-| **MAC Address** | The MAC address of the device. | - |
-| **Model** | The device's model. | - |
-| **Name** | The name of the device as the sensor discovered it, or as entered by the user. | - |
-| **OS architecture** | The architecture of the operating system. | - |
-| **OS distribution** | The distribution of the operating system, such as Android, Linux, and Haiku. | - |
-| **OS platform** | The OS of the device, if detected. | - |
-| **OS version** | The version of the operating system, such as Windows 10 and Ubuntu 20.04.1. | - |
-| **PLC mode** | The PLC operating mode which includes the Key state (physical, or logical), and the Run state (logical). Possible Key states include, `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. Possible Run states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. If both states are the same, then only one state is presented. | - |
-| **PLC secured** | Determines if the PLC mode is in a secure state. A possible secure state is `Run`. A possible unsecured state cab be either `Program`, or `Remote`. | - |
-| **Programming time** | The last time the device was programmed. | - |
-| **Protocols** | The protocols that the device uses. | - |
-| **Purdue level** | The Purdue level in which the device exists. | - |
-| **Scanner** | Whether the device performs scanning-like activities in the network. | - |
-| **Sensor** | The sensor the device is connected to. | - |
-| **Site** | The site that contains this device. | - |
-| **Slots** | The number of slots the device has. | - |
-| **Subtype** | The subtype of the device, such as speaker and smart tv. | Managed Device |
-| **Type** | The type of device, such as communication, and industrial. | Miscellaneous |
-| **Vendor** | The name of the device's vendor, as defined in the MAC address. | - |
-| **VLAN** | The VLAN of the device. | - |
-| **Zone** | The zone that contains this device. | - |
-
-**To view the device inventory**:
+## View the device inventory
1. Open the [Azure portal](https://portal.azure.com).
The following table describes the different device properties in the device inve
:::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-inventory.png" alt-text="Select device inventory from the left side menu under Defender for IoT."::: + ## Customize the device inventory table In the device inventory table, you can add or remove columns. You can also change the column order by dragging and dropping a field.
If you want to reset the device inventory to the default settings, in the Edit c
You can search and filter the device inventory to define what information the table displays.
-For a list of filters that can be applied to the device inventory table, see the [Device inventory overview](#device-inventory-overview).
- **To filter the device inventory**: 1. Select **Add filter**
For a list of filters that can be applied to the device inventory table, see the
1. Select the **Apply** button.
-Multiple filters can be applied at one time. The filters are not saved when you leave the Device inventory page.
+Multiple filters can be applied at one time. The filters aren't saved when you leave the Device inventory page.
## View device information
To view a specific device's information, select the device and the device informa
:::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png" alt-text="Select a device to see all of that device's information." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png":::
+## Edit device details
+
+As you manage your devices, you may need to update their details. For example, you may want to modify a device's security value as assets change, personalize the inventory so that you can better identify specific devices, or correct the details of a device that was classified incorrectly.
+
+You can edit device details for each device, one at a time, or select multiple devices to edit details together.
+
+**To edit details for a single device**:
+
+1. In the **Device inventory** page, select the device you want to edit, and then select **Edit** :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/edit-device-details.png" border="false"::: in the toolbar at the top of the page.
+
+ The **Edit** pane opens at the right.
+
+1. Modify any of the field values as needed. For more information, see [Reference of editable fields](#reference-of-editable-fields).
+
+1. Select **Save** when you're finished editing the device details.
+
+**To edit details for multiple devices simultaneously**:
+
+1. In the **Device inventory** page, select the devices you want to edit, and then select **Edit** :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/edit-device-details.png" border="false"::: in the toolbar at the top of the page.
+
+ The **Edit** pane opens at the right.
+
+1. Select **Add field type**, and then select one or more fields to edit.
+
+1. Update your field definitions as needed, and then select **Save**. For more information, see [Reference of editable fields](#reference-of-editable-fields).
+
+Your updates are saved for all selected devices.
+
+### Reference of editable fields
+
+The following device fields are supported for editing in the Device inventory page:
+
+**General information**:
+
+|Name |Description |
+|||
+|**Name** | Mandatory. Supported for editing only when editing a single device. |
+|**Authorized Device** |Toggle on or off as device security changes. |
+|**Description** | Enter a meaningful description for the device. |
+|**Location** | Enter a meaningful location for the device. |
+|**Category** | Use the **Class**, **Type**, and **Subtype** options to categorize the device. |
+|**Business Function** | Enter a meaningful description of the device's business function. |
+|**Hardware Model** | Select the device's hardware model from the dropdown menu. |
+|**Hardware Vendor** | Select the device's hardware vendor from the dropdown menu. |
+|**Firmware** | Define the device's firmware name and version. You can either select the **delete** button to delete an existing firmware definition, or select **+ Add** to add a new one. |
+|**Tags** | Enter meaningful tags for the device. Select the **delete** button to delete an existing tag, or select **+ Add** to add a new one. |
++
+**Settings**:
+
+|Name |Description |
+|||
+|**Importance** | Select **Low**, **Normal**, or **High** to modify the device's importance. |
+|**Programming device** | Toggle the **Programming Device** option on or off as needed for your device. |
+ ## Export the device inventory to CSV You can export your device inventory to a CSV file. Any filters that you apply to the device inventory table are applied to the exported file as well. Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/export-button.png" border="false"::: button to export your current device inventory to a CSV file.
-## How to identify devices that have not recently communicated with the Azure cloud
+## How to identify devices that haven't recently communicated with the Azure cloud
-If you are under the impression that certain devices are not actively communicating, there is a way to check, and see which devices have not communicated in a specified time period.
+If you suspect that certain devices aren't actively communicating, you can check which devices haven't communicated in a specified time period.
**To identify all devices that have not communicated recently**:
If you are under the impression that certain devices are not actively communicat
1. Navigate to **Defender for IoT** > **Device inventory**.
-1. Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: button.
-
-1. Add a column by selecting the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/add-column-icon.png" border="false"::: button.
-
-1. Select **Last Activity**.
-
-1. Select **Save**
+1. Select **Edit columns** > **Add column** > **Last Activity** > **Save**.
1. On the main Device inventory page, select **Last activity** to sort the page by last activity. :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/last-activity.png" alt-text="Screenshot of the device inventory organized by last activity." lightbox="media/how-to-manage-device-inventory-on-the-cloud/last-activity.png":::
-1. Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/add-filter-icon.png" border="false"::: to add a filter on the last activity column.
+1. Select **Add filter** to add a filter on the last activity column.
:::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/last-activity-filter.png" alt-text="Screenshot of the add filter screen where you can select the time period to see the last activity."::: 1. Enter a time period, or a custom date range, and select **Apply**.
-## See next
--- [Welcome to Microsoft Defender for IoT for device builders](overview.md)
+## Delete a device
+
+If you have devices that are no longer in use, delete them from the device inventory so that they're no longer connected to Defender for IoT.
+
+Devices must be inactive for 14 days or more before you can delete them.
+
+**To delete a device**:
+
+In the **Device inventory** page, select the device you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page.
+
+If your device has had activity in the past 14 days, it isn't considered inactive, and the **Delete** button will be grayed out.
+
+At the prompt, select **Yes** to confirm that you want to delete the device from Defender for IoT.
+
+## Device inventory column reference
+
+The following table describes the device properties shown in the device inventory table.
+
+| Parameter | Description |
+|--|--|
+| **Application** | The application that exists on the device. |
+| **Class** | The class of the device. <br>Default: `IoT`|
+| **Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`|
+| **Description** | The description of the device. |
+| **Firmware vendor** | The vendor of the device's firmware. |
+| **Firmware version** | The version of the firmware. |
+| **First seen** | The date and time the device was first seen, presented in MM/DD/YYYY HH:MM:SS AM/PM format. |
+| **Importance** | The level of importance of the device. |
+| **IPv4 Address** | The IPv4 address of the device. |
+| **IPv6 Address** | The IPv6 address of the device. |
+| **Last activity** | The date and time the device last sent an event to the cloud, presented in MM/DD/YYYY HH:MM:SS AM/PM format. |
+| **Last update time** | The date and time the device last sent a system information event to the cloud, presented in MM/DD/YYYY HH:MM:SS AM/PM format. |
+| **Location** | The physical location of the device. |
+| **MAC Address** | The MAC address of the device. |
+| **Model** | The device's model. |
+| **Name** | The name of the device as the sensor discovered it, or as entered by the user. |
+| **OS architecture** | The architecture of the operating system. |
+| **OS distribution** | The distribution of the operating system, such as Android, Linux, and Haiku. |
+| **OS platform** | The OS of the device, if detected. |
+| **OS version** | The version of the operating system, such as Windows 10 and Ubuntu 20.04.1. |
+| **PLC mode** | The PLC operating mode, which includes the Key state (physical or logical) and the Run state (logical). Possible Key states include `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. Possible Run states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. If both states are the same, only one state is presented. |
+| **PLC secured** | Determines whether the PLC mode is in a secure state. A possible secure state is `Run`. A possible unsecured state can be either `Program` or `Remote`. |
+| **Programming time** | The last time the device was programmed. |
+| **Protocols** | The protocols that the device uses. |
+| **Purdue level** | The Purdue level in which the device exists. |
+| **Scanner** | Whether the device performs scanning-like activities in the network. |
+| **Sensor** | The sensor the device is connected to. |
+| **Site** | The site that contains this device. |
+| **Slots** | The number of slots the device has. |
+| **Subtype** | The subtype of the device, such as speaker or smart TV. <br>**Default**: `Managed Device` |
+| **Tags** | Tagging data for each device. |
+| **Type** | The type of device, such as communication or industrial. <br>**Default**: `Miscellaneous` |
+| **Underlying devices** | Any relevant underlying devices for the device. |
+| **Underlying device region** | The region for an underlying device. |
+| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
+| **VLAN** | The VLAN of the device. |
+| **Zone** | The zone that contains this device. |
++
+## Next steps
+
+For more information, see [Welcome to Microsoft Defender for IoT for device builders](overview.md).
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You can continue to work with Defender for IoT features even if the activation f
### About activation files for cloud-connected sensors
-Sensors that are cloud connected are associated with the Defender for IoT hub. These sensors are not limited by time periods for the activation file. The activation file for cloud-connected sensors is used to ensure connection to the Defender for IoT hub.
+Sensors that are cloud connected are not limited by time periods for their activation file. The activation file for cloud-connected sensors is used to ensure the connection to Defender for IoT.
### Upload new activation files
You might need to upload a new activation file for an onboarded sensor when:
- You want to work in a different sensor management mode. -- You want to assign a new Defender for IoT hub to a cloud-connected sensor.
+- For sensors connected via an IoT Hub ([legacy](architecture-connections.md)), you want to assign a new Defender for IoT hub to a cloud-connected sensor.
**To add a new activation file:**
You'll receive an error message if the activation file could not be uploaded. Th
- **For locally connected sensors**: The activation file is not valid. If the file is not valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file. -- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific Defender for IoT hub should be opened in your firewall and/or proxy. For details, see [Reference - IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md).
+- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific endpoint (either a sensor, or for legacy connections, an IoT hub) should be opened in your firewall and/or proxy, as in the connectivity sketch after this list. For more information, see [Reference - IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md).
- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
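As a quick way to test the firewall or proxy path, you can attempt a TLS handshake to the required endpoint from a machine on the sensor's network segment. The following is a minimal sketch, assuming `openssl` is available; the endpoint name is a placeholder for your own FQDN:

```bash
# Attempt a TLS handshake to the required endpoint on port 443.
# If the firewall or proxy blocks the connection, no certificate is returned.
# <your-endpoint> is a placeholder for your specific FQDN.
openssl s_client -connect <your-endpoint>.azure-devices.net:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -enddate
```

If the command prints the server certificate's subject and expiration date, port 443 to that endpoint is open from that network.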
If you create a new IP address, you might be required to sign in again.
2. In the **System Settings** window, select **Network**.
- :::image type="content" source="media/how-to-manage-individual-sensors/edit-network-configuration-screen.png" alt-text="Configure your network settings.":::
- 3. Set the parameters: | Parameter | Description |
If you create a new IP address, you might be required to sign in again.
You can configure the sensor's time and region so that all the users see the same time and region.

| Parameter | Description |
|--|--|
| Timezone | The time zone definition for:<br />- Alerts<br />- Trends and statistics widgets<br />- Data mining reports<br />- Risk assessment reports<br />- Attack vectors |
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Verify that your organizational security policy allows access to the following:
| Protocol | Transport | In/Out | Port | Purpose | Source | Destination | |--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure portal | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net` |
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net` |
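As a sanity check, you can verify outbound reachability on TCP port 443 from the sensor's network to hosts under each of these wildcard domains. The following is a minimal sketch; each hostname is an illustrative placeholder for a concrete instance in your environment:

```bash
# Test outbound TCP 443 to a sample host under each required wildcard domain.
# The <...> hostnames are placeholders; substitute real FQDNs from your setup.
for host in <your-hub>.azure-devices.net <your-account>.blob.core.windows.net <your-namespace>.servicebus.windows.net; do
  if timeout 5 bash -c "echo > /dev/tcp/${host}/443" 2>/dev/null; then
    echo "${host}: reachable on 443"
  else
    echo "${host}: blocked or unreachable"
  fi
done
```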
#### Sensor access to the on-premises management console
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
When you're using the tool:
- Verify that the certificate files are readable on the appliance. - Confirm with IT the appliance domain (as it appears in the certificate) with your DNS server and the corresponding IP address.
-
+
+## Sign out of a support shell
+
+Starting in version 22.1.3, you're automatically signed out of an SSH session after an inactive period of 300 seconds.
+
+To sign out of your session manually, enter the following command:
+
+```azurecli-interactive
+logout
+```
+ ## Next steps For more information, see [Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
+
+ Title: What's new archive for Microsoft Defender for IoT for organizations
+description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago.
+ Last updated : 03/03/2022++
+# What's new archive for Microsoft Defender for IoT for organizations
++
+This article serves as an archive for features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago.
+
+For more recent updates, see [What's new in Microsoft Defender for IoT?](release-notes.md).
+
+Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## April 2021
+
+### Work with automatic threat intelligence updates (Public Preview)
+
+New threat intelligence packages can now be automatically pushed to cloud connected sensors as they're released by Microsoft Defender for IoT. This is in addition to downloading threat intelligence packages and then uploading them to sensors.
+
+Working with automatic updates helps reduce operational efforts and ensure greater security.
+Enable automatic updating by onboarding your cloud connected sensor on the Defender for IoT portal with the **Automatic Threat Intelligence Updates** toggle turned on.
+
+If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Microsoft Defender for IoT portal to cloud connected sensors only when you feel it's required.
+This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. Manually push updates to sensors from the Defender for IoT **Sites and Sensors** page.
+
+You can also review the following information about threat intelligence packages:
+
+- Package version installed
+- Threat intelligence update mode
+- Threat intelligence update status
+
+### View cloud connected sensor information (Public Preview)
+
+View important operational information about cloud connected sensors on the **Sites and Sensors** page.
+
+- The sensor version installed
+- The sensor connection status to the cloud.
+- The last time the sensor was detected connecting to the cloud.
+
+### Alert API enhancements
+
+New fields are available for users working with alert APIs.
+
+**On-premises management console**
+
+- Source and destination address
+- Remediation steps
+- The name of the sensor, as defined by the user
+- The name of the zone associated with the sensor
+- The name of the site associated with the sensor
+
+**Sensor**
+
+- Source and destination address
+- Remediation steps
+
+API version 2 is required when working with the new fields.
+
+### Features delivered as Generally Available (GA)
+
+The following features were previously available for Public Preview, and are now Generally Available (GA) features:
+
+- Sensor - enhanced custom alert rules
+- On-premises management console - export alerts
+- Add second network interface to On-premises management console
+- Device builder - new micro agent
+
+## March 2021
+
+### Sensor - enhanced custom alert rules (Public Preview)
+
+You can now create custom alert rules based on the day, group of days, and time period during which network activity was detected. Working with day and time rule conditions is useful, for example, in cases where alert severity is derived from the time the alert event takes place. For example, create a custom rule that triggers a high severity alert when network activity is detected on a weekend or in the evening.
+
+This feature is available on the sensor with the release of version 10.2.
+
+### On-premises management console - export alerts (Public Preview)
+
+Alert information can now be exported to a .csv file from the on-premises management console. You can export information of all alerts detected or export information based on the filtered view.
+
+This feature is available on the on-premises management console with the release of version 10.2.
+
+### Add second network interface to On-premises management console (Public Preview)
+
+You can now enhance the security of your deployment by adding a second network interface to your on-premises management console. This feature allows your on-premises management console to keep its connected sensors on one secure network, while allowing your users to access the on-premises management console through a second, separate network interface.
+
+This feature is available on the on-premises management console with the release of version 10.2.
+
+## January 2021
+
+- [Security](#security)
+- [Onboarding](#onboarding)
+- [Usability](#usability)
+- [Other updates](#other-updates)
+
+### Security
+
+Certificate and password recovery enhancements were made for this release.
+
+#### Certificates
+
+This version lets you:
+
+- Upload SSL certificates directly to the sensors and on-premises management consoles.
+- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session won't continue.
+
+For upgrades:
+
+- There's no change in SSL certificate or validation functionality during the upgrade.
+- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window.
+
+For Fresh Installations:
+
+- During first-time sign-in, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)
+- Certificate validation is turned on by default for fresh installations.
+
+#### Password recovery
+
+Sensor and on-premises management console administrative users can now recover passwords from the Microsoft Defender for IoT portal. Previously, password recovery required intervention by the support team.
+
+### Onboarding
+
+#### On-premises management console - committed devices
+
+Following initial sign-in to the on-premises management console, users are now required to upload an activation file. The file contains the aggregate number of devices to be monitored on the organizational network. This number is referred to as the number of committed devices.
+Committed devices are defined during the onboarding process on the Microsoft Defender for IoT portal, where the activation file is generated.
+First-time users and users upgrading are required to upload the activation file.
+After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there's a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
+
+#### Pricing page options
+
+The Pricing page lets you onboard new subscriptions to Microsoft Defender for IoT and define committed devices in your network.
+Additionally, the Pricing page now lets you manage existing subscriptions associated with a sensor and update device commitment.
+
+#### View and manage onboarded sensors
+
+A new Sites and Sensors portal page lets you:
+
+- Add descriptive information about the sensor. For example, a zone associated with the sensor, or free-text tags.
+- View and filter sensor information. For example, view details about sensors that are cloud connected or locally managed or view information about sensors in a specific zone.
+
+### Usability
+
+#### Azure Sentinel new connector page
+
+The Microsoft Defender for IoT data connector page in Azure Sentinel has been redesigned. The data connector is now based on subscriptions rather than IoT Hubs, allowing customers to better manage their connection configuration to Azure Sentinel.
+
+#### Azure portal permission updates
+
+Security Reader and Security Administrator support has been added.
+
+### Other updates
+
+#### Access group - zone permissions
+
+The on-premises management console Access Group rules won't include the option to grant access to a specific zone. There's no change in defining rules that use sites, regions, and business units. Following upgrade, Access Groups that contained rules allowing access to specific zones will be modified to allow access to the zone's parent site, including all its zones.
+
+#### Terminology changes
+
+The term asset has been renamed to device in the sensor and on-premises management console, reports, and other solution interfaces.
+In sensor and on-premises management console Alerts, the term Manage this Event has been renamed to Remediation Steps.
+
+## Next steps
+
+[Getting started with Defender for IoT](getting-started.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 03/13/2022 Last updated : 03/15/2022 # What's new in Microsoft Defender for IoT? [!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article lists new features and feature enhancements for Defender for IoT in February 2022.
+This article lists Defender for IoT's new features and enhancements for organizations from the last 6 months.
+
+Features released earlier than 6 months ago are listed in [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The Defender for IoT sensor and on-premises management console update packages i
| Version | Date released | End support date |
|--|--|--|
+| 22.1.2 | 03/2022 | 11/2022 |
| 22.1.1 | 02/2022 | 10/2022 |
| 10.5.5 | 12/2021 | 09/2022 |
| 10.5.4 | 12/2021 | 09/2022 |
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
-| 10.0 | 01/2021 | 10/2021 |
-| 10.3 | 04/2021 | 01/2022 |
++
+## March 2022
+
+- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)
+- [Key state alert updates](#key-state-alert-updates-public-preview)
+- [Sign out of a CLI session](#sign-out-of-a-cli-session)
+
+### Edit and delete devices from the Azure portal (Public preview)
+
+The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more.
++
+For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
+
+You can also delete devices from Defender for IoT, if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
+
+### Key state alert updates (Public preview)
+
+Defender for IoT now supports the Rockwell protocol for PLC operating mode detections.
+
+For the Rockwell protocol, the **Device inventory** pages in both the Azure portal and the sensor console now indicate the PLC operating mode key and run state, and whether the device is currently in a secure mode.
+
+If the device's PLC operating mode is ever switched to an unsecured mode, such as *Program* or *Remote*, a **PLC Operating Mode Changed** alert is generated.
+
+For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
+
+### Sign out of a CLI session
+
+Starting in this version, CLI users are automatically signed out of their session after 300 seconds of inactivity. To sign out manually, use the new `logout` CLI command.
+
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
++ ## February 2022
As part of the containerized sensor, the following CLI commands have been modifi
|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
| `cyberx-xsense-system-remount-disks` | `sudo dpkg-reconfigure iot-sensor` |

The `sudo cyberx-xsense-limit-interface -I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.

For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
For more information, see [Update a standalone sensor version](how-to-manage-ind
### New connectivity model and firewall requirements
-With this version, users are only required to install sensors and connect to Defender for IoT on the Azure portal. Defender for IoT no longer requires you to install, pay for, or manage an IoT Hub.
+Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
+
+In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
-This new connectivity model requires that you open a new firewall rule. For more information, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+- **Architecture**: [Sensor connection methods](architecture-connections.md)
+- **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
### Protocol improvements
Webhook extended can be used to send extra data to the endpoint. The extended fe
Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md#about-certificates)
-## April 2021
-
-### Work with automatic threat Intelligence updates (Public Preview)
-
-New threat intelligence packages can now be automatically pushed to cloud connected sensors as they're released by Microsoft Defender for IoT. This is in addition to downloading threat intelligence packages and then uploading them to sensors.
-
-Working with automatic updates helps reduce operational efforts and ensure greater security.
-Enable automatic updating by onboarding your cloud connected sensor on the Defender for IoT portal with the **Automatic Threat Intelligence Updates** toggle turned on.
-
-If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Microsoft Defender for IoT portal to cloud connected sensors only when you feel it's required.
-This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. Manually push updates to sensors from the Defender for IoT **Sites and Sensors** page.
-
-You can also review the following information about threat intelligence packages:
--- Package version installed-- Threat intelligence update mode-- Threat intelligence update status-
-### View cloud connected sensor information (Public Preview)
-
-View important operational information about cloud connected sensors on the **Sites and Sensors** page.
--- The sensor version installed-- The sensor connection status to the cloud.-- The last time the sensor was detected connecting to the cloud.-
-### Alert API enhancements
-
-New fields are available for users working with alert APIs.
-
-**On-premises management console**
--- Source and destination address-- Remediation steps-- The name of sensor defined by the user-- The name of zone associated with the sensor-- The name of site associated with the sensor-
-**Sensor**
--- Source and destination address-- Remediation steps-
-API version 2 is required when working with the new fields.
-
-### Features delivered as Generally Available (GA)
-
-The following features were previously available for Public Preview, and are now Generally Available (GA) features:
--- Sensor - enhanced custom alert rules-- On-premises management console - export alerts-- Add second network interface to On-premises management console-- Device builder - new micro agent-
-## March 2021
-
-### Sensor - enhanced custom alert rules (Public Preview)
-
-You can now create custom alert rules based on the day, group of days and time-period network activity was detected. Working with day and time rule conditions is useful, for example in cases where alert severity is derived by the time the alert event takes place. For example, create a custom rule that triggers a high severity alert when network activity is detected on a weekend or in the evening.
-
-This feature is available on the sensor with the release of version 10.2.
-
-### On-premises management console - export alerts (Public Preview)
-
-Alert information can now be exported to a .csv file from the on-premises management console. You can export information of all alerts detected or export information based on the filtered view.
-
-This feature is available on the on-premises management console with the release of version 10.2.
-
-### Add second network interface to On-premises management console (Public Preview)
-
-You can now enhance the security of your deployment by adding a second network interface to your on-premises management console. This feature allows your on-premises management to have its connected sensors on one secure network, while allowing your users to access the on-premises management console through a second separate network interface.
-
-This feature is available on the on-premises management console with the release of version 10.2.
-
-## January 2021
--- [Security](#security)-- [Onboarding](#onboarding)-- [Usability](#usability)-- [Other updates](#other-updates)-
-### Security
-
-Certificate and password recovery enhancements were made for this release.
-
-#### Certificates
-
-This version lets you:
--- Upload SSL certificates directly to the sensors and on-premises management consoles.-- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session won't continue.-
-For upgrades:
--- There's no change in SSL certificate or validation functionality during the upgrade.-- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window. -
-For Fresh Installations:
--- During first-time sign-in, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)-- Certificate validation is turned on by default for fresh installations.-
-#### Password recovery
-
-Sensor and on-premises management console Administrative users can now recover passwords from the Microsoft Defender for IoT portal. Previously password recovery required intervention by the support team.
-
-### Onboarding
-
-#### On-premises management console - committed devices
-
-Following initial sign-in to the on-premises management console, users are now required to upload an activation file. The file contains the aggregate number of devices to be monitored on the organizational network. This number is referred to as the number of committed devices.
-Committed devices are defined during the onboarding process on the Microsoft Defender for IoT portal, where the activation file is generated.
-First-time users and users upgrading are required to upload the activation file.
-After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there's a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
-
-#### Pricing page options
-
-Pricing page lets you onboard new subscriptions to Microsoft Defender for IoT and define committed devices in your network.
-Additionally, the Pricing page now lets you manage existing subscriptions associated with a sensor and update device commitment.
-
-#### View and manage onboarded sensors
-
-A new Site and Sensors portal page lets you:
--- Add descriptive information about the sensor. For example, a zone associated with the sensor, or free-text tags.-- View and filter sensor information. For example, view details about sensors that are cloud connected or locally managed or view information about sensors in a specific zone. -
-### Usability
-
-#### Azure Sentinel new connector page
-
-The Microsoft Defender for IoT data connector page in Azure Sentinel has been redesigned. The data connector is now based on subscriptions rather than IoT Hubs; allowing customers to better manage their configuration connection to Azure Sentinel.
-
-#### Azure portal permission updates
-
-Security Reader and Security Administrator support has been added.
-
-### Other updates
-
-#### Access group - zone permissions
-
-The on-premises management console Access Group rules won't include the option to grant access to a specific zone. There's no change in defining rules that use sites, regions, and business units. Following upgrade, Access Groups that contained rules allowing access to specific zones will be modified to allow access to its parent site, including all its zones.
-
-#### Terminology changes
-
-The term asset has been renamed device in the sensor and on-premises management console, reports, and other solution interfaces.
-In sensor and on-premises management console Alerts, the term Manage this Event has been named Remediation Steps.
- ## Next steps [Getting started with Defender for IoT](getting-started.md)
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-frequently-asked-questions.md
Microsoft Defender for IoT delivers comprehensive security across all your IoT/O
## Do I have to be an Azure customer?
-No, for the agentless version of Microsoft Defender for IoT, you do not need to be an Azure customer. However, if you want to send alerts to Microsoft Sentinel; provision network sensors and monitor their health from the cloud; and benefit from automatic software and threat intelligence updates, you will need to connect the sensor to Azure via Azure IoT Hub.
+No, for the agentless version of Microsoft Defender for IoT, you do not need to be an Azure customer. However, if you want to send alerts to Microsoft Sentinel; provision network sensors and monitor their health from the cloud; and benefit from automatic software and threat intelligence updates, you will need to connect the sensor to Azure and Defender for IoT. For more information, see [Sensor connection methods](architecture-connections.md).
For the agent-based version of Microsoft Defender for IoT, you must be an Azure customer.
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
The environment will now have to be prepared.
* **Storage**: *.blob.core.windows.net
- * **IoT Hub**: *.azure-devices.net
- * **Download Center**: download.microsoft.com
+ * **IoT Hub**: *.azure-devices.net
+ You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) to your firewall, which will allow access to the Azure resources that are specified above, along with their regions. > [!Note]
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Before you can start using your Defender for IoT sensor, you will need to onboar
1. Choose a sensor connection mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed.
- - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Microsoft Sentinel. In addition, threat intelligence packages can be pushed from Defender for IoT to sensors. Conversely when, the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+ - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered to Defender for Cloud on Azure and can be shared with other Azure services, such as Microsoft Sentinel. In addition, threat intelligence packages can be pushed from Defender for IoT to sensors. Conversely, when the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
- For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+ For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+
+ For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
- **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
-1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
+1. Select a site to associate your sensor to. Define the display name and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors) page.
1. Select **Register**.
digital-twins How To Integrate Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-logic-apps.md
Navigate to the [Logic Apps Custom Connector](https://portal.azure.com/#blade/Hu
:::image type="content" source="media/how-to-integrate-logic-apps/logic-apps-custom-connector.png" alt-text="Screenshot of the 'Logic Apps Custom Connector' page in the Azure portal. The 'Add' button is highlighted.":::
-In the **Create logic apps custom connector** page that follows, select your subscription and resource group, and a name and deployment location for your new connector. Select **Review + create**.
+In the **Create logic apps custom connector** page that follows, select your subscription and resource group, and a name and deployment region for your new connector. Select **Review + create**.
+
+>[!IMPORTANT]
+> The custom connector and the logic app that you'll create later will need to be in the same deployment region.
:::image type="content" source="media/how-to-integrate-logic-apps/create-logic-apps-custom-connector.png" alt-text="Screenshot of the 'Create logic apps custom connector' page in the Azure portal.":::
-Doing so will take you to the **Review + create** tab, where you can select **Create** at the bottom to create your resource.
+Doing so will take you to the **Review + create** tab, where you can select **Create** at the bottom to create your custom connector.
:::image type="content" source="media/how-to-integrate-logic-apps/review-logic-apps-custom-connector.png" alt-text="Screenshot of the 'Review + create' tab of the 'Review Logic Apps Custom Connector' page in the Azure portal.":::
In the **Edit Logic Apps Custom Connector** page that follows, configure this in
* **Custom connectors** - **API Endpoint**: **REST** (leave default) - **Import mode**: **OpenAPI file** (leave default)
- - **File**: This configuration will be the custom Swagger file you downloaded earlier. Select **Import**, locate the file on your machine (*Azure_Digital_Twins_custom_Swaggers__Logic_Apps_connector_\LogicApps\...\digitaltwins.json*), and select **Open**.
+ - **File**: This configuration will be the custom Swagger file you downloaded earlier. Select **Import**, locate the file on your machine (*digital-twins-custom-swaggers-main\LogicApps\...\digitaltwins.json*), and select **Open**.
* **General information**
- - **Icon**: Upload an icon that you like.
- - **Icon background color**: Enter hexadecimal code in the format '#xxxxxx' for your color.
- - **Description**: Fill whatever values you want.
- - **Connect via on-premises data gateway**: **Toggled off** (leave default)
+ - **Icon**: If you want, upload an icon.
+ - **Icon background color**: If you want, enter a background color.
+ - **Description**: If you want, customize a description for your connector.
+ - **Connect via on-premises data gateway**: Toggled off (leave default)
- **Scheme**: **HTTPS** (leave default) - **Host**: The host name of your Azure Digital Twins instance. - **Base URL**: */* (leave default)
In the Security step, select **Edit** and configure this information:
* **Authentication type**: **OAuth 2.0** * **OAuth 2.0**: - **Identity provider**: **Azure Active Directory**
- - **Client ID**: The Application (client) ID for the Azure AD app registration you created in [Prerequisites](#prerequisites)
- - **Client secret**: The Client secret from the app registration
+ - **Client ID**: The application (client) ID for the Azure AD app registration you created in [Prerequisites](#prerequisites)
+ - **Client secret**: The client secret value from the app registration
- **Login URL**: `https://login.windows.net` (leave default)
- - **Tenant ID**: The Directory (tenant) ID for your Azure AD app registration
+ - **Tenant ID**: The directory (tenant) ID from the app registration
- **Resource URL**: *0b07f429-9f4b-4714-9392-cc5e8e80c8b0*
+ - **Enable on-behalf-of login**: *false* (leave default)
- **Scope**: *Directory.AccessAsUser.All*
- - **Redirect URL**: (leave default for now)
-The Redirect URL field says **Save the custom connector to generate the redirect URL**. Generate it now by selecting **Update connector** across the top of the pane to confirm your connector settings.
+The **Redirect URL** field says **Save the custom connector to generate the redirect URL**. Generate it now by selecting **Update connector** across the top of the pane to confirm your connector settings.
:::image type="content" source="media/how-to-integrate-logic-apps/update-connector.png" alt-text="Screenshot of the top of the 'Edit Logic Apps Custom Connector' page. Highlight around 'Update connector' button.":::
-Return to the Redirect URL field and copy the value that has been generated. You'll use it in the next step.
+Return to the **Redirect URL** field and copy the value that has been generated. You'll use it in the next step.
:::image type="content" source="media/how-to-integrate-logic-apps/copy-redirect-url.png" alt-text="Screenshot of the Redirect URL field in the 'Edit Logic Apps Custom Connector' page. The button to copy the value is highlighted.":::
-Now you've entered all the information that is required to create your connector (no need to continue past Security to the Definition step). You can close the **Edit Logic Apps Custom Connector** pane.
+Now you've entered all the information that is required to create your connector (no need to continue past **Security** to the **Definition** step). You can close the **Edit Logic Apps Custom Connector** pane.
>[!NOTE]
->Back on your connector's Overview page where you originally selected **Edit**, note that selecting Edit again will restart the entire process of entering your configuration choices. It will not populate your values from the last time you went through it, so if you want to save an updated configuration with any changed values, you must re-enter all the other values as well to avoid their being overwritten by the defaults.
+>Back on your connector's Overview page where you originally selected **Edit**, if you select Edit again, it will restart the entire process of entering your configuration choices. It will not populate your values from the last time you went through it, so if you want to save an updated configuration with any changed values, you must re-enter all the other values as well to keep them from being overwritten by the defaults.
### Grant connector permissions in the Azure AD app
Under **Authentication** from the registration's menu, add a URI.
:::image type="content" source="media/how-to-integrate-logic-apps/add-uri.png" alt-text="Screenshot of the Authentication page for the app registration in the Azure portal, highlighting the 'Add a URI' button and the 'Authentication' menu.":::
-Enter the custom connector's redirect URL into the new field, and select the **Save** icon.
-
+Enter the custom connector's redirect URL into the new field, and select **Save** at the bottom of the page.
-You're now done setting up a custom connector that can access the Azure Digital Twins APIs.
+Now you're done setting up a custom connector that can access the Azure Digital Twins APIs.
## Create logic app
In the [Azure portal](https://portal.azure.com), search for *Logic apps* in the
:::image type="content" source="media/how-to-integrate-logic-apps/create-logic-app.png" alt-text="Screenshot of the 'Logic Apps' page in the Azure portal, highlighting the 'Create logic app' button.":::
-In the **Logic App** page that follows, enter your subscription and resource group. Under **Instance Details** select a **Consumption** instance type, choose a name for your logic app, and select the deployment location. Choose whether you want to enable or disable log analytics.
+In the **Create Logic App** page that follows, enter your subscription, resource group, and a name and region for your logic app. Choose whether you want to enable or disable log analytics. Under **Plan**, select a **Consumption** plan type.
+
+>[!IMPORTANT]
+> The logic app should be in the same deployment region as the custom connector you created earlier.
Select the **Review + create** button.
-Doing so will take you to the **Review + create** tab, where you can review your details and select **Create** at the bottom to create your resource.
+Doing so will take you to the **Review + create** tab, where you can review your details and select **Create** at the bottom to create your logic app.
You'll be taken to the deployment page for the logic app. When it's finished deploying, select the **Go to resource** button to continue to the **Logic Apps Designer**, where you'll fill in the logic of the workflow.
In the Logic Apps Designer, under **Start with a common trigger**, select **Recu
:::image type="content" source="media/how-to-integrate-logic-apps/logic-apps-designer-recurrence.png" alt-text="Screenshot of the 'Logic Apps Designer' page in the Azure portal, highlighting the 'Recurrence' common trigger.":::
-In the Logic Apps Designer page that follows, change the **Recurrence** Frequency to **Second**, so that the event is triggered every 3 seconds. Selecting this frequency will make it easy to see the results later without having to wait long.
+In the Logic Apps Designer page that follows, change the Recurrence **Frequency** to **Second**, so that the event is triggered every 3 seconds. Selecting this frequency will make it easy to see the results later without having to wait long.
Select **+ New step**.
-Doing so will open a Choose an action box. Switch to the **Custom** tab. You should see your custom connector from earlier in the top box.
+Doing so will open a box to choose an operation. Switch to the **Custom** tab. You should see your custom connector from earlier in the top box.
+Select it to display the list of APIs contained in that connector. Use the search bar or scroll through the list to select **DigitalTwins_Update**. (The **DigitalTwins_Update** action is the API call used in this article, but you could also select any other API as a valid choice for a Logic Apps connection).
-Select it to display the list of APIs contained in that connector. Use the search bar or scroll through the list to select **DigitalTwins_Add**. (The **DigitalTwins_Add** action is the API call used in this article, but you could also select any other API as a valid choice for a Logic Apps connection).
You may be asked to sign in with your Azure credentials to connect to the connector. If you get a **Permissions requested** dialogue, follow the prompts to grant consent for your app and accept.
-In the new **DigitalTwinsAdd** box, fill the fields as follows:
-* **id**: Fill the *Twin ID* of the digital twin in your instance that you want the Logic App to update.
-* **twin**: This field is where you'll enter the body that the chosen API request requires. For **DigitalTwinsUpdate**, this body is in the form of JSON Patch code. For more about structuring a JSON Patch to update your twin, see the [Update a digital twin](how-to-manage-twin.md#update-a-digital-twin) section of *How-to: Manage digital twins*.
-* **api-version**: The latest API version. Currently, this value is *2020-10-31*.
+In the new **DigitalTwins Update** box, fill the fields as follows:
+* **id**: Enter the *twin ID* of the digital twin in your instance that you want the Logic App to update.
+* **Item - 1**: This field is for the body of the **DigitalTwins Update** API request. Enter JSON Patch code to update one of the fields on your twin, as shown in the sketch after this list. For more information about creating JSON Patch to update your twin, see [Update a digital twin](how-to-manage-twin.md#update-a-digital-twin).
+* **api-version**: Select the latest API version.
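
For reference, here's a minimal sketch of a JSON Patch body for this field. The `/Temperature` property and its value are placeholders; use a property that your twin's model actually defines.

```json
[
  {
    "op": "replace",
    "path": "/Temperature",
    "value": 50
  }
]
```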
-Select **Save** in the Logic Apps Designer.
+>[!TIP]
+>You can add more operations to the logic app by selecting **+ New step** on this page.
-You can choose other operations by selecting **+ New step** on the same window.
+Select **Save** in the Logic Apps Designer.
## Query twin to see the update

Now that your logic app has been created, the twin update event you defined in the Logic Apps Designer should occur on a recurrence of every three seconds. This configured frequency means that after three seconds, you should be able to query your twin and see your new patched values reflected.
-You can query your twin via your method of choice (such as a [custom client app](tutorial-command-line-app.md), the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), the [SDKs and APIs](concepts-apis-sdks.md), or the [CLI](concepts-cli.md)).
-
-For more about querying your Azure Digital Twins instance, see [Query the twin graph](how-to-query-graph.md).
+There are many ways to query for your twin information, including the Azure Digital Twins [APIs and SDKs](concepts-apis-sdks.md), [CLI commands](concepts-cli.md), or [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md). For more information about querying your Azure Digital Twins instance, see [Query the twin graph](how-to-query-graph.md).
## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
# What are Azure management groups?
-If your organization has many subscriptions, you may need a way to efficiently manage access,
-policies, and compliance for those subscriptions. Azure management groups provide a level of scope
-above subscriptions. You organize subscriptions into containers called "management groups" and apply
-your governance conditions to the management groups. All subscriptions within a management group
-automatically inherit the conditions applied to the management group. Management groups give you
-enterprise-grade management at a large scale no matter what type of subscriptions you might have.
-All subscriptions within a single management group must trust the same Azure Active Directory
+If your organization has many Azure subscriptions, you may need a way to efficiently manage access,
+policies, and compliance for those subscriptions. _Management groups_ provide a governance scope
+above subscriptions. You organize subscriptions into management groups the governance conditions you apply
+cascade by inheritence to all associated subscriptions.
+
+Management groups give you
+enterprise-grade management at scale no matter what type of subscriptions you might have.
+However, all subscriptions within a single management group must trust the same Azure Active Directory (Azure AD)
tenant. For example, you can apply a policy to a management group that limits the regions available for
-virtual machine (VM) creation. This policy would be applied to all management groups,
-subscriptions, and resources under that management group by only allowing VMs to be created in that
-region.
+virtual machine (VM) creation. This policy would be applied to all nested management groups,
+subscriptions, and resources, and allow VM creation only in authorized regions.
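
As an illustrative sketch only, the core rule of such a policy might look like the following, with placeholder regions. Assigned at the management group scope, the rule denies creation of VMs outside the listed regions, and every subscription in the group inherits it.

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "not": {
          "field": "location",
          "in": [ "eastus", "westeurope" ]
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```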
## Hierarchy of management groups and subscriptions
iot-develop Concepts Azure Rtos Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md
Title: Azure RTOS Security Guidance for Embedded Devices
+ Title: Azure RTOS security guidance for embedded devices
description: Learn best practices for developing secure applications on embedded devices with Azure RTOS.
Last updated 11/11/2021
-# Guidelines to develop secure embedded applications with Azure RTOS
+# Develop secure embedded applications with Azure RTOS
-## Introduction
+This article offers guidance on implementing security for IoT devices that run Azure RTOS and connect to Azure IoT services. Azure RTOS is a real-time operating system (RTOS) for embedded devices. It includes a networking stack and middleware and helps you securely connect your application to the cloud.
-This article offers guidance on implementing security for IoT devices that run Azure RTOS, and connect to Azure IoT services. Azure RTOS is a real-time operating system for embedded devices. It includes a networking stack and middleware, and helps you securely connect your application to the cloud.
-
-The security of an IoT application depends on your choice of hardware, and how your application implements and uses security features. We recommend that you use this article as a starting point to understand the main issues for further investigation.
+The security of an IoT application depends on your choice of hardware and how your application implements and uses security features. Use this article as a starting point to understand the main issues for further investigation.
## Microsoft security principles
-Microsoft recommends an approach based on the principle of *Zero Trust* when designing IoT devices. We highly recommend reading the [Zero Trust: Cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf) whitepaper as a prerequisite to this article. This brief paper outlines several categories that should be considered when implementing security across an IoT ecosystem with an emphasis on device security. The following sections overview the key components for cryptographic security.
+When you design IoT devices, we recommend an approach based on the principle of *Zero Trust*. As a prerequisite to this article, read [Zero Trust: Cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf). This brief paper outlines categories to consider when you implement security across an IoT ecosystem. Device security is emphasized.
-- **Strong identity**
+The following sections discuss the key components for cryptographic security.
- - *Hardware Root of Trust (RoT)* A strong, hardware-based identity. This identity should be immutable and backed by hardware isolation and protection mechanisms.
+- **Strong identity:** Devices need a strong identity that includes the following technology solutions:
- - *Password-less authentication* is often achieved using X.509 certificates and asymmetric cryptography, where private keys are secured and isolated in hardware. Password-less authentication should be used both for the device identity often used in onboarding and/or attestation scenarios, and the device's operational identity with other cloud services.
+ - **Hardware root of trust**: This strong hardware-based identity should be immutable and backed by hardware isolation and protection mechanisms.
+ - **Passwordless authentication**: This type of authentication is often achieved by using X.509 certificates and asymmetric cryptography, where private keys are secured and isolated in hardware. Use passwordless authentication for the device identity in onboarding or attestation scenarios and the device's operational identity with other cloud services.
+ - **Renewable credentials**: Secure the device's operational identity by using renewable, short-lived credentials. X.509 certificates backed by a secure public key infrastructure (PKI) with a renewal period appropriate for the device's security posture provide an excellent solution.
- - *Renewable credentials* The device's operational identity should be secured using renewable, relatively short-lived credentials. X.509 certificates that are backed by a secure PKI with a renewal period appropriate for the device's security posture provide an excellent solution.
+- **Least-privileged access:** Devices should enforce least-privileged access control on local resources across workloads. For example, a firmware component that reports battery level shouldn't be able to access a camera component.
+- **Continual updates**: A device should support over-the-air (OTA) updates, such as [Device Update for IoT Hub](../iot-hub-device-update/device-update-azure-real-time-operating-system.md), so that firmware that contains patches or bug fixes can be pushed to the device.
+- **Security monitoring and responses**: A device should be able to proactively report its security posture so that the solution builder can monitor potential threats across a large number of devices. You can use [Microsoft Defender for IoT](../defender-for-iot/device-builders/concept-rtos-security-module.md) for that purpose.
-- **Least-privileged access** Devices should enforce least-privileged access control on local resources across workloads. For example, a firmware component that reports battery level shouldn't be able to access a camera component.
+## Embedded security components: Cryptography
-- **Continual updates** A device should enable the Over-the-Air (OTA) feature, such as the [Device Update for IoT Hub](../iot-hub-device-update/device-update-azure-real-time-operating-system.md) to push the firmware that contains the patches or bug fixes.
+Cryptography is a foundation of security in networked devices. Networking protocols such as Transport Layer Security (TLS) rely on cryptography to protect and authenticate information that travels over a network or the public internet.
-- **Security monitoring and responses** A device should be able to proactively report the security postures for the solution builder to monitor the potential threats for a large number of devices. The [Microsoft Defender for IoT](../defender-for-iot/device-builders/concept-rtos-security-module.md) can be used for that purpose.
+A secure IoT device that connects to a server or cloud service by using TLS or similar protocols requires strong cryptography with protection for keys and secrets that are based in hardware. Most other security mechanisms provided by those protocols are built on cryptographic concepts. Proper cryptographic support is the most critical consideration when you develop a secure connected IoT device.
+The following sections discuss the key components for cryptographic security.
-## Embedded security components - cryptography
+### True random hardware-based entropy source
-Cryptography is a foundation of security in networked devices. However, networking protocols such as TLS rely on cryptography to protect and authenticate information traveling over a network or the public internet. A secure IoT device that connects to a server or cloud service using Transport Layer Security (TLS) or similar protocols requires strong cryptography with protection for keys and secrets that are based in hardware. Most other security mechanisms provided by those protocols are built on cryptographic concepts. Therefore, having proper cryptographic support is the single most critical consideration in developing a secure connected IoT device. The following sections overview the key components for cryptographic security.
+Any cryptographic application using TLS or cryptographic operations that require random values for keys or secrets must have an approved random entropy source. Without proper true randomness, statistical methods can be used to derive keys and secrets much faster than brute-force attacks, weakening otherwise strong cryptography.
-### True random hardware-based entropy source
+Modern embedded devices should support some form of cryptographic random number generator (CRNG) or "true" random number generator (TRNG). CRNGs and TRNGs are used to feed the random number generator that's passed into a TLS application.
-Any cryptographic application using TLS or cryptographic operations that require random values for keys or secrets must have an approved random entropy source. Without proper true randomness, statistical methods can be used to derive keys and secrets much faster than brute-force attacks, weakening otherwise strong cryptography. Modern embedded devices should support some form of Cryptographic Random Number Generator (CRNG) or "True" Random Number Generator (TRNG) that can be used to feed the random number generator that is passed into a TLS application.
+Hardware random number generators (HRNGs) supply some of the best sources of entropy. HRNGs typically generate values based on statistically random noise signals generated in a physical process rather than from a software algorithm.
-Hardware Random Number Generators (HRNG) supply some of the best sources of entropy. HRNGs typically generate values based on statistically random noise signals generated in a physical process rather than from a software algorithm. Many government agencies and standards bodies around the world provide guidelines for random number generators. Some examples are the National Institute of Standards and Technology (NIST) in the US, the National Cybersecurity Agency of France (ANSSI) in France, and the Federal Office for Information Security (BSI) in Germany.
+Government agencies and standards bodies around the world provide guidelines for random number generators. Some examples are the National Institute of Standards and Technology (NIST) in the US, the National Cybersecurity Agency of France, and the Federal Office for Information Security in Germany.
**Hardware**: True entropy can only come from hardware sources. There are various methods to obtain cryptographic randomness, but all require physical processes to be considered secure.
-**Azure RTOS**: Azure RTOS uses random numbers for cryptography and TLS. For more information, see the User Guide for each protocol in the [Azure RTOS NetX Duo documentation](/azure/rtos/netx-duo/overview-netx-duo).
+**Azure RTOS**: Azure RTOS uses random numbers for cryptography and TLS. For more information, see the user guide for each protocol in the [Azure RTOS NetX Duo documentation](/azure/rtos/netx-duo/overview-netx-duo).
-**Application**: A random number function must be provided by the application developer and linked into the application, including Azure RTOS.
+**Application**: You must provide a random number function and link it into your application, including Azure RTOS.
> [!IMPORTANT]
-> The C library function `rand()` does NOT utilize a hardware-based RNG by default. It's critical to assure that a proper random routine is used. The setup will be specific to your hardware platform.
+> The C library function `rand()` does *not* use a hardware-based RNG by default. It's critical to ensure that a proper random routine is used. The setup is specific to your hardware platform.
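
As an illustration only, a hardware-backed replacement might look like the following sketch. The `hw_trng_read32` call is a hypothetical placeholder for your MCU vendor's TRNG driver; the hook that your TLS stack uses to obtain randomness is described in the protocol user guides.

```c
#include <stdint.h>

/* Hypothetical vendor API that reads one 32-bit word of true entropy
   from the hardware TRNG peripheral. Replace it with the driver your
   MCU vendor provides. */
extern uint32_t hw_trng_read32(void);

/* rand()-shaped wrapper backed by hardware entropy instead of the
   deterministic C library PRNG. */
int hardware_rand(void)
{
    /* Mask to the non-negative range that rand()-style callers expect. */
    return (int)(hw_trng_read32() & 0x7FFFFFFFu);
}
```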
### Real-time capability
-Real-time capability is primarily needed for checking the expiration date of X.509 certificates. TLS also uses timestamps as part of its session negotiation, and certain applications may require accurate time reporting. There are many options for obtaining accurate time such as a Real-Time Clock (RTC) device, Network Time Protocol (NTP) to obtain time over a network, and a Global Positioning System (GPS), which includes timekeeping.
+Real-time capability is primarily needed for checking the expiration date of X.509 certificates. TLS also uses timestamps as part of its session negotiation. Certain applications might require accurate time reporting. Options for obtaining accurate time include:
+
+- A real-time clock (RTC) device.
+- The Network Time Protocol (NTP) to obtain time over a network.
+- A Global Positioning System (GPS), which includes timekeeping.
> [!IMPORTANT]
-> Having an accurate time is nearly as critical as having a TRNG for secure applications that use TLS and X.509.
+> Accurate time is nearly as critical as a TRNG for secure applications that use TLS and X.509.
+
+Many devices use a hardware RTC backed by synchronization over a network service or GPS. Devices might also rely solely on an RTC or on a network service or GPS. Regardless of the implementation, take measures to prevent drift.
-Many devices will use a hardware RTC backed by synchronization over a network service or GPS. Devices may also rely solely on an RTC or solely on a network service (or GPS). Regardless of the implementation, measures should be taken to prevent drift, protect hardware components from tampering, and guard against spoofing attacks when using network services or GPS. If an attacker can spoof time, they can induce your device to accept expired certificates.
+You also need to protect hardware components from tampering. And you need to guard against spoofing attacks when you use network services or GPS. If an attacker can spoof time, they can induce your device to accept expired certificates.
-**Hardware**: If you implement a hardware RTC and NTP or other network-based solutions are unavailable for synching, the RTC should:
+**Hardware**: If you implement a hardware RTC and NTP or other network-based solutions are unavailable for syncing, the RTC should:
-- Be accurate enough for certificate expiration checks (hour resolution or better).
+- Be accurate enough for certificate expiration checks of an hour resolution or better.
- Be securely updatable or resistant to drift over the lifetime of the device.
- Maintain time across power failures or resets.
-An invalid time will disrupt all TLS communication, possibly rendering the device unreachable.
+An invalid time disrupts all TLS communication. The device might even be rendered unreachable.
-**Azure RTOS**: Azure RTOS TLS uses time data for several security-related functions, but the application developer must provide a function for retrieving time data from the RTC or network. For more information, see the [NetX Secure TLS user guide](/azure/rtos/netx-duo/netx-secure-tls/chapter1).
+**Azure RTOS**: Azure RTOS TLS uses time data for several security-related functions. You must provide a function for retrieving time data from the RTC or network. For more information, see the [NetX secure TLS user guide](/azure/rtos/netx-duo/netx-secure-tls/chapter1).
-**Application**: Depending on the time source used, the application may be required to initialize the functionality so that TLS can properly obtain the time information.
+**Application**: Depending on the time source used, your application might be required to initialize the functionality so that TLS can properly obtain the time information.
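
As a minimal sketch, assuming a battery-backed RTC, the time function might look like the following. The `rtc_get_unix_seconds` call is a hypothetical vendor API; check the NetX secure TLS user guide for the exact callback signature your configuration expects.

```c
#include <stdint.h>

/* Hypothetical vendor API: seconds since the UNIX epoch from a
   battery-backed RTC that's periodically synced over NTP or GPS. */
extern uint32_t rtc_get_unix_seconds(void);

/* Time source that the TLS layer can call when it checks X.509
   validity periods or builds handshake timestamps. */
uint32_t application_get_unix_time(void)
{
    return rtc_get_unix_seconds();
}
```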
### Use approved cryptographic routines with strong key sizes
-There are a wide variety of cryptographic routines available today. When you design an application, research the cryptographic routines that you'll need and choose the strongest (largest) keys possible. Look to NIST or other organizations that provide guidance on appropriate cryptography for different applications.
+Many cryptographic routines are available today. When you design an application, research the cryptographic routines that you'll need. Choose the strongest and largest keys possible. Look to NIST or other organizations that provide guidance on appropriate cryptography for different applications. Consider these factors:
-- Choose key sizes that are appropriate for your application. Rivest Shamir Adleman encryption (RSA) is still acceptable in some organizations but only if the key is 2048 bits or larger. For the Advanced Encryption Standard (AES), minimum key sizes of 128 bits are often required.
-- Choose modern, widely accepted algorithms and choose cipher modes that provide the highest level of security available for your application.
-- Avoid using algorithms that are considered obsolete like the Data Encryption Standard (DES) and the Message Digest Algorithm 5 (MD5).
-- Consider the lifetime of your application, and adjust your choices to account for continued reduction in the security of current routines and key sizes.
+- Choose key sizes that are appropriate for your application. Rivest-Shamir-Adleman (RSA) encryption is still acceptable in some organizations, but only if the key is 2048 bits or larger. For the Advanced Encryption Standard (AES), minimum key sizes of 128 bits are often required.
+- Choose modern, widely accepted algorithms. Choose cipher modes that provide the highest level of security available for your application.
+- Avoid using algorithms that are considered obsolete like the Data Encryption Standard and the Message Digest Algorithm 5.
+- Consider the lifetime of your application. Adjust your choices to account for continued reduction in the security of current routines and key sizes.
- Consider making key sizes and algorithms updatable to adjust to changing security requirements.
-- Constant-time cryptographic techniques should be used whenever possible to mitigate timing attack vulnerabilities.
+- Use constant-time cryptographic techniques whenever possible to mitigate timing attack vulnerabilities.
-**Hardware**: If you're using hardware-based cryptography (which is recommended), your choices may be limited. Choose hardware that exceeds your minimum cryptographic and security needs and use the strongest routines and keys available on that platform.
+**Hardware**: If you use hardware-based cryptography, your choices might be limited. Choose hardware that exceeds your minimum cryptographic and security needs. Use the strongest routines and keys available on that platform.
**Azure RTOS**: Azure RTOS provides drivers for select cryptographic hardware platforms and software implementations for certain routines. Adding new routines and key sizes is straightforward.
-**Application**: Applications that require cryptographic operations should also use the strongest approved routines possible.
+**Application**: If your application requires cryptographic operations, use the strongest approved routines possible.
### Hardware-based cryptography acceleration
-Cryptography implemented in hardware for acceleration is there to unburden CPU cycles and almost always requires software that applies it to achieve security goals. Timing attacks exploit the duration of a cryptographic operation to derive information about a secret key. When you perform cryptographic operations in constant time, regardless of the key or data properties, hardware cryptographic peripherals prevent this kind of attack. Every platform will likely be different as there's no accepted standard for cryptographic hardware (other than the accepted cryptographic algorithms like AES and RSA).
+Cryptography implemented in hardware for acceleration is there to unburden CPU cycles. It almost always requires software that applies it to achieve security goals. Timing attacks exploit the duration of a cryptographic operation to derive information about a secret key.
+
+When you perform cryptographic operations in constant time, regardless of the key or data properties, hardware cryptographic peripherals prevent this kind of attack. Every platform is likely to be different. There's no accepted standard for cryptographic hardware. Exceptions are the accepted cryptographic algorithms like AES and RSA.
> [!IMPORTANT]
> Hardware cryptographic acceleration doesn't necessarily equate to enhanced security. For example:
>
-> - Some cryptographic accelerators implement only the Electronic Codebook (ECB) mode of the cipher, and it's left to the software developer to implement more secure modes like Galois/Counter Mode (GCM), Counter with CBC-MAC (CCM), or Cipher Block Chaining (CBC). ECB is not semantically secure.
+> - Some cryptographic accelerators implement only the Electronic Codebook (ECB) mode of the cipher. You must implement more secure modes like Galois/Counter Mode, Counter with CBC-MAC, or Cipher Block Chaining (CBC). ECB isn't semantically secure.
>
-> - Cryptographic accelerators often leave key protection to the software developer.
+> - Cryptographic accelerators often leave key protection to the developer.
>
-Combining hardware cryptography acceleration that implements secure cipher modes with hardware-based protection for keys provides a higher level of security for cryptographic operations.
+Combine hardware cryptography acceleration that implements secure cipher modes with hardware-based protection for keys. The combination provides a higher level of security for cryptographic operations.
+
+**Hardware**: There are few standards for hardware cryptographic acceleration, so each platform varies in available functionality. For more information, consult with your microcontroller unit (MCU) vendor.
-**Hardware**: There are few standards for hardware cryptographic acceleration so each platform will vary in available functionality. For more information, consult with your Micro Controller Unit (MCU) vendor.
+**Azure RTOS**: Azure RTOS provides drivers for select cryptographic hardware platforms. For more information on hardware-based cryptography, check your Azure RTOS cryptography documentation.
-**Azure RTOS**: Azure RTOS provides drivers for select cryptographic hardware platforms. Check your Azure RTOS Cryptography documentation for more information on hardware-based cryptography.
+**Application**: If your application requires cryptographic operations, make use of all hardware-based cryptography that's available.
-**Application**: Applications that require cryptographic operations should make use of all hardware-based cryptography that is available.
+## Embedded security components: Device identity
-## Embedded security components – device identity
+In IoT systems, the notion that each endpoint represents a unique physical device challenges some of the assumptions that are built into the modern internet. As a result, a secure IoT device must be able to uniquely identify itself. If not, an attacker could imitate a valid device to steal data, send fraudulent information, or tamper with device functionality.
+Confirm that each IoT device that connects to a cloud service identifies itself in a way that can't be easily bypassed.
-In IoT systems, the notion that each endpoint represents a unique physical device challenges some of the assumptions that are built into the modern internet. As a result, a secure IoT device must be able to uniquely identify itself or an attacker could imitate a valid device to steal data, send fraudulent information, or tamper with device functionality. Confirm that each IoT device that connects to a cloud service identifies itself in a way that can't be easily bypassed. The following sections overview the key security components for device identity.
+The following sections discuss the key security components for device identity.
### Unique verifiable device identifier
-A unique device identifier (device ID) allows a cloud service to verify the identity of a specific physical device and to verify that the device belongs to a particular group. It's the digital equivalent of a physical serial number, but it must be globally unique and protected. If the device ID is compromised, there's no way to distinguish between the physical device it represents and a fraudulent client.
+A unique device identifier is known as a device ID. It allows a cloud service to verify the identity of a specific physical device. It also verifies that the device belongs to a particular group. A device ID is the digital equivalent of a physical serial number. It must be globally unique and protected. If the device ID is compromised, there's no way to distinguish between the physical device it represents and a fraudulent client.
-In most modern connected devices, the device ID will be tied to cryptography. For example:
+In most modern connected devices, the device ID is tied to cryptography. For example:
-- It may be a private-public key pair, where the private key is globally unique and associated only with the device.
-- It may be a private-public key pair, where the private key is associated with a set of devices and is used in combination with another identifier that's unique to the device.
-- It may be cryptographic material that is used to derive private keys unique to the device.
+- It might be a private-public key pair, where the private key is globally unique and associated only with the device.
+- It might be a private-public key pair, where the private key is associated with a set of devices and is used in combination with another identifier that's unique to the device.
+- It might be cryptographic material that's used to derive private keys unique to the device.
-Regardless of implementation, the device ID and any associated cryptographic material must be hardware-protected, for example by using a Hardware Security Module (HSM).
+Regardless of implementation, the device ID and any associated cryptographic material must be hardware protected. For example, use a hardware security module (HSM).
-While the device ID can be used for client authentication with a cloud service or server, it's highly advisable to split the device ID from operational certificates typically used for such purposes. To lessen the attack surface, operational certificates should be relatively short-lived, and the public portion of the device ID shouldn't be widely distributed. Instead, the device ID can be used to sign and/or derive private keys associated with operational certificates.
+The device ID can be used for client authentication with a cloud service or server. It's best to split the device ID from operational certificates typically used for such purposes. To lessen the attack surface, operational certificates should be short-lived. The public portion of the device ID shouldn't be widely distributed. Instead, the device ID can be used to sign or derive private keys associated with operational certificates.
> [!NOTE]
-> A device ID is tied to a physical device (usually in a cryptographic manner) and provides a root of trust. It can be thought of as a "birth certificate" for the device--it represents a unique identity that applies to the entire lifespan of the device. Other forms of IDs such as for attestation or operational identification are designed to be updated periodically--it is equivalent to a driver's license (following the birth certificate analogy)--it frequently identifies the owner, and security is maintained by requiring periodic updates or renewals. Just like a birth certificate can be provided to procure a driver's license, the device ID can be used to procure an operational ID. Note that within IoT, both the device ID and operational ID are frequently provided as X.509 certificates, utilizing the associated private keys to cryptographically tie the IDs to the specific hardware.
+> A device ID is tied to a physical device, usually in a cryptographic manner. It provides a root of trust. It can be thought of as a "birth certificate" for the device. A device ID represents a unique identity that applies to the entire lifespan of the device.
+>
+> Other forms of IDs, such as for attestation or operational identification, are updated periodically, like a driver's license. They frequently identify the owner. Security is maintained by requiring periodic updates or renewals.
+>
+> Just like a birth certificate is used to get a driver's license, the device ID is used to get an operational ID. Within IoT, both the device ID and operational ID are frequently provided as X.509 certificates. They use the associated private keys to cryptographically tie the IDs to the specific hardware.
-**Hardware**: A device ID must be tied to the hardware and must not be easily replicated. You should require hardware-based cryptographic features such as those found in an HSM. Some MCU devices may provide similar functionality.
+**Hardware**: Tie a device ID to the hardware. It must not be easily replicated. Require hardware-based cryptographic features like those found in an HSM. Some MCU devices might provide similar functionality.
-**Azure RTOS**: No specific Azure RTOS features use device IDs. However, communication to cloud services via TLS may require an X.509 certificate that is tied to the device ID.
+**Azure RTOS**: No specific Azure RTOS features use device IDs. Communication to cloud services via TLS might require an X.509 certificate that's tied to the device ID.
-**Application**: No specific features required for user applications, but a unique device ID may be required for certain applications.
+**Application**: No specific features are required for user applications. A unique device ID might be required for certain applications.
### Certificate management
-If your device utilizes a certificate from a Public Key Infrastructure (PKI), your application will need the ability to update those certificates periodically. This is true for both for the device and any trusted certificates used for verifying servers. The more frequent the update, the more secure your application will be.
+If your device uses a certificate from a PKI, your application needs to update those certificates periodically. This is true for the device and any trusted certificates used for verifying servers. The more frequent the update, the more secure your application will be.
-**Hardware**: All certificate private keys should be tied to your device. Ideally, the key should be generated internally by the hardware and never exposed to your application. You should mandate the ability to generate X.509 certificate requests on the device.
+**Hardware**: Tie all certificate private keys to your device. Ideally, the key is generated internally by the hardware and is never exposed to your application. Mandate the ability to generate X.509 certificate requests on the device.
-**Azure RTOS**: Azure RTOS TLS provides basic X.509 certificate support. Certificate Revocation Lists (CRLs) and policy parsing are supported but require manual management in your application without a supporting Software Development Kit (SDK).
+**Azure RTOS**: Azure RTOS TLS provides basic X.509 certificate support. Certificate revocation lists (CRLs) and policy parsing are supported. They require manual management in your application without a supporting SDK.
-**Application**: Make use of CRLs or Online Certificate Status Protocol (OCSP) to validate that certificates haven't been revoked by your PKI. Make sure to enforce X.509 policies, including validity periods and expiration dates, as required by your PKI.
+**Application**: Make use of CRLs or Online Certificate Status Protocol to validate that certificates haven't been revoked by your PKI. Make sure to enforce X.509 policies, validity periods, and expiration dates required by your PKI.
### Attestation
-Some devices provide a secret key or value that is uniquely loaded (usually using permanent fuses) into each specific device for the purposes of checking ownership or status of the device. Whenever possible, this hardware-based value should be utilized, though not necessarily directly, as part of any process where the device needs to identify itself to a remote host.
+Some devices provide a secret key or value that's uniquely loaded into each specific device. Usually, permanent fuses are used. The secret key or value is used to check the ownership or status of the device. Whenever possible, it's best to use this hardware-based value, though not necessarily directly. Use it as part of any process where the device needs to identify itself to a remote host.
-This should be coupled with a secure boot mechanism to prevent fraudulent use of the secret ID. Depending on the cloud services being used and their PKI, the device ID may be tied to an X.509 certificate. However, whenever possible the attestation device ID should be separate from "operational" certificates used to authenticate a device.
+This value is coupled with a secure boot mechanism to prevent fraudulent use of the secret ID. Depending on the cloud services being used and their PKI, the device ID might be tied to an X.509 certificate. Whenever possible, the attestation device ID should be separate from "operational" certificates used to authenticate a device.
-Device status in attestation scenarios can include information like firmware version, life-cycle state (for example, running vs. debug), component health, or any number of other factors that will help a service determine the device's state. For example, device attestation is often involved in OTA firmware update protocols to ensure that the correct updates are delivered to the intended device.
+Device status in attestation scenarios can include information to help a service determine the device's state. Information can include firmware version and component health. It can also include life-cycle state, for example, running versus debugging. Device attestation is often involved in OTA firmware update protocols to ensure that the correct updates are delivered to the intended device.
> [!NOTE]
-> "Attestation" is distinct from "authentication". Attestation uses an external authority to determine whether a device belongs to a particular group using cryptography. "Authentication" uses cryptography to verify that a host (device) owns a private key in a challenge-response process, such as the TLS handshake.
+> "Attestation" is distinct from "authentication." Attestation uses an external authority to determine whether a device belongs to a particular group by using cryptography. Authentication uses cryptography to verify that a host (device) owns a private key in a challenge-response process, such as the TLS handshake.
-**Hardware**: The selected hardware must provide functionality to provide a secret unique identifier. This functionality is usually tied into cryptographic hardware like a TPM or HSM and requires a specific API for attestation services.
+**Hardware**: The selected hardware must provide functionality to provide a secret unique identifier. This functionality is usually tied into cryptographic hardware like a TPM or HSM. A specific API is required for attestation services.
**Azure RTOS**: No specific Azure RTOS functionality is required.
-**Application**: The user application may be required to implement logic to tie the hardware features to whatever attestation the chosen cloud service(s) requires.
+**Application**: The user application might be required to implement logic to tie the hardware features to whatever attestation the chosen cloud service requires.
-## Embedded security components – memory protection
+## Embedded security components: Memory protection
-Many successful hacking attacks utilize buffer overflow errors to gain access to privileged information or even to execute arbitrary code on a device. Numerous technologies and languages have been created to battle overflow problems, but the fact remains that system-level embedded development requires low-level programming. As a result, most embedded development is done using C or assembly language. These languages lack modern memory protection schemes but allow for less restrictive memory manipulation. Given the lack of built-in protection, the Azure RTOS developer must be vigilant about memory corruption. The following recommendations leverage functionality provided by some MCU platforms and Azure RTOS itself to help mitigate the impact of overflow errors on security. The following sections overview the key security components for memory protection.
+Many successful hacking attacks use buffer overflow errors to gain access to privileged information or even to execute arbitrary code on a device. Numerous technologies and languages have been created to battle overflow problems. Because system-level embedded development requires low-level programming, most embedded development is done by using C or assembly language.
-### Protection against reading/writing memory
+These languages lack modern memory protection schemes but allow for less restrictive memory manipulation. Because built-in protection is lacking, you must be vigilant about memory corruption. The following recommendations make use of functionality provided by some MCU platforms and Azure RTOS itself to help mitigate the impact of overflow errors on security.
-An MCU may provide a latching mechanism that enables a tamper-resistant state, either by preventing reading of sensitive data or by locking areas of memory from being overwritten. This may be part of, or in addition to, a Memory Protection Unit (MPU) or a Memory Management Unit (MMU).
+The following sections discuss the key security components for memory protection.
+
+### Protection against reading or writing memory
+
+An MCU might provide a latching mechanism that enables a tamper-resistant state. It works either by preventing reading of sensitive data or by locking areas of memory from being overwritten. This technology might be part of, or in addition to, a Memory Protection Unit (MPU) or a Memory Management Unit (MMU).
**Hardware**: The MCU must provide the appropriate hardware and interface to use memory protection.
-**Azure RTOS**: If the memory protection mechanism is not an MMU or MPU, Azure RTOS doesn't require any specific support. For more advanced memory protection, Azure RTOS ThreadX Modules may be used to provide detailed control over memory spaces for threads and other RTOS control structures.
+**Azure RTOS**: If the memory protection mechanism isn't an MMU or MPU, Azure RTOS doesn't require any specific support. For more advanced memory protection, you can use Azure RTOS ThreadX Modules for detailed control over memory spaces for threads and other RTOS control structures.
-**Application**: The application developer may be required to enable memory protection when the device is first booted – refer to secure boot documentation. For simple mechanisms (not MMU or MPU), the application may place sensitive data (for example, certificates) into the protected memory region and access it using the hardware platform APIs.
+**Application**: Application developers might be required to enable memory protection when the device is first booted. For more information, see secure boot documentation. For simple mechanisms that aren't MMU or MPU, the application might place sensitive data like certificates into the protected memory region. The application can then access the data by using the hardware platform APIs.
### Application memory isolation
-If your hardware platform has a Memory Management Unit (MMU) or Memory Protection Unit (MPU), then those features can be utilized to isolate the memory spaces used by individual threads or processes. More sophisticated mechanisms also exist, such as TrustZone that provide additional protections above and beyond what a simple MPU can do. This isolation can thwart attackers from using a hijacked thread or process to corrupt or view memory in another thread or process.
+If your hardware platform has an MMU or MPU, those features can be used to isolate the memory spaces used by individual threads or processes. Sophisticated mechanisms like TrustZone also provide protections beyond what a simple MPU can do. This isolation can thwart attackers from using a hijacked thread or process to corrupt or view memory in another thread or process.
**Hardware**: The MCU must provide the appropriate hardware and interface to use memory protection.
-**Azure RTOS**: Azure RTOS allows for 'ThreadX Modules' that are built independently/separately and are provided with their own instruction and data area addresses at run-time. Memory protection can then be enabled such that a context switch to a thread in a module will disallow code from accessing memory outside of the assigned area.
+**Azure RTOS**: Azure RTOS allows for ThreadX Modules that are built independently or separately and are provided with their own instruction and data area addresses at runtime. Memory protection can then be enabled so that a context switch to a thread in a module disallows code from accessing memory outside of the assigned area.
> [!NOTE]
-> TLS and Message Queuing Telemetry Transport (MQTT) aren't yet supported from ThreadX Modules.
+> TLS and Message Queuing Telemetry Transport (MQTT) aren't yet supported from ThreadX Modules.
-**Application**: The application developer may be required to enable memory protection when the device is first booted – refer to secure boot and ThreadX Modules documentation. Note: Use of ThreadX Modules may introduce additional memory and CPU overhead.
+**Application**: You might be required to enable memory protection when the device is first booted. For more information, see secure boot and ThreadX Modules documentation. Use of ThreadX Modules might introduce more memory and CPU overhead.
### Protection against execution from RAM
-Many MCU devices contain an internal "program flash" where the application firmware is stored. The application code is sometimes run directly from the flash hardware, using the RAM only for data. If the MCU allows execution of code from RAM, look for a way to disable that feature. Many attacks will try to modify the application code in some way, but if the attacker can't execute code from RAM it becomes more difficult to compromise the device. Placing your application in flash makes it more difficult to change because of the nature of flash technology (unlock/erase/write process) and increases the challenge for an attacker. It's not a perfect solution, but, to provide for renewable security, the flash needs to be updatable. A completely read-only code section would be better at preventing attacks on executable code but would prevent updating.
+Many MCU devices contain an internal "program flash" where the application firmware is stored. The application code is sometimes run directly from the flash hardware and uses the RAM only for data.
+
+If the MCU allows execution of code from RAM, look for a way to disable that feature. Many attacks try to modify the application code in some way. If the attacker can't execute code from RAM, it's more difficult to compromise the device.
+
+Placing your application in flash makes it more difficult to change. Flash technology requires an unlock, erase, and write process. Although flash increases the challenge for an attacker, it's not a perfect solution. To provide for renewable security, the flash needs to be updatable. A completely read-only code section is better at preventing attacks on executable code, but it prevents updating.
-**Hardware**: Presence of a program flash used for code storage and execution. If running in RAM is required, then consider leveraging an MMU or MPU (if available) to protect from writing to the executable memory space.
+**Hardware**: Presence of a program flash used for code storage and execution. If running in RAM is required, consider using an MMU or MPU, if available. Use of an MMU or MPU protects from writing to the executable memory space.
**Azure RTOS**: No specific features.
-**Application**: Application may need to disable flash writing during secure boot depending on the hardware.
+**Application**: The application might need to disable flash writing during secure boot depending on the hardware.
### Memory buffer checking
-Avoiding buffer overflow problems is a primary concern for code running on connected devices. Applications written in unmanaged languages like C are particularly susceptible to buffer overflow issues. Safe coding practices can alleviate some of the problem but whenever possible try to incorporate buffer checking into your application. You may be able to make use of built-in features of the selected hardware platform, third-party libraries, tools, and, in some cases, features in the hardware itself that provide a mechanism for detecting or preventing overflow conditions.
+Avoiding buffer overflow problems is a primary concern for code running on connected devices. Applications written in unmanaged languages like C are susceptible to buffer overflow issues. Safe coding practices can alleviate some of the problems.
-**Hardware**: Some platforms may provide memory checking functionality. Consult with your MCU vendor for more information.
+Whenever possible, try to incorporate buffer checking into your application. You might be able to make use of built-in features of the selected hardware platform, third-party libraries, and tools. Even features in the hardware itself can provide a mechanism for detecting or preventing overflow conditions.
-**Azure RTOS**: No specific Azure RTOS functionality provided.
+**Hardware**: Some platforms might provide memory checking functionality. Consult with your MCU vendor for more information.
+
+**Azure RTOS**: No specific Azure RTOS functionality is provided.
**Application**: Follow good coding practice by requiring applications to always supply buffer size or the number of elements in an operation. Avoid relying on implicit terminators such as NULL. With a known buffer size, the program can check bounds during memory or array operations, such as when calling APIs like `memcpy`. Try to use safe versions of APIs like `memcpy_s`.
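
As a minimal sketch of that practice, the hypothetical helper below makes the destination capacity an explicit parameter and refuses a copy that would overflow, which is the same guarantee `memcpy_s` provides where the optional C11 Annex K library exists:

```c
#include <stddef.h>
#include <string.h>

/* Copy 'len' bytes into 'dest' only when the caller-supplied capacity
   allows it. Returns 0 on success, -1 if the copy would overflow. */
int safe_copy(void *dest, size_t dest_size, const void *src, size_t len)
{
    if (dest == NULL || src == NULL || len > dest_size)
    {
        return -1;  /* Refuse the copy instead of overflowing. */
    }

    memcpy(dest, src, len);
    return 0;
}
```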
-### Enable run-time stack checking
+### Enable runtime stack checking
-Preventing stack overflow is a primary security concern for any application. Azure RTOS has some stack checking features that should be utilized whenever possible. These features are covered in the Azure RTOS ThreadX User Guide.
+Preventing stack overflow is a primary security concern for any application. Whenever possible, use Azure RTOS stack checking features. These features are covered in the Azure RTOS ThreadX user guide.
-**Hardware**: Some MCU platform vendors may provide hardware-based stack checking. Use any functionality that is available.
+**Hardware**: Some MCU platform vendors might provide hardware-based stack checking. Use any functionality that's available.
**Azure RTOS**: Azure RTOS ThreadX provides some stack checking functionality that can be optionally enabled at compile time. For more information, see the [Azure RTOS ThreadX documentation](/azure/rtos/threadx/).
-**Application**: Certain compilers such as IAR also have "stack canary" support that helps to catch stack overflow conditions. Check your tools to see what options are available and enable them if possible.
+**Application**: Certain compilers such as IAR also have "stack canary" support that helps to catch stack overflow conditions. Check your tools to see what options are available and enable them if possible.
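
As a minimal sketch, assuming your ThreadX library is built with `TX_ENABLE_STACK_CHECKING` defined, the application can register a notification handler that runs when an overflow is detected. The recovery action shown is illustrative only:

```c
#include "tx_api.h"

/* Invoked by ThreadX when stack checking detects a corrupted or
   overflowed thread stack. */
static VOID stack_error_handler(TX_THREAD *thread_ptr)
{
    /* Illustrative response: stop the offending thread. A production
       device might log the event and force a controlled reset instead. */
    tx_thread_terminate(thread_ptr);
}

VOID application_enable_stack_checking(VOID)
{
    /* Register the notification callback with the ThreadX kernel. */
    tx_thread_stack_error_notify(stack_error_handler);
}
```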
+
+## Embedded security components: Secure boot and firmware update
+
+ An IoT device, unlike a traditional embedded device, is often connected over the internet to a cloud service for monitoring and data gathering. As a result, it's nearly certain that the device will be probed in some way. Probing can lead to an attack if a vulnerability is found.
+
+A successful attack might result in the discovery of an unknown vulnerability that compromises the device. Other devices of the same kind could also be compromised. For this reason, it's critical that an IoT device can be updated quickly and easily. The firmware image itself must be verified because if an attacker can load a compromised image onto a device, that device is lost.
-## Embedded security components – secure boot and firmware update
+The solution is to pair a secure boot mechanism with remote firmware update capability. This capability is also called an OTA update. Secure boot verifies that a firmware image is valid and trusted. An OTA update mechanism allows updates to be quickly and securely deployed to the device.
- An IoT device, unlike a traditional embedded device, will often be connected over the internet to a cloud service for monitoring and data gathering. As a result, it's nearly certain that the device will be probed in some way, which could lead to an attack if a vulnerability is found. A successful attack may result in the discovery of an unknown vulnerability that further compromises the device and, more importantly, other devices of the same kind. For this reason, it's critical that an IoT device can be updated quickly and easily. This means that the firmware image itself must be verified, because if an attacker can load a compromised image onto a device then that device is lost. The solution is to pair a secure boot mechanism with remote firmware update (also called Over the Air, or OTA, update) capability. Secure boot verifies that a firmware image is valid and trusted, while an OTA update mechanism allows updates to be quickly and securely deployed to the device. The following sections overview the key security components for secure boot and firmware update.
+The following sections discuss the key security components for secure boot and firmware update.
### Secure boot
-It is vital that a device can be proven to be running valid firmware upon reset. Secure boot prevents the device from running untrusted or modified firmware images. Secure boot mechanisms are tied to the hardware platform and validate the firmware image against internally protected measurements before loading the application. If validation fails, the device will refuse to boot the corrupted image.
+It's vital that a device can prove it's running valid firmware upon reset. Secure boot prevents the device from running untrusted or modified firmware images. Secure boot mechanisms are tied to the hardware platform. They validate the firmware image against internally protected measurements before loading the application. If validation fails, the device refuses to boot the corrupted image.
-**Hardware**: MCU vendors may provide their own proprietary secure boot mechanisms as secure boot is tied to the hardware.
+**Hardware**: MCU vendors might provide their own proprietary secure boot mechanisms because secure boot is tied to the hardware.
-**Azure RTOS**: No specific Azure RTOS functionality is required for secure boot. There are third-party commercial vendors that offer secure boot products.
+**Azure RTOS**: No specific Azure RTOS functionality is required for secure boot. Third-party commercial vendors offer secure boot products.
-**Application**: The application may be affected by secure boot if over-the-air updates are enabled because the application itself may need to be responsible for retrieving and loading new firmware images (OTA update is tied to secure boot). The application will also need to be built with versioning and code-signing to support updates with secure boot.
+**Application**: The application might be affected by secure boot if OTA updates are enabled because the application itself might need to be responsible for retrieving and loading new firmware images. OTA update is tied to secure boot. You need to build the application with versioning and code-signing to support updates with secure boot.
### Firmware or OTA update
-An OTA update, sometimes referred to as "firmware update", involves updating the firmware image on your device to a new version to add features or fix bugs. OTA update is important for security because any vulnerabilities that are discovered must be patched as soon as possible.
+An OTA update, sometimes referred to as a firmware update, involves updating the firmware image on your device to a new version to add features or fix bugs. OTA update is important for security because vulnerabilities that are discovered must be patched as soon as possible.
> [!NOTE]
-> OTA updates MUST be tied to secure boot and code signing, or it is impossible to validate that new images aren't compromised.
+> OTA updates *must* be tied to secure boot and code signing. Otherwise, it's impossible to validate that new images aren't compromised.
-**Hardware**: Various implementations for OTA update exist, and some MCU vendors provide OTA update solutions that are tied to their hardware. Some OTA update mechanisms can also utilize extra storage space (for example, flash) for rollback protection and to provide uninterrupted application functionality during update downloads.
+**Hardware**: Various implementations for OTA update exist. Some MCU vendors provide OTA update solutions that are tied to their hardware. Some OTA update mechanisms can also use extra storage space, for example, flash. The storage space is used for rollback protection and to provide uninterrupted application functionality during update downloads.
**Azure RTOS**: No specific Azure RTOS functionality is required for OTA updates.
-**Application**: Some third-party software solutions for OTA update also exist and may be utilized by an Azure RTOS application. The application will also need to be built with versioning and code-signing to support updates with secure boot.
+**Application**: Third-party software solutions for OTA update also exist and might be used by an Azure RTOS application. You need to build the application with versioning and code-signing to support updates with secure boot.
### Rollback or downgrade protection
-Secure boot and OTA update must work together to provide an effective firmware update mechanism. Secure boot must be able to ingest a new firmware image from the OTA mechanism and mark the new version as being trusted. The OTA/Secure boot mechanism must also protect against downgrade attacks. If an attacker can force a rollback to an earlier trusted version that has known vulnerabilities, then the OTA/secure boot fails to provide proper security.
+Secure boot and OTA update must work together to provide an effective firmware update mechanism. Secure boot must be able to ingest a new firmware image from the OTA mechanism and mark the new version as being trusted.
+
+The OTA and secure boot mechanism must also protect against downgrade attacks. If an attacker can force a rollback to an earlier trusted version that has known vulnerabilities, the OTA and secure boot mechanism fails to provide proper security.
Downgrade protection also applies to revoked certificates or credentials.
-**Hardware**: No specific hardware functionality required (except as part of secure boot, OTA, or certificate management).
+**Hardware**: No specific hardware functionality is required, except as part of secure boot, OTA, or certificate management.
-**Azure RTOS**: No specific Azure RTOS functionality required.
+**Azure RTOS**: No specific Azure RTOS functionality is required.
-**Application**: No specific application support required, depends on requirements for OTA, secure boot, and certificate management.
+**Application**: No specific application support is required; what's needed depends on the requirements for OTA, secure boot, and certificate management.
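To illustrate, downgrade protection often reduces to a version comparison against a monotonic counter held in storage the application can't rewrite. The names below are hypothetical, and the protected-storage mechanism depends on your hardware.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed to read a monotonic minimum-version counter from protected
   storage (for example, fuse bits or a secure element). */
extern uint32_t stored_min_version(void);

/* Reject any OTA image older than the newest version already trusted. */
bool rollback_check(uint32_t candidate_version)
{
    return candidate_version >= stored_min_version();
}
```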
### Code signing
-Make use of any features for signing and verifying code or credential updates. Code signing involves generating a cryptographic hash of the firmware or application image. That hash is used to verify the integrity of the image received by the device, usually using a trusted root X.509 certificate to verify the hash signature. This process is usually tied into secure boot and OTA update mechanisms.
+Make use of any features for signing and verifying code or credential updates. Code signing involves generating a cryptographic hash of the firmware or application image. That hash is used to verify the integrity of the image received by the device. Typically, a trusted root X.509 certificate is used to verify the hash signature. This process is usually tied into secure boot and OTA update mechanisms.
-**Hardware**: No specific hardware functionality required (except as part of OTA update or secure boot). Using hardware-based signature verification is recommended if available.
+**Hardware**: No specific hardware functionality is required except as part of OTA update or secure boot. Use hardware-based signature verification if it's available.
-**Azure RTOS**: No specific Azure RTOS functionality required
+**Azure RTOS**: No specific Azure RTOS functionality is required.
-**Application** : Code signing can be is tied to secure boot and OTA update mechanisms to verify the integrity of downloaded firmware images.
+**Application**: Code signing is tied to secure boot and OTA update mechanisms to verify the integrity of downloaded firmware images.
-## Embedded security components – protocols
+## Embedded security components: Protocols
-The following sections overview the key security components for protocols.
+The following sections discuss the key security components for protocols.
### Use the latest version of TLS possible for connectivity
Support current TLS versions:
- TLS 1.2 is currently (as of 2022) the most widely used TLS version.
-
-- TLS 1.3 is the latest TLS version. Finalized in 2018, it adds many security and performance enhancements, but is not yet widely deployed. However, if your application can support TLS 1.3 it's recommended for new applications.
+- TLS 1.3 is the latest TLS version. Finalized in 2018, TLS 1.3 adds many security and performance enhancements. It isn't widely deployed. If your application can support TLS 1.3, we recommend it for new applications.
> [!NOTE]
-> TLS 1.0 and TLS 1.1 are obsolete protocols and shouldn't be used for new application development. They're disabled by default in Azure RTOS.
+> TLS 1.0 and TLS 1.1 are obsolete protocols. Don't use them for new application development. They're disabled by default in Azure RTOS.
**Hardware**: No specific hardware requirements.
-**Azure RTOS**: TLS 1.2 is enabled by default. TLS 1.3 support must be explicitly enabled in Azure RTOS as TLS 1.2 is still the de facto standard.
+**Azure RTOS**: TLS 1.2 is enabled by default. TLS 1.3 support must be explicitly enabled in Azure RTOS because TLS 1.2 is still the de facto standard.
-**Application**: To use TLS with cloud services, a certificate will be required. The certificate must be managed by the application.
+**Application**: To use TLS with cloud services, a certificate is required. The certificate must be managed by the application.
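As a sketch of how this version policy maps to build configuration, the NetX Secure options below control which TLS versions are compiled in. The macro names reflect common Azure RTOS releases; verify them against your release's documentation.

```c
/* nx_user.h (or equivalent compiler defines) -- a configuration sketch. */

/* TLS 1.2 is enabled by default; opt in to TLS 1.3 explicitly. */
#define NX_SECURE_TLS_ENABLE_TLS_1_3

/* Obsolete versions stay disabled unless explicitly defined.
   Leave these undefined:
   NX_SECURE_TLS_ENABLE_TLS_1_0
   NX_SECURE_TLS_ENABLE_TLS_1_1 */
```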
### Use X.509 certificates for TLS authentication
-X.509 certificates are used to authenticate a device to a server and a server to a device. A device certificate is used to prove the identity of a device to a server. Trusted root CA certificates are used by a device to authenticate a server or service to which it connects. The ability to update these certificates is critical as certificates can be compromised and have limited lifespans.
+X.509 certificates are used to authenticate a device to a server and a server to a device. A device certificate is used to prove the identity of a device to a server.
-Using hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status provides the highest level of security.
+Trusted root CA certificates are used by a device to authenticate a server or service to which it connects. The ability to update these certificates is critical. Certificates can be compromised and have limited lifespans.
+
+Use hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status for the highest level of security.
**Hardware**: No specific hardware requirements.
**Azure RTOS**: Azure RTOS TLS provides basic X.509 authentication through TLS and some user APIs for further processing.
-**Application**: Depending on requirements, the application may have to enforce X.509 policies. CRLs should be enforced to ensure revoked certificates are rejected.
+**Application**: Depending on requirements, the application might have to enforce X.509 policies. CRLs should be enforced to ensure revoked certificates are rejected.
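For illustration, the following sketch loads a trusted root CA certificate into a NetX Secure TLS session so the device can authenticate the server it connects to. The certificate data is a placeholder, error handling is minimal, and the API details should be confirmed against your Azure RTOS release.

```c
#include "nx_secure_tls_api.h"

/* DER-encoded root CA certificate, provisioned with the firmware or via a
   managed update path (placeholder data that must remain valid in memory). */
extern const UCHAR root_ca_der[];
extern const USHORT root_ca_der_len;

static NX_SECURE_X509_CERT root_ca_cert;

UINT add_trusted_root(NX_SECURE_TLS_SESSION *tls_session)
{
    UINT status;

    /* Parse the DER certificate into the NetX Secure X.509 structure. */
    status = nx_secure_x509_certificate_initialize(&root_ca_cert,
                 (UCHAR *)root_ca_der, root_ca_der_len,
                 NX_NULL, 0, NX_NULL, 0, NX_SECURE_X509_KEY_TYPE_NONE);
    if (status != NX_SUCCESS)
    {
        return status;
    }

    /* Trust this CA when validating the server's certificate chain. */
    return nx_secure_tls_trusted_certificate_add(tls_session, &root_ca_cert);
}
```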
-### Use strongest cryptographic options and cipher suites for TLS
+### Use the strongest cryptographic options and cipher suites for TLS
-Use the strongest cryptography and cipher suites available for TLS. Having the ability to update TLS and cryptography is also important because over time certain cipher suites and TLS versions may become compromised or discontinued.
+Use the strongest cryptography and cipher suites available for TLS. You need the ability to update TLS and cryptography. Over time, certain cipher suites and TLS versions might become compromised or discontinued.
**Hardware**: If cryptographic acceleration is available, use it.
-**Azure RTOS**: Azure RTOS TLS provides hardware drivers for select devices that support cryptography in hardware. For routines not supported in hardware, the [Azure RTOS cryptography library](/azure/rtos/netx/netx-crypto/chapter1) is designed specifically for embedded systems. A FIPS 140-2 certified library that uses the same code base is also available.
+**Azure RTOS**: Azure RTOS TLS provides hardware drivers for select devices that support cryptography in hardware. For routines not supported in hardware, the [Azure RTOS cryptography library](/azure/rtos/netx/netx-crypto/chapter1) is designed specifically for embedded systems. A FIPS 140-2 certified library that uses the same code base is also available.
-**Application**: Applications using TLS should choose cipher suites that utilize hardware-based cryptography (when available) and the strongest keys available.
+**Application**: Applications that use TLS should choose cipher suites that use hardware-based cryptography when it's available. They should also use the strongest keys available.
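The following sketch shows where the cipher suite choice plugs in: a NetX Secure TLS session is bound to a cipher table when it's created. The `nx_crypto_tls_ciphers` table and the metadata sizing follow common Azure RTOS samples; treat both as assumptions to verify against your release.

```c
#include "nx_secure_tls_api.h"

/* Cipher table supplied with the Azure RTOS cryptography library. Replace
   it with a trimmed table containing only the suites you need. */
extern const NX_SECURE_TLS_CRYPTO nx_crypto_tls_ciphers;

static NX_SECURE_TLS_SESSION tls_session;
static UCHAR tls_metadata[8 * 1024];  /* sizing is illustrative */

UINT create_tls_session(void)
{
    /* Bind the session to the chosen cipher table. */
    return nx_secure_tls_session_create(&tls_session,
                                        &nx_crypto_tls_ciphers,
                                        tls_metadata,
                                        sizeof(tls_metadata));
}
```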
### TLS mutual certificate authentication
-When using X.509 authentication in TLS, opt for mutual certificate authentication. With mutual authentication, both the server and client must provide a verifiable certificate for identification.
+When you use X.509 authentication in TLS, opt for mutual certificate authentication. With mutual authentication, both the server and client must provide a verifiable certificate for identification.
-Using hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status provides the highest level of security.
+Use hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status for the highest level of security.
**Hardware**: No specific hardware requirements.
-**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS Server and Client applications. For more information, see the [Azure RTOS NetX Secure TLS documentation](/azure/rtos/netx-duo/netx-secure-tls/chapter1#netx-secure-unique-features).
+**Azure RTOS**: Azure RTOS TLS provides support for mutual certificate authentication in both TLS server and client applications. For more information, see the [Azure RTOS NetX secure TLS documentation](/azure/rtos/netx-duo/netx-secure-tls/chapter1#netx-secure-unique-features).
-**Application**: Applications using TLS should always default to mutual certificate authentication whenever possible. Mutual authentication requires TLS clients to have a device certificate. Mutual authentication is an optional TLS feature but is highly recommended when possible.
+**Application**: Applications that use TLS should always default to mutual certificate authentication whenever possible. Mutual authentication requires TLS clients to have a device certificate. Mutual authentication is an optional TLS feature, but you should use it when possible.
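As an illustrative sketch of the client side of mutual authentication, the device certificate and its private key are loaded into the TLS session so the device can identify itself when the server requests client authentication. The key type and data are placeholders; in production, prefer hardware-protected key storage.

```c
#include "nx_secure_tls_api.h"

/* DER-encoded device certificate and private key (placeholder data). */
extern const UCHAR device_cert_der[];
extern const USHORT device_cert_der_len;
extern const UCHAR device_key_der[];
extern const USHORT device_key_der_len;

static NX_SECURE_X509_CERT device_cert;

UINT add_device_identity(NX_SECURE_TLS_SESSION *tls_session)
{
    UINT status;

    status = nx_secure_x509_certificate_initialize(&device_cert,
                 (UCHAR *)device_cert_der, device_cert_der_len,
                 NX_NULL, 0,
                 device_key_der, device_key_der_len,
                 NX_SECURE_X509_KEY_TYPE_RSA_PKCS1_DER);
    if (status != NX_SUCCESS)
    {
        return status;
    }

    /* Present this certificate when the server requests client auth. */
    return nx_secure_tls_local_certificate_add(tls_session, &device_cert);
}
```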
### Only use TLS-based MQTT
If your device uses MQTT for cloud communication, only use MQTT over TLS.
**Azure RTOS**: Azure RTOS provides MQTT over TLS as a default configuration.
-**Application**: Applications using MQTT should only use TLS-based MQTT with mutual certificate authentication.
+**Application**: Applications that use MQTT should only use TLS-based MQTT with mutual certificate authentication.
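For illustration, NetX Duo's MQTT client exposes a TLS-based connect path; the sketch below assumes a `tls_setup_callback` that creates the TLS session and loads the device and root CA certificates, as in the earlier sketches. Verify the API shape against your Azure RTOS release; 8883 is the conventional secure MQTT port.

```c
#include "nxd_mqtt_client.h"

/* Assumed callback that configures the TLS session (cipher table,
   device certificate, trusted root CA) before the MQTT handshake. */
extern UINT tls_setup_callback(NXD_MQTT_CLIENT *client,
                               NX_SECURE_TLS_SESSION *tls_session,
                               NX_SECURE_X509_CERT *certificate,
                               NX_SECURE_X509_CERT *trusted_certificate);

UINT mqtt_secure_connect(NXD_MQTT_CLIENT *client, NXD_ADDRESS *server_ip)
{
    /* Connect over TLS only; never fall back to plain MQTT on 1883. */
    return nxd_mqtt_client_secure_connect(client, server_ip, 8883,
                                          tls_setup_callback,
                                          60,        /* keepalive, seconds */
                                          NX_TRUE,   /* clean session      */
                                          NX_WAIT_FOREVER);
}
```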
-## Embedded security components – application design and development
+## Embedded security components: Application design and development
-The following sections overview the key security components for application design and development.
+The following sections discuss the key security components for application design and development.
### Disable debugging features
-For development, most MCU devices use a JTAG interface or similar interface to provide information to debuggers or other applications. Leaving a debugging interface enabled on your device gives an attacker an easy door into your application. Make sure to disable all debugging interfaces and remove associated debugging code from your application before deployment.
+For development, most MCU devices use a JTAG interface or similar interface to provide information to debuggers or other applications. If you leave a debugging interface enabled on your device, you give an attacker an easy door into your application. Make sure to disable all debugging interfaces. Also remove associated debugging code from your application before deployment.
-**Hardware**: Some devices may have hardware support to disable debugging interfaces permanently or the interface may be able to be removed physically from the device. (Note that removing the interface physically from the device does NOT mean the interface is disabled.) You may need to disable the interface on boot (for example, during a secure boot process), but it should always be disabled in production devices.
+**Hardware**: Some devices might have hardware support to disable debugging interfaces permanently or the interface might be able to be removed physically from the device. Removing the interface physically from the device does *not* mean the interface is disabled. You might need to disable the interface on boot, for example, during a secure boot process. Always disable the debugging interface in production devices.
**Azure RTOS**: Not applicable.
-**Application**: If the device doesn't have a feature to permanently disable debugging interfaces, the application may have to disable those interfaces on boot. Disabling debugging interfaces should be done as early as possible in the boot process, preferably during a secure boot before the application is running.
+**Application**: If the device doesn't have a feature to permanently disable debugging interfaces, the application might have to disable those interfaces on boot. Disable debugging interfaces as early as possible in the boot process. Preferably, disable those interfaces during a secure boot before the application is running.
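As a purely hypothetical sketch (the register name and address are invented; real mechanisms vary by silicon vendor and are often fuses or option bytes), disabling a debug interface early in boot typically comes down to a protected register write:

```c
#include <stdint.h>

/* Hypothetical debug-control register; consult your MCU reference
   manual for the real mechanism. */
#define DEBUG_CTRL_REG  (*(volatile uint32_t *)0x40001000u)
#define DEBUG_DISABLE   (1u << 0)

/* Call as early as possible in the boot path, ideally from the secure
   boot stage before application code runs. */
static inline void disable_debug_interface(void)
{
    DEBUG_CTRL_REG |= DEBUG_DISABLE;
}
```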
### Watchdog timers
-When available, an IoT device should use a watchdog timer to reset an unresponsive application. Having the watchdog timer reset the device when time runs out will limit the amount of time an attacker may have to execute an exploit. The watchdog can be reinitialized by the application and some basic integrity checks can also be done such as looking for code executing in RAM, checksums on data, identity checks, and so on. If an attacker doesn't account for the watchdog timer reset while trying to compromise the device, then the device would reboot into a (theoretically) clean state. Note that this would require a secure boot mechanism to verify the identity of the application image.
+When available, an IoT device should use a watchdog timer to reset an unresponsive application. Resetting the device when time runs out limits the amount of time an attacker might have to execute an exploit.
+
+The watchdog can be reinitialized by the application. Some basic integrity checks can also be done like looking for code executing in RAM, checksums on data, and identity checks. If an attacker doesn't account for the watchdog timer reset while trying to compromise the device, the device would reboot into a (theoretically) clean state. A secure boot mechanism would be required to verify the identity of the application image.
**Hardware**: Watchdog timer support in hardware, secure boot functionality.
-**Azure RTOS**: No specific Azure RTOS functionality required.
+**Azure RTOS**: No specific Azure RTOS functionality is required.
-**Application**: Watchdog timer management--refer to device hardware platform documentation for more information.
+**Application**: Watchdog timer management. For more information, see the device hardware platform documentation.
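As an illustration under assumed names (`hal_watchdog_start` and `hal_watchdog_kick` are hypothetical HAL wrappers), a supervisory ThreadX thread can run the integrity checks described above and only then refresh the watchdog, so a hung or compromised device reboots:

```c
#include "tx_api.h"
#include <stdbool.h>

/* Hypothetical hardware abstraction for the watchdog peripheral. */
extern void hal_watchdog_start(unsigned int timeout_ms);
extern void hal_watchdog_kick(void);

/* Application-defined health checks: data checksums, identity checks,
   looking for code executing in RAM, and so on. */
extern bool integrity_checks_pass(void);

void watchdog_thread_entry(ULONG input)
{
    (void)input;
    hal_watchdog_start(5000u);  /* timeout value is illustrative */

    for (;;)
    {
        /* Refresh only when the checks pass; otherwise let the
           watchdog expire and reset the device. */
        if (integrity_checks_pass())
        {
            hal_watchdog_kick();
        }
        tx_thread_sleep(100u);  /* ticks; tune to your tick rate */
    }
}
```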
### Remote error logging
Use cloud resources to record and analyze device failures remotely. Aggregate er
**Hardware**: No specific hardware requirements.
-**Azure RTOS**: No specific Azure RTOS requirements but consider logging Azure RTOS API return codes to look for specific problems with lower-level protocols (for example, TLS alert causes, TCP failures) that may indicate problems.
-
-**Application**: Make use of logging libraries and your cloud service's client SDK to push error logs to the cloud where they can be stored and analyzed safely without using valuable device storage space. Integration with [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) would provide this functionality and more. Microsoft Defender for IoT provides agent-less monitoring of devices in an IoT solution. Monitoring can be enhanced by including the [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) on your device. For more information, see the [Runtime security monitoring and threat detection](#runtime-security-monitoring-and-threat-detection) recommendation.
+**Azure RTOS**: No specific Azure RTOS requirements. Consider logging Azure RTOS API return codes to look for specific problems with lower-level protocols that might indicate problems. Examples include TLS alert causes and TCP failures.
-### Disable unused protocols and features
+**Application**: Make use of logging libraries and your cloud service's client SDK to push error logs to the cloud where they can be stored and analyzed safely without using valuable device storage space. Integration with [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) provides this functionality and more. Microsoft Defender for IoT provides agentless monitoring of devices in an IoT solution. Monitoring can be enhanced by including the [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) on your device. For more information, see the [Runtime security monitoring and threat detection](#runtime-security-monitoring-and-threat-detection) recommendation.
-RTOS and MCU-based applications will typically have a few dedicated functions. This feature is in sharp contrast to general purpose computing machines running higher-level operating systems, such as Windows and Linux, that enable dozens or hundreds of protocols and features by default. When designing an RTOS MCU application, look closely at what networking protocols are required. Every protocol that is enabled represents a different avenue for attackers to gain a foothold within the device. If you don't need a feature or protocol, don't enable it.
-**Hardware**: No specific hardware requirements, but if the platform allows unused peripherals and ports to be disabled, use that functionality to reduce your attack surface.
+### Disable unused protocols and features
-**Azure RTOS**: Azure RTOS has a "disabled by default" philosophy. Only enable protocols and features that are required for your application. Resist the temptation to enable features "just in case".
+RTOS and MCU-based applications typically have a few dedicated functions. This feature is in sharp contrast to general-purpose computing machines running higher-level operating systems, such as Windows and Linux. These machines enable dozens or hundreds of protocols and features by default.
-**Application**: When designing your application, try to reduce the feature set to the bare minimum. Fewer features make an application easier to analyze for security vulnerabilities and reduce your application attack surface.
+When you design an RTOS MCU application, look closely at what networking protocols are required. Every protocol that's enabled represents a different avenue for attackers to gain a foothold within the device. If you don't need a feature or protocol, don't enable it.
-### Use all possible complier and linker security features when building your application
+**Hardware**: No specific hardware requirements. If the platform allows unused peripherals and ports to be disabled, use that functionality to reduce your attack surface.
-Modern compilers and linkers provide numerous options for additional security at build time. Utilize as many compiler- and linker-based options as possible to improve your application with proven security mitigations. Some options may affect size, performance, or RTOS functionality, so care is required when enabling certain features.
+**Azure RTOS**: Azure RTOS has a "disabled by default" philosophy. Only enable protocols and features that are required for your application. Resist the temptation to enable features "just in case."
-**Hardware**: No specific hardware requirements but your hardware platform may support security features that can be enabled during the compiling or linking processes.
+**Application**: When you design your application, try to reduce the feature set to the bare minimum. Fewer features make an application easier to analyze for security vulnerabilities. Fewer features also reduce your application attack surface.
-**Azure RTOS**: As an RTOS, some compiler-based security features may interfere with the real-time guarantees of Azure RTOS. Consider your RTOS needs when selecting compiler options and test them thoroughly.
+### Use all possible compiler and linker security features
-**Application**: If using GCC, the following list of options should be considered. For more information, see the GCC documentation.
+Modern compilers and linkers provide many options for more security at build time. When you build your application, use as many compiler- and linker-based options as possible. They'll improve your application with proven security mitigations. Some options might affect size, performance, or RTOS functionality. Be careful when you enable certain features.
-If using other development tools, consult your documentation for appropriate options. In general, the following guidelines should help in building a more secure configuration:
+**Hardware**: No specific hardware requirements. Your hardware platform might support security features that can be enabled during the compiling or linking processes.
-- All builds should have maximum error and warning levels enabled. Production code should compile and link cleanly with no errors or warnings.
+**Azure RTOS**: As an RTOS, some compiler-based security features might interfere with the real-time guarantees of Azure RTOS. Consider your RTOS needs when you select compiler options and test them thoroughly.
-- Enable all runtime checking that is available. Examples include stack checking, buffer overflow detection, Address Space Layout Randomization (ASLR), and integer overflow detection.
+**Application**: If you use other development tools, consult your documentation for appropriate options. In general, the following guidelines should help you build a more secure configuration:
-- Some tools and devices may provide options to place code in protected or read-only areas of memory. Make use of any available protection mechanisms to prevent an attacker from being able to run arbitrary code on your device. Simply protecting code by making it read-only doesn't completely protect against arbitrary code execution, but it does help.
+- Enable maximum error and warning levels for all builds. Production code should compile and link cleanly with no errors or warnings.
+- Enable all runtime checking that's available. Examples include stack checking, buffer overflow detection, Address Space Layout Randomization (ASLR), and integer overflow detection.
+- Some tools and devices might provide options to place code in protected or read-only areas of memory. Make use of any available protection mechanisms to prevent an attacker from being able to run arbitrary code on your device. Making code read-only doesn't completely protect against arbitrary code execution, but it does help.
### Make sure memory access alignment is correct
-Some MCU devices permit unaligned memory accesses, but others do not. Consider the properties of your specific device when developing your application.
+Some MCU devices permit unaligned memory access, but others don't. Consider the properties of your specific device when you develop your application.
-**Hardware**: The memory access alignment behavior will be specific to your selected device.
+**Hardware**: Memory access alignment behavior is specific to your selected device.
-**Azure RTOS**: For processors that do NOT support unaligned access, ensure that the macro `NX_CRYPTO_DISABLE_UNALIGNED_ACCESS` is defined. Failure to do so will result in possible CPU faults during certain cryptographic operations.
+**Azure RTOS**: For processors that do *not* support unaligned access, ensure that the macro `NX_CRYPTO_DISABLE_UNALIGNED_ACCESS` is defined. Failure to do so results in possible CPU faults during certain cryptographic operations.
-**Application**: In any memory operation (for example, copy or move) considers the memory alignment behavior of your hardware platform.
+**Application**: In any memory operation like copy or move, consider the memory alignment behavior of your hardware platform.
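Because the macro named above is a build-time option, enabling it is a one-line change; a sketch follows (it can equally be passed as a `-D` compiler define):

```c
/* nx_user.h: required on processors that don't support unaligned memory
   access, to avoid CPU faults in cryptographic routines. */
#define NX_CRYPTO_DISABLE_UNALIGNED_ACCESS
```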
### Runtime security monitoring and threat detection
-Connected IoT devices may not have the necessary resources to implement all security features locally. However, with connection to the cloud, there are remote security options that can be utilized to improve the security of your application without adding significant overhead to the embedded device.
+Connected IoT devices might not have the necessary resources to implement all security features locally. With connection to the cloud, you can use remote security options to improve the security of your application. These options don't add significant overhead to the embedded device.
-**Hardware**: No specific hardware features required (other than a network interface).
+**Hardware**: No specific hardware features required other than a network interface.
**Azure RTOS**: Azure RTOS supports [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/).
-**Application**: The [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) provides a comprehensive security solution for Azure RTOS devices. The module provides security services via a small software agent that is built into your deviceΓÇÖs firmware and comes as part of Azure RTOS. The service includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that will help to improve the security hygiene of your devices. Whether you're using Azure RTOS in combination with Azure Sphere or not, the Microsoft Defender for IoT micro-agent provides an additional layer of security that is built right into the RTOS by default.
+**Application**: The [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) provides a comprehensive security solution for Azure RTOS devices. The module provides security services via a small software agent that's built into your device's firmware and comes as part of Azure RTOS. The service includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that will help to improve the security hygiene of your devices. Whether you're using Azure RTOS in combination with Azure Sphere or not, the Microsoft Defender for IoT micro-agent provides an extra layer of security that's built into the RTOS by default.
## Azure RTOS IoT application security checklist
-The previous sections detailed specific design considerations with descriptions of the necessary hardware, operating system, and application requirements to help mitigate security threats. This section provides a basic checklist of security-related issues to consider when designing and implementing IoT applications with Azure RTOS. This shortlist of measures is meant as a complement to, not a replacement for, the more detailed discussion in previous sections. Ultimately, a comprehensive analysis of the physical and cyber security threats posed by the environment your device will be deployed into coupled with careful consideration and rigorous implementation of the measures needed to mitigate those threats must be done to provide the highest possible level of security for your device.
-
-### Security DOs
--- DO always use a hardware source of entropy (CRNG, TRNG based in hardware). Azure RTOS uses a macro (`NX_RAND`) that allows you to define your random function.
+The previous sections detailed specific design considerations with descriptions of the necessary hardware, operating system, and application requirements to help mitigate security threats. This section provides a basic checklist of security-related issues to consider when you design and implement IoT applications with Azure RTOS.
-- DO always supply a Real-Time Clock for calendar date/time to check certificate expiration.
+This short list of measures is meant as a complement to, not a replacement for, the more detailed discussion in previous sections. You must perform a comprehensive analysis of the physical and cybersecurity threats posed by the environment your device will be deployed into. You also need to carefully consider and rigorously implement measures to mitigate those threats. The goal is to provide the highest possible level of security for your device.
-- DO use Certificate Revocation Lists (CRL) to validate certificate status. With Azure RTOS TLS, a CRL is retrieved by the application and passed via a callback to the TLS implementation. For more information, see the [NetX Secure TLS User Guide](/azure/rtos/netx-duo/netx-secure-tls/chapter1).
-- DO use the X.509 "Key Usage" extension when possible to check for certificate acceptable uses. In Azure RTOS, the use of a callback to access the X.509 extension information is required.
-- DO use X.509 policies in your certificates that are consistent with the services to which your device will connect (for example, ExtendedKeyUsage).
+### Security measures to take
-- DO use approved cipher suites in the Azure RTOS Crypto library-
- - Supplied examples provide the required cipher suites to be compatible with TLS RFCs, but stronger cipher suites may be more suitable. Cipher suites include multiple ciphers for different TLS operations, so choose carefully. For example, using Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) may be preferable to RSA for key exchange, but the benefits can be lost if the cipher suite also uses RC4 for application data. Make sure every cipher in a cipher suite meets your security needs.
+- Always use a hardware source of entropy (CRNG, TRNG based in hardware). Azure RTOS uses a macro (`NX_RAND`) that allows you to define your random function.
+- Always supply a real-time clock for calendar date and time to check certificate expiration.
+- Use CRLs to validate certificate status. With Azure RTOS TLS, a CRL is retrieved by the application and passed via a callback to the TLS implementation. For more information, see the [NetX secure TLS user guide](/azure/rtos/netx-duo/netx-secure-tls/chapter1).
+- Use the X.509 "Key Usage" extension when possible to check for certificate acceptable uses. In Azure RTOS, the use of a callback to access the X.509 extension information is required.
+- Use X.509 policies in your certificates that are consistent with the services to which your device will connect. An example is ExtendedKeyUsage.
+- Use approved cipher suites in the Azure RTOS Crypto library:
+ - Supplied examples provide the required cipher suites to be compatible with TLS RFCs, but stronger cipher suites might be more suitable. Cipher suites include multiple ciphers for different TLS operations, so choose carefully. For example, using Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) might be preferable to RSA for key exchange, but the benefits can be lost if the cipher suite also uses RC4 for application data. Make sure every cipher in a cipher suite meets your security needs.
- Remove cipher suites that aren't needed. Doing so saves space and provides extra protection against attack.-
- - Use hardware drivers when applicable. Azure RTOS provides hardware cryptography drivers for select platforms. For more information, see the [NetX Crypto documentation](/azure/rtos/netx/netx-crypto/chapter1).
-
-- DO favor ephemeral public-key algorithms (like ECDHE) over static algorithms (like classic RSA) when possible as these provide forward secrecy. Note that TLS 1.3 ONLY supports ephemeral cipher modes so moving to TLS 1.3 (when possible) will satisfy this goal.
-
-- DO make use of memory checking functionality provided by your tools (for example, compiler and third-party memory checking tools) and libraries (for example, Azure RTOS ThreadX stack checking).
-
-- DO scrutinize all input data for length/buffer overflow conditions. Any data coming from outside a functional block (the device, thread, and even each function/method) should be considered suspect and checked thoroughly with application logic. Some of the easiest vulnerabilities to exploit come from unchecked input data causing buffer overflows.
-
-- DO make sure code builds cleanly. All warnings and errors should be accounted for and scrutinized for vulnerabilities.
-
-- DO use static code analysis tools to determine if there are any errors in logic or pointer arithmetic – all errors can be potential vulnerabilities.
-
-- DO research fuzz testing (or "fuzzing") for your application. Fuzzing is a security-focused process where message parsing for incoming data is subjected to large quantities of random or semi-random data. The purpose is to observe the behavior when invalid data is processed. It's based on techniques used by hackers to discover buffer overflow and other errors that may be used in an exploit to attack a system.
-
-- DO perform code walk-through audits to look for confusing logic and other errors. If you can't understand a piece of code, it's possible that code contains vulnerabilities.
-
-- DO use an MPU/MMU (when available and overhead is acceptable) to prevent code from executing from RAM and prevent threads from accessing memory outside their own memory space. Azure RTOS ThreadX Modules can be used to isolate application threads from each other to prevent access across memory boundaries.
-
-- DO use watchdogs to prevent run-away code and to make attacks more difficult by limiting the window during which an attack can be executed.
-
-- DO consider Safety and Security certified code. Using certified code and certifying your own applications will subject your application to higher scrutiny and increase the likelihood of discovering vulnerabilities before the application is deployed. Formal certification may not be required for your device, but following the rigorous testing and review processes required for certification can provide enormous benefit.
-
-### Security DON'Ts
-
-- DO NOT use the standard C-library `rand()` function as it doesn't provide cryptographic randomness. Consult your hardware documentation for a proper source of cryptographic entropy.
-
-- DO NOT hard-code private keys or credentials (certificates, passwords, usernames, etc.) in your application. Private keys should be updated regularly (the actual schedule depends on several factors) to provide a higher level of security. In addition, hard-coded values may be readable in memory or even in transit over a network if the firmware image is not encrypted. The actual mechanism for updating keys and certificates will depend heavily on your application and the PKI being used.
-
-- DO NOT use self-signed device certificates and instead use a proper PKI for device identification. (Some exceptions may apply, but generally this is a rule for most organizations and systems.)
-
-- DO NOT use any TLS extensions that aren't needed. Azure RTOS TLS disables many features by default. Only enable features you need.
-
-- DO NOT try to implement "Security by obscurity". It's NOT SECURE. The industry is plagued with examples where a developer tried to be clever by obscuring or hiding code or algorithms. Obscuring your code or secret information like keys or passwords may prevent some intruders but it won't stop a dedicated attacker. Obscured code provides a false sense of security.
-
-- DO NOT leave unnecessary functionality enabled or unused network or hardware ports open. If your application doesn't need a feature, disable it. Don't fall into the trap of leaving a TCP port open "just in case". The more functionality that is enabled, the higher the risk that an exploit will go undetected, and the interaction between different features can introduce new vulnerabilities.
-
-- DO NOT leave debugging enabled in production code. If an attacker can simply plug in a JTAG debugger and dump the contents of RAM on your device, there's very little that can be done to secure your application. Leaving a debugging port open is the equivalent of leaving your front door open with your valuables lying in plain sight. Don't do it.
-
-- DO NOT allow buffer overflows in your application. Many remote attacks start with a buffer overflow that is used to probe the contents of memory or inject malicious code to be executed. The best defense is to write defensive code. Double check any input that comes from, or is derived from, sources outside the device (network stack, display/GUI interface, external interrupts, etc.) and handle the error gracefully. Use compiler, linker, and runtime system tools to detect and mitigate overflow problems.
-
-- DO NOT put network packets on local thread stacks where an overflow can affect return addresses, leading to Return-Oriented Programming vulnerabilities.
-
-- DO NOT put buffers in program stacks. Allocate them statically whenever possible.
-
-- DO NOT use dynamic memory and heap operations when possible. Heap overflows can be problematic since the layout of dynamically allocated memory, for example, from functions like `malloc()`, is difficult to predict. Static buffers can be more easily managed and protected.
-
-- DO NOT embed function pointers in data packets where overflow can overwrite function pointers.
-
-- DO NOT try to implement your own cryptography. Accepted cryptographic routines like Elliptic Curve Cryptography (ECC) and AES have been developed by experts in cryptography. These routines have gone through rigorous analysis over many years (sometimes decades) to prove their security.
-It's highly unlikely that any algorithm you develop on your own will have the security required to protect sensitive communications and data.
-
-- DO NOT implement roll-your-own cryptography schemes. Simply using AES doesn't mean your application is secure. Protocols like TLS use various methods to mitigate well known attacks. For example:
-
- - Known plaintext attacks, which use known, unencrypted data to derive information about encrypted data.
+ - Use hardware drivers when applicable. Azure RTOS provides hardware cryptography drivers for select platforms. For more information, see the [NetX crypto documentation](/azure/rtos/netx/netx-crypto/chapter1).
+
+- Favor ephemeral public-key algorithms like ECDHE over static algorithms like classic RSA when possible. Ephemeral key exchange provides forward secrecy. TLS 1.3 *only* supports ephemeral cipher modes, so moving to TLS 1.3 when possible satisfies this goal.
+- Make use of memory checking functionality, such as compiler and third-party memory checking tools, and libraries like Azure RTOS ThreadX stack checking (see the sketch after this list).
+- Scrutinize all input data for length/buffer overflow conditions. Be suspicious of any data that comes from outside a functional block like the device, thread, and even each function or method. Check it thoroughly with application logic. Some of the easiest vulnerabilities to exploit come from unchecked input data causing buffer overflows.
+- Make sure code builds cleanly. All warnings and errors should be accounted for and scrutinized for vulnerabilities.
+- Use static code analysis tools to determine if there are any errors in logic or pointer arithmetic. All errors can be potential vulnerabilities.
+- Research fuzz testing, also known as "fuzzing," for your application. Fuzzing is a security-focused process where message parsing for incoming data is subjected to large quantities of random or semi-random data. The purpose is to observe the behavior when invalid data is processed. It's based on techniques used by hackers to discover buffer overflow and other errors that might be used in an exploit to attack a system.
+- Perform code walk-through audits to look for confusing logic and other errors. If you can't understand a piece of code, it's possible that code contains vulnerabilities.
+- Use an MPU or MMU when available and overhead is acceptable. An MPU or MMU helps to prevent code from executing from RAM and threads from accessing memory outside their own memory space. Use Azure RTOS ThreadX Modules to isolate application threads from each other to prevent access across memory boundaries.
+- Use watchdogs to prevent runaway code and to make attacks more difficult. They limit the window during which an attack can be executed.
+- Consider safety and security certified code. Using certified code and certifying your own applications subjects your application to higher scrutiny and increases the likelihood of discovering vulnerabilities before the application is deployed. Formal certification might not be required for your device. Following the rigorous testing and review processes required for certification can provide enormous benefit.
+
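As a sketch of the ThreadX stack checking mentioned in the list above: ThreadX can call an application handler when it detects a corrupted thread stack, provided the library is built with `TX_ENABLE_STACK_CHECKING`. The handler policy shown (stalling so the watchdog resets the device) is illustrative.

```c
#include "tx_api.h"

/* Called by ThreadX when it detects a corrupted thread stack. What to do
   here is application policy; one option is to stop refreshing the
   watchdog so the device resets into a known-good state (illustrative). */
static VOID stack_error_handler(TX_THREAD *thread_ptr)
{
    (void)thread_ptr;
    for (;;)
    {
        /* Deliberately spin; the watchdog will reset the device. */
    }
}

VOID register_stack_checking(VOID)
{
    /* Requires ThreadX built with TX_ENABLE_STACK_CHECKING. */
    tx_thread_stack_error_notify(stack_error_handler);
}
```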
+### Security measures to avoid
+
+- Don't use the standard C-library `rand()` function because it doesn't provide cryptographic randomness. Consult your hardware documentation for a proper source of cryptographic entropy.
+- Don't hard-code private keys or credentials like certificates, passwords, or usernames in your application. To provide a higher level of security, update private keys regularly. The actual schedule depends on several factors. Also, hard-coded values might be readable in memory or even in transit over a network if the firmware image isn't encrypted. The actual mechanism for updating keys and certificates depends on your application and the PKI being used.
+- Don't use self-signed device certificates. Instead, use a proper PKI for device identification. Some exceptions might apply, but this rule is for most organizations and systems.
+- Don't use any TLS extensions that aren't needed. Azure RTOS TLS disables many features by default. Only enable features you need.
+- Don't try to implement "security by obscurity." It's *not secure*. The industry is plagued with examples where a developer tried to be clever by obscuring or hiding code or algorithms. Obscuring your code or secret information like keys or passwords might prevent some intruders, but it won't stop a dedicated attacker. Obscured code provides a false sense of security.
+- Don't leave unnecessary functionality enabled or unused network or hardware ports open. If your application doesn't need a feature, disable it. Don't fall into the trap of leaving a TCP port open just in case. The more functionality that's enabled, the higher the risk that an exploit will go undetected. The interaction between different features can introduce new vulnerabilities.
+- Don't leave debugging enabled in production code. If an attacker can plug in a JTAG debugger and dump the contents of RAM on your device, not much can be done to secure your application. Leaving a debugging port open is like leaving your front door open with your valuables lying in plain sight. Don't do it.
+- Don't allow buffer overflows in your application. Many remote attacks start with a buffer overflow that's used to probe the contents of memory or inject malicious code to be executed. The best defense is to write defensive code. Double-check any input that comes from, or is derived from, sources outside the device like the network stack, display or GUI interface, and external interrupts. Handle the error gracefully. Use compiler, linker, and runtime system tools to detect and mitigate overflow problems.
+- Don't put network packets on local thread stacks where an overflow can affect return addresses. This practice can lead to return-oriented programming vulnerabilities.
+- Don't put buffers in program stacks. Allocate them statically whenever possible.
+- Don't use dynamic memory and heap operations when possible. Heap overflows can be problematic because the layout of dynamically allocated memory, for example, from functions like `malloc()`, is difficult to predict. Static buffers can be more easily managed and protected.
+- Don't embed function pointers in data packets where overflow can overwrite function pointers.
+- Don't try to implement your own cryptography. Accepted cryptographic routines like elliptic curve cryptography (ECC) and AES were developed by experts in cryptography. These routines went through rigorous analysis over many years to prove their security. It's unlikely that any algorithm you develop on your own will have the security required to protect sensitive communications and data.
+- Don't implement roll-your-own cryptography schemes. Simply using AES doesn't mean your application is secure. Protocols like TLS use various methods to mitigate well-known attacks, for example:
+
+ - Known plain-text attacks, which use known unencrypted data to derive information about encrypted data.
 - Padding oracles, which use modified cryptographic padding to gain access to secret data.
 - Predictable secrets, which can be used to break encryption.
- Whenever possible, try to use accepted security protocols like TLS when securing your application.
+ Whenever possible, try to use accepted security protocols like TLS when you secure your application.
## Recommended security resources
-
-- The [Zero trust: cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf) whitepaper provides an overview of Microsoft's approach to security across all aspects of an IoT ecosystem, with an emphasis on devices.
-
-- The [IoT Security maturity model (SMM)](https://www.iiconsortium.org/smm.htm) proposes a standard set of security domains, subdomains, and practices as well as an iterative process you can use to understand, target, and implement security measures important for your device. This set of standards is targeted to all levels of IoT stakeholders and provides a process framework for considering security in the context of a component's interactions in an IoT system.
-
-- The [Seven properties of highly secured devices](https://www.microsoft.com/research/publication/seven-properties-2nd-edition/) whitepaper published by Microsoft Research provides an overview of security properties that must be addressed to produce highly secure devices: Hardware root of trust, Defense in depth, Small trusted computing base, Dynamic compartments, Password-less authentication, Error reporting, and Renewable security. These properties are applicable, depending on cost constraints and target application and environment, to many embedded devices.
-
-- The [PSA certified 10 security goals explained](https://www.psacertified.org/blog/psa-certified-10-security-goals-explained/)
-ARM's Platform Security Architecture provides a standardized framework for building secure embedded devices using ARM TrustZone technology. Microcontroller manufacturers can certify designs with ARM's PSA Certified program giving a level of confidence about the security of applications built on ARM technologies.
--- [Common criteria](https://www.commoncriteriaportal.org/)
-The Common Criteria is an international agreement that provides standardized guidelines and an authorized laboratory program to evaluate products for IT security. Certification provides a level of confidence in the security posture of applications using devices that have been evaluated using the program guidelines.
--- [SESIP](https://globalplatform.org/sesip/)
-The Security Evaluation Standard for IoT Platforms is a standardized methodology for evaluating the security of connected IoT products and components.
--- [IoT Security maturity model](https://www.iiconsortium.org/smm.htm)
-The SMM is a framework for building customized IoT security models, allowing IoT manufacturers to create detailed models to evaluate and measure the security posture of their products.
--- [ISO 27000 family](https://www.iso.org/isoiec-27001-information-security.html)
-ISO 27000 is a collection of standards regarding the management and security of information assets, providing baseline guarantees about the security of digital information in certified products.
--- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final)
-FIPS 140 is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
+- [Zero Trust: Cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf) provides an overview of Microsoft's approach to security across all aspects of an IoT ecosystem, with an emphasis on devices.
+- [IoT Security Maturity Model](https://www.iiconsortium.org/smm.htm) proposes a standard set of security domains, subdomains, and practices and an iterative process you can use to understand, target, and implement security measures important for your device. This set of standards is directed to all levels of IoT stakeholders and provides a process framework for considering security in the context of a component's interactions in an IoT system.
+- [Seven properties of highly secured devices](https://www.microsoft.com/research/publication/seven-properties-2nd-edition/), published by Microsoft Research, provides an overview of security properties that must be addressed to produce highly secure devices. The seven properties are hardware root of trust, defense in depth, small trusted computing base, dynamic compartments, passwordless authentication, error reporting, and renewable security. Depending on cost constraints and the target application and environment, these properties apply to many embedded devices.
+- [PSA Certified 10 security goals explained](https://www.psacertified.org/blog/psa-certified-10-security-goals-explained/) discusses the Arm Platform Security Architecture (PSA). It provides a standardized framework for building secure embedded devices by using Arm TrustZone technology. Microcontroller manufacturers can certify designs with the Arm PSA Certified program, giving a level of confidence about the security of applications built on Arm technologies.
+- [Common Criteria](https://www.commoncriteriaportal.org/) is an international agreement that provides standardized guidelines and an authorized laboratory program to evaluate products for IT security. Certification provides a level of confidence in the security posture of applications using devices that were evaluated by using the program guidelines.
+- [Security Evaluation Standard for IoT Platforms (SESIP)](https://globalplatform.org/sesip/) is a standardized methodology for evaluating the security of connected IoT products and components.
+- [ISO 27000 family](https://www.iso.org/isoiec-27001-information-security.html) is a collection of standards regarding the management and security of information assets. The standards provide baseline guarantees about the security of digital information in certified products.
+- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
When designing your solution and defining a reprovisioning logic there are a few
* How often you expect your devices to restart
* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
* Expected deployment time for your fleet (phased rollout vs all at once)
-* Retry capability implemented on your client code, as described on the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center
>[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/service/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization (sketched below), as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device to connect directly to IoT Hub after the first-time provisioning using DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
> * For 429 errors, only retry after the time indicated in the Retry-After header.
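To illustrate the back-off recommendation in the tip above, here's a minimal C sketch of exponential back-off with full jitter. It isn't part of any DPS SDK; the base and cap values are placeholders to tune for your fleet.

```c
#include <stdint.h>
#include <stdlib.h>

/* Compute a capped, jittered retry delay for reprovisioning attempts.
   Base and cap values below are illustrative. */
static uint32_t next_retry_delay_ms(unsigned int attempt)
{
    const uint32_t base_ms = 1000u;              /* first retry near 1 s */
    const uint32_t cap_ms  = 15u * 60u * 1000u;  /* never wait > 15 min  */

    uint64_t delay = (uint64_t)base_ms << (attempt < 20u ? attempt : 20u);
    if (delay > cap_ms)
    {
        delay = cap_ms;
    }

    /* Full jitter: pick uniformly in [0, delay]. rand() is acceptable
       here because this isn't a cryptographic use. */
    return (uint32_t)(((uint64_t)rand() * delay) / RAND_MAX);
}
```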
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
How often a device submits a provisioning request depends on the scenario. When
* How often you expect your devices to restart
* The [DPS quotas and limits](about-iot-dps.md#quotas-and-limits)
* Expected deployment time for your fleet (phased rollout vs all at once)
-* Retry capability implemented on your client code, as described on the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center
+* Retry capability implemented in your client code, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults) at the Azure Architecture Center
>[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/service/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device and connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios: > * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors. > * For 429 errors, only retry after the time indicated in the Retry-After header.
How often a device submits a provisioning request depends on the scenario. When
- To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md) - To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md) -----------
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
+
+ Title: 'Tutorial - Develop module for Linux devices using Azure IoT Edge for Linux on Windows'
+description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Linux containers for Windows devices using Azure IoT Edge for Linux on Windows
+++ Last updated : 03/01/2020++++++
+# Tutorial: Develop IoT Edge modules with Linux containers using IoT Edge for Linux on Windows
++
+Use Visual Studio 2019 to develop, debug, and deploy code to devices running IoT Edge for Linux on Windows.
+
+This tutorial walks through developing, debugging, and deploying your own code to an IoT Edge device using IoT Edge for Linux on Windows. This article is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
+
+This tutorial uses the example of deploying a **C# module to a Linux device**. This example was chosen because it's the most common developer scenario for IoT Edge solutions. Even if you plan on using a different language or deploying an Azure service, this tutorial is still useful to learn about the development tools and concepts. Complete this introduction to the development process, then choose your preferred language or Azure service to dive into the details.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Set up your development machine.
+> * Use the Azure IoT Edge Tools for Visual Studio to create a new project.
+> * Build your project as a container and store it in an Azure container registry.
+> * Deploy your code to an IoT Edge device.
+
+## Prerequisites
+
+This article assumes that you use a machine running Windows as your development machine. On Windows computers, you can develop either Windows or Linux modules. This tutorial will guide you through the development of **Linux containers**, using [IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows.md) for building and deploying the modules.
+
+* Install [IoT Edge for Linux on Windows (EFLOW)](./how-to-provision-single-device-linux-on-windows-x509.md)
+* Quickstart: [Deploy your first IoT Edge module to a Windows device](./quickstart.md)
+* [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet/3.1).
+
+Install Visual Studio on your development machine. Make sure you include the **Azure development** and **.NET desktop development** workloads in your Visual Studio 2019 installation. You can [Modify Visual Studio 2019](/visualstudio/install/modify-visual-studio?view=vs-2019&preserve-view=true) to add the required workloads.
+
+After your Visual Studio 2019 is ready, you also need the following tools and components:
+
+* Download and install [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) from the Visual Studio marketplace to create an IoT Edge project in Visual Studio 2019.
+
+ > [!TIP]
+ > If you are using Visual Studio 2017, download and install [Azure IoT Edge Tools for VS 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) from the Visual Studio marketplace
+
+Cloud resources:
+
+* A free or standard-tier [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
++
+### Check your tools version
+
+1. From the **Extensions** menu, select **Manage Extensions**. Expand **Installed > Tools** and you can find **Azure IoT Edge Tools for Visual Studio** and **Cloud Explorer for Visual Studio**.
+
+1. Note the installed version. You can compare this version with the latest version on Visual Studio Marketplace ([Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS2019), [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools))
+
+1. If your version is older than what's available on Visual Studio Marketplace, update your tools in Visual Studio as shown in the following section.
+
+> [!NOTE]
+> If you are using Visual Studio 2022, [Cloud Explorer](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer?view=vs-2022&preserve-view=true) is retired. To deploy Azure IoT Edge modules, use [Azure CLI](how-to-deploy-modules-cli.md?view=iotedge-2020-11&preserve-view=true) or [Azure portal](how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true).
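For reference, deploying modules from the command line could look like the following sketch. It assumes the Azure CLI with the `azure-iot` extension is installed; the hub name, device ID, and manifest path are placeholders:

```powershell
# Sketch: deploy an IoT Edge deployment manifest with the Azure CLI.
# Install the extension first if needed: az extension add --name azure-iot
az iot edge set-modules `
    --hub-name <your-iot-hub-name> `
    --device-id <your-edge-device-id> `
    --content ./config/deployment.amd64.json
```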
+
+### Update your tools
+
+1. In the **Manage Extensions** window, expand **Updates > Visual Studio Marketplace**, select **Azure IoT Edge Tools** or **Cloud Explorer for Visual Studio** and select **Update**.
+
+1. After the tools update is downloaded, close Visual Studio to trigger the tools update using the VSIX installer.
+
+1. In the installer, select **OK** to start and then **Modify** to update the tools.
+
+1. After the update is complete, select **Close** and restart Visual Studio.
++
+## Key concepts
+
+This tutorial walks through the development of an IoT Edge module. An *IoT Edge module*, or sometimes just *module* for short, is a container with executable code. You can deploy one or more modules to an IoT Edge device. Modules perform specific tasks like ingesting data from sensors, cleaning and analyzing data, or sending messages to an IoT hub. For more information, see [Understand Azure IoT Edge modules](iot-edge-modules.md).
+
+When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module will eventually be deployed. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*. For this tutorial, you'll be using your Windows host for development and the IoT Edge for Linux on Windows (EFLOW) VM for building and deploying the modules.
+
+This tutorial targets devices running IoT Edge with Linux containers. You can use your preferred operating system as long as your development machine runs Linux containers. We recommend using Visual Studio to develop with Linux containers, so that's what this tutorial will use. You can use Visual Studio Code as well, although there are differences in support between the two tools. For more information, refer to [Tutorial: Develop IoT Edge modules with Linux containers](./tutorial-develop-for-linux.md).
++
+## Set up docker-cli and Docker engine remote connection
+
+IoT Edge modules are packaged as containers, so you need a container engine on your development machine to build and manage them. The EFLOW virtual machine already contains an instance of the Docker engine, so this tutorial shows you how to connect remotely from the Windows development machine to the EFLOW VM Docker instance. This connection removes the dependency on Docker Desktop for Windows.
+
+The first step is to configure docker-cli on the Windows development machine to be able to connect to the remote docker engine.
+
+1. Download the precompiled **docker.exe** version of the docker-cli from [the Docker static binaries](https://download.docker.com/win/static/stable/x86_64/docker-20.10.12.zip). You can also download the official **cli** project from [docker/cli GitHub](https://github.com/docker/cli) and compile it following the repo instructions.
+2. Extract the **docker.exe** to a directory in your development machine. For example, _C:\Docker\bin_
+3. Open **About your PC** -> **System Info** -> **Advanced system settings**
+4. Select **Advanced** -> **Environment variables** -> Under **User variables** check **Path**
+5. Edit the **Path** variable and add the location of the **docker.exe**
+6. Open an elevated PowerShell session
+7. Check that Docker CLI is accessible using the command
+ ```powershell
+ docker --version
+ ```
+If everything was configured successfully, the previous command should output the Docker version, something like _Docker version 20.10.12, build e91ed57_.
+
+The second step is to configure the EFLOW virtual machine Docker engine to accept external connections, and add the appropriate firewall rules.
+
+>[!WARNING]
+>Exposing Docker engine to external connections may increase security risks. This configuration should only be used for development purposes. Make sure to revert the configuration to default settings after development is finished.
+
+1. Open an elevated PowerShell session and run the following commands
+
+ ```powershell
+ # Configure the EFLOW virtual machine Docker engine to accept external connections, and add the appropriate firewall rules.
+ Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 2375 -j ACCEPT"
+
+    # Create a copy of the EFLOW VM docker.service in the system folder.
+ Invoke-EflowVmCommand "sudo cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service"
+
+ # Replace the service execution line to listen for external connections.
+ Invoke-EflowVmCommand "sudo sed -i 's/-H fd:\/\// -H fd:\/\/ -H tcp:\/\/0.0.0.0:2375/g' /etc/systemd/system/docker.service"
+
+ # Reload the EFLOW VM services configurations.
+ Invoke-EflowVmCommand "sudo systemctl daemon-reload"
+
+ # Reload the Docker engine service.
+ Invoke-EflowVmCommand "sudo systemctl restart docker.service"
+
+ # Check that the Docker engine is listening to external connections.
+ Invoke-EflowVmCommand "sudo netstat -lntp | grep dockerd"
+ ```
+
+ The following is example output.
+
+ ```output
+ PS C:\> # Configure the EFLOW virtual machine Docker engine to accept external connections, and add the appropriate firewall rules.
+ PS C:\> Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 2375 -j ACCEPT"
+ PS C:\>
+ PS C:\> # Create a copy of the EFLOW VM docker.service in the system folder.
+ PS C:\> Invoke-EflowVmCommand "sudo cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service"
+ PS C:\>
+ PS C:\> # Replace the service execution line to listen for external connections.
+ PS C:\> Invoke-EflowVmCommand "sudo sed -i 's/-H fd:\/\// -H fd:\/\/ -H tcp:\/\/0.0.0.0:2375/g' /etc/systemd/system/docker.service"
+ PS C:\>
+ PS C:\> # Reload the EFLOW VM services configurations.
+ PS C:\> Invoke-EflowVmCommand "sudo systemctl daemon-reload"
+ PS C:\>
+ PS C:\> # Reload the Docker engine service.
+ PS C:\> Invoke-EflowVmCommand "sudo systemctl restart docker.service"
+ PS C:\>
+ PS C:\> # Check that the Docker engine is listening to external connections.
+ PS C:\> Invoke-EflowVmCommand "sudo netstat -lntp | grep dockerd"
+ tcp6 0 0 :::2375 :::* LISTEN 2790/dockerd
+ ```
+++
+1. The final step is to test the Docker connection to the EFLOW VM Docker engine. First, you will need the EFLOW VM IP address.
+ ```powershell
+ Get-EflowVmAddr
+ ```
+ >[!TIP]
+ >If the EFLOW VM was deployed without Static IP, the IP address may change across Windows host OS reboots or networking changes. Make sure you are using the correct EFLOW VM IP address every time you want to establish a remote Docker engine connection.
+
+ The following is example output.
+
+ ```output
+ PS C:\> Get-EflowVmAddr
+ [03/15/2022 15:22:30] Querying IP and MAC addresses from virtual machine (DESKTOP-J1842A1-EFLOW)
+ - Virtual machine MAC: 00:15:5d:6f:da:78
+ - Virtual machine IP : 172.31.24.105 retrieved directly from virtual machine
+ 00:15:5d:6f:da:78
+ 172.31.24.105
+ ```
+
+1. Using the obtained IP address, connect to the EFLOW VM Docker engine and run the Hello-World sample container. Replace \<EFLOW-VM-IP\> with the EFLOW VM IP address obtained in the previous step.
+ ```powershell
+ docker -H tcp://<EFLOW-VM-IP>:2375 run --rm hello-world
+ ```
+    You should see the container image being downloaded; the container then runs and produces the following output.
+
+ ```output
+ PS C:\> docker -H tcp://172.31.24.105:2375 run --rm hello-world
+ Unable to find image 'hello-world:latest' locally
+ latest: Pulling from library/hello-world
+ 2db29710123e: Pull complete
+ Digest: sha256:4c5f3db4f8a54eb1e017c385f683a2de6e06f75be442dc32698c9bbe6c861edd
+ Status: Downloaded newer image for hello-world:latest
+
+ Hello from Docker!
+ This message shows that your installation appears to be working correctly.
+
+ To generate this message, Docker took the following steps:
+ 1. The Docker client contacted the Docker daemon.
+ 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
+ (amd64)
+ 3. The Docker daemon created a new container from that image which runs the
+ executable that produces the output you are currently reading.
+ 4. The Docker daemon streamed that output to the Docker client, which sent it
+ to your terminal.
+
+ To try something more ambitious, you can run an Ubuntu container with:
+ $ docker run -it ubuntu bash
+
+ Share images, automate workflows, and more with a free Docker ID:
+ https://hub.docker.com/
+
+ For more examples and ideas, visit:
+ https://docs.docker.com/get-started/
+ ```
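Optionally, instead of passing `-H` on every command, you can point the Docker CLI at the EFLOW VM through the standard `DOCKER_HOST` environment variable. A sketch, assuming `Get-EflowVmAddr` returns the MAC address followed by the IP address as shown in the output above:

```powershell
# Point docker-cli at the EFLOW VM engine for the current PowerShell session.
# Re-run this if the EFLOW VM IP address changes (for example, after a reboot).
$addr = Get-EflowVmAddr
$eflowIp = $addr[1]                          # assumption: second element is the IP
$env:DOCKER_HOST = "tcp://${eflowIp}:2375"
docker version                               # now targets the remote engine without -H
```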
+
+## Create an Azure IoT Edge project
+
+The IoT Edge project template in Visual Studio creates a solution that can be deployed to IoT Edge devices. First you create an Azure IoT Edge solution, and then you generate the first module in that solution. Each IoT Edge solution can contain more than one module.
+
+> [!TIP]
+> The IoT Edge project structure created by Visual Studio is not the same as in Visual Studio Code.
+
+1. In Visual Studio, create a new project.
+
+1. On the **Create a new project** page, search for **Azure IoT Edge**. Select the project that matches the platform (Linux IoT Edge module) and architecture for your IoT Edge device, and select **Next**.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/create-new-project.png" alt-text="Create New Project":::
+
+1. On the **Configure your new project** page, enter a name for your project and specify the location, then select **Create**.
+
+1. On the **Add Module** window, select the type of module you want to develop. You can also select **Existing module** to add an existing IoT Edge module to your deployment. Specify your module name and module image repository.
+
+   Visual Studio autopopulates the repository URL with **localhost:5000/<module name\>**. If you use a local Docker registry for testing, then **localhost** is fine. If you use Azure Container Registry, then replace **localhost:5000** with the login server from your registry's settings. The login server looks like **_\<registry name\>_.azurecr.io**. The final result should look like **\<*registry name*\>.azurecr.io/_\<module name\>_**.
+
+ Select **Add** to add your module to the project.
+
+   ![Screenshot of how to add an application and module to a Visual Studio solution](./media/how-to-visual-studio-develop-csharp-module/add-module.png)
+
+Now, you have an IoT Edge project and an IoT Edge module in your Visual Studio solution.
+
+The module folder contains a file for your module code, named either `Program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
+
+The project folder contains a list of all the modules included in that project. Right now it should show only one module, but you can add more.
+
+The project folder also contains a file named `deployment.template.json`. This file is a template of an IoT Edge deployment manifest, which defines all the modules that will run on a device along with how they'll communicate with each other. For more information about deployment manifests, see [Learn how to deploy modules and establish routes](module-composition.md). If you open this deployment template, you see that the two runtime modules, **edgeAgent** and **edgeHub**, are included, along with the custom module that you created in this Visual Studio project. A fourth module named **SimulatedTemperatureSensor** is also included. This default module generates simulated data that you can use to test your modules, or delete it if it's not necessary. To see how the simulated temperature sensor works, view the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
+
+### Set IoT Edge runtime version
+
+The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is version 1.2. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio to match.
+
+1. In the Solution Explorer, right-click the name of your project and select **Set IoT Edge runtime version**.
+
+ :::image type="content" source="./media/how-to-visual-studio-develop-module/set-iot-edge-runtime-version.png" alt-text="Right-click your project name and select set IoT Edge runtime version.":::
+
+1. Use the drop-down menu to choose the runtime version that your IoT Edge devices are running, then select **OK** to save your changes.
+
+1. Re-generate your deployment manifest with the new runtime version. Right-click the name of your project and select **Generate deployment for IoT Edge**.
+
+> [!WARNING]
+> If you are changing the IoT Edge runtime version, make sure the _deployment templates_ reflect the necessary changes. Currently there's a known issue with Azure IoT Edge Tools that won't change the _"schemaVersion"_ inside the _"properties.desired"_ object of the _"$edgeHub"_ module (the last section of the JSON file).
++
+### Set up Visual Studio 2019 remote Docker engine instance
+
+Configure the Azure IoT Edge Tools extension for Visual Studio 2019 to use the remote Docker engine running inside the EFLOW virtual machine.
+
+1. Select **Tools** -> **Azure IoT Edge tools** -> **IoT Edge tools settings...**
+
+1. Replace the _DOCKER\_HOST_ localhost value with the EFLOW VM IP address. If you don't remember the IP address, use the EFLOW PowerShell cmdlet `Get-EflowVmAddr` to obtain it. For example, if the EFLOW VM IP address is _172.20.1.100_, then the new value should be _tcp://172.20.1.100:2375_.
+
+ ![Screenshot of IoT Edge Tools settings](./media/tutorial-develop-for-linux-on-windows/iot-edge-tools-settings.png)
+
+1. Select **OK**
+
+## Develop your module
+
+When you add a new module, it comes with default code that is ready to be built and deployed to a device so that you can start testing without touching any code. The module code is located within the module folder in a file named `Program.cs` (for C#) or `main.c` (for C).
+
+The default solution is built so that the simulated data from the **SimulatedTemperatureSensor** module is routed to your module, which takes the input and then sends it to IoT Hub.
+
+When you're ready to customize the module template with your own code, use the [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build other modules that address the key needs for IoT solutions such as security, device management, and reliability.
++
+## Build and push a single module
+
+Typically, you'll want to test and debug each module before running it within an entire solution with multiple modules. Because the solution is built and debugged using the Docker engine running inside the EFLOW VM, the first step is to build and publish the module to enable remote debugging.
+
+1. In **Solution Explorer**, right-click the module folder and select **Set as StartUp Project** from the menu.
+
+ ![Screenshot of setting the start-up project](./media/how-to-visual-studio-develop-csharp-module/module-start-up-project.png)
+
+1. To debug the C# Linux module, we need to update Dockerfile.amd64.debug to enable SSH service. Update the Dockerfile.amd64.debug file to use the following template: [Dockerfile for Azure IoT Edge AMD64 C# Module with Remote Debug Support](https://raw.githubusercontent.com/Azure/iotedge-eflow/main/debugging/Dockerfile.amd64.debug).
+
+ > [!NOTE]
+ > When choosing **Debug**, Visual Studio uses `Dockerfile.(amd64|windows-amd64).debug` to build Docker images. This includes the .NET Core command-line debugger VSDBG in your container image while building it. For production-ready IoT Edge modules, we recommend that you use the **Release** configuration, which uses `Dockerfile.(amd64|windows-amd64)` without VSDBG.
+
+ >[!WARNING]
+    > In the last line of the template, _ENTRYPOINT ["dotnet", "IotEdgeModule1.dll"]_, make sure the name of the DLL matches the name of your IoT Edge module project.
++
+ ![Screenshot of setting the Dockerfile template](./media/tutorial-develop-for-linux-on-windows/visual-studio-solution.png)
+
+1. To establish an SSH connection with the Linux module, we need to create an RSA key. Open an elevated PowerShell session and run the following command to create a new RSA key. Make sure you save the RSA key under the same IoT Edge module folder and name the key _id\_rsa_.
+
+    ```cmd
+    ssh-keygen -t rsa -b 4096 -m PEM
+    ```
+
+ ![Screenshot of how to create an SSH key](./media/tutorial-develop-for-linux-on-windows/ssh-keygen.png)
+
+1. If you're using a private registry like Azure Container Registry (ACR), use the following Docker command to sign in to it. You can get the username and password from the **Access keys** page of your registry in the Azure portal, or from the command line as shown in the sketch after these steps. If you're using a local registry, you can [run a local registry](https://docs.docker.com/registry/deploying/#run-a-local-registry).
+
+ ```cmd
+ docker -H tcp://<EFLOW-VM-IP>:2375 login -u <ACR username> -p <ACR password> <ACR login server>
+ ```
+1. In **Solution Explorer**, right-click the project folder and select **Build and Push IoT Edge Modules** to build and push the Docker image for each module.
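A sketch of retrieving the ACR admin credentials from the command line instead of the portal, assuming the admin user is enabled on your registry and the Azure CLI is installed (the registry name is a placeholder):

```powershell
# Sketch: fetch the ACR admin username and first password, then sign in
# through the EFLOW VM Docker engine.
$acrName = "<registry name>"                 # placeholder
$acrUser = az acr credential show --name $acrName --query username -o tsv
$acrPass = az acr credential show --name $acrName --query "passwords[0].value" -o tsv
docker -H tcp://<EFLOW-VM-IP>:2375 login -u $acrUser -p $acrPass "$acrName.azurecr.io"
```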
++
+## Deploy and debug the solution
+
+1. If you're using a private registry like Azure Container Registry, you need to add your registry login information to the runtime settings found in the file `deployment.template.json`. Replace the placeholders with your actual ACR admin username, password, and registry name.
+
+ ```json
+ "settings": {
+ "minDockerVersion": "v1.25",
+ "loggingOptions": "",
+ "registryCredentials": {
+ "registry1": {
+ "username": "<username>",
+ "password": "<password>",
+ "address": "<registry name>.azurecr.io"
+ }
+ }
+ }
+ ```
+
+ >[!NOTE]
+ >This article uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
+
+1. It's necessary to expose port 22 to access the module SSH service. This tutorial uses 10022 as the host port, but you may specify a different port, which will be used as an SSH port to connect into the Linux C# module. You need to add the SSH port information to the "createOptions" of this Linux module setting found in the file `deployment.debug.template.json`.
+
+ ```json
+ "createOptions": {
+ "HostConfig": {
+ "Privileged": true,
+ "PortBindings": {
+ "22/tcp": [
+ {
+ "HostPort": "10022"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+1. In **Solution Explorer**, right-click the project folder and select **Generate Deployment for IoT Edge** to build the new IoT Edge deployment json.
+
+1. Open **Cloud Explorer** by clicking **View** > **Cloud Explorer**. Make sure you've logged in to Visual Studio 2019.
+
+1. In **Cloud Explorer**, expand your subscription, find your Azure IoT Hub and the Azure IoT Edge device you want to deploy.
+
+1. Right-click on the IoT Edge device and choose **Create deployment**. Navigate to the debug deployment manifest configured for your platform located in the **config** folder in your Visual Studio solution, such as `deployment.amd64.json`.
+
+1. In **Cloud Explorer**, right-click your edge device and refresh to see the new module running along with **$edgeAgent** and **$edgeHub** modules.
+
+1. Using an elevated PowerShell session, run the following commands:
+
+ 1. Get the moduleId based on the name used for the Linux C# module. Make sure to replace the _\<iot-edge-module-name\>_ placeholder with your module's name.
+
+ ```powershell
+ $moduleId = Invoke-EflowVmCommand "sudo docker ps -aqf name=<iot-edge-module-name>"
+ ```
+
+   1. Check that the $moduleId is correct. If the variable is empty, make sure you're using the correct module name.
+
+ 1. Start the SSH service inside the Linux container
+
+ ```powershell
+ Invoke-EflowVmCommand "sudo docker exec -it -d $moduleId service ssh start"
+ ```
+ 1. Open the module SSH port on the EFLOW VM (this tutorial uses port 10022)
+
+ ```powershell
+ Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 10022 -j ACCEPT"
+ ```
+ >[!WARNING]
+    >For security reasons, every time the EFLOW VM reboots, the iptables rule is deleted and the configuration returns to the original settings. Also, the module SSH service has to be started again manually. A helper sketch that reapplies both settings appears after these steps.
+
+1. After successfully starting the SSH service, select **Debug** -> **Attach to Process**, set **Connection Type** to **SSH**, and **Connection target** to the IP address of your EFLOW VM. If you don't know the EFLOW VM IP, you can use the `Get-EflowVmAddr` PowerShell cmdlet. First, type the IP address and then press Enter. In the pop-up window, input the following configurations:
+
+ | Field | Value |
+ ||-|
+ | **Hostname** | Use the EFLOW VM IP |
+ | **Port** | 10022 (Or the one you used in your deployment configuration) |
+ | **Username** | root |
+ | **Authentication type** | Private Key |
+ | **Private Key File** | Full path to the id_rsa that was previously created in Step 5 |
+ | **Passphrase** | The one used for the key created in Step 5 |
++
+ ![Screenshot of how to connect to a remote system](./media/tutorial-develop-for-linux-on-windows/connect-remote-system.png)
+
+1. After successfully connecting to the module using SSH, you can choose the process and select **Attach**. For the C# module, choose the **dotnet** process and set **Attach to** to **Managed (CoreCLR)**. It may take 10 to 20 seconds the first time you do so.
+
+ [ ![Screenshot of how to attach an edge module process.](./media/tutorial-develop-for-linux-on-windows/attach-process.png) ](./media/tutorial-develop-for-linux-on-windows/attach-process.png#lightbox)
+
+1. Set a breakpoint to inspect the module.
+
+ * If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**.
+ * If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**.
+
+1. The output of the **SimulatedTemperatureSensor** should be redirected to **input1** of the custom Linux C# module. The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
+
+ ![Screenshot of how to debug a single module](./media/tutorial-develop-for-linux-on-windows/debug-single-module.png)
+
+1. Press **Ctrl + F5** or select the stop button to stop debugging.
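As noted in the warning earlier, the iptables rule and the module's SSH service don't survive an EFLOW VM reboot. A small helper sketch that reapplies both, using this tutorial's port and the module-name placeholder:

```powershell
# Reapply the debug settings after an EFLOW VM reboot.
# Assumptions from this tutorial: SSH host port 10022, placeholder module name.
$moduleId = Invoke-EflowVmCommand "sudo docker ps -aqf name=<iot-edge-module-name>"
Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 10022 -j ACCEPT"
Invoke-EflowVmCommand "sudo docker exec -it -d $moduleId service ssh start"
```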
+
+## Clean up resources
+
+If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
+
+Otherwise, you can delete the local configurations and the Azure resources that you used in this article to avoid charges.
++
+## Next steps
+
+In this tutorial, you set up Visual Studio on your development machine and deployed and debugged your first IoT Edge module from it. Now that you know the basic concepts, try adding functionality to a module so that it can analyze the data passing through it. Choose your preferred language:
+
+> [!div class="nextstepaction"]
+> [C](tutorial-c-module.md)
+> [C#](tutorial-csharp-module.md)
+> [Java](tutorial-java-module.md)
+> [Node.js](tutorial-node-module.md)
+> [Python](tutorial-python-module.md)
iot-hub Iot Hub Vscode Iot Toolkit Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-vscode-iot-toolkit-cloud-device-messaging.md
Title: Use Azure IoT Tools for VSCode to manager IT Hub messaging
+ Title: Use Azure IoT Tools for VSCode to manage IoT Hub messaging
description: Learn how to use Azure IoT Tools for Visual Studio Code to monitor device to cloud messages and send cloud to device messages in Azure IoT Hub.
To send a message from your IoT hub to your device, follow these steps:
You've learned how to monitor device-to-cloud messages and send cloud-to-device messages between your IoT device and Azure IoT Hub.
key-vault Policy Grammar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/policy-grammar.md
+
+ Title: Azure Key Vault secure key release policy grammar
+description: Azure Key Vault secure key release policy grammar
+++++++ Last updated : 03/21/2022+
+
+# Azure Key Vault secure key release policy grammar
+
key-vault Policy Grammar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/policy-grammar.md
+
+ Title: Azure Managed HSM Secure key release policy grammar
+description: Managed HSM Secure key release policy grammar
+keywords:
+++++++ Last updated : 03/21/2022+
+
+# Azure Managed HSM secure key release policy grammar
+
key-vault About Managed Storage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-managed-storage-account-keys.md
The following permissions can be used when authorizing a user or application pri
- Permissions for privileged operations - *purge*: Purge (permanently delete) a managed storage account
-For more information, see the [Storage account operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
+For more information, see the [Storage account operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
## Next steps
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
The following permissions can be used, on a per-principal basis, in the secrets
- Permissions for privileged operations - *purge*: Purge (permanently delete) a deleted secret
-For more information on working with secrets, see [Secret operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/vaults/createorupdate) and [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+For more information on working with secrets, see [Secret operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
How-to guides to control access in Key Vault: - [Assign a Key Vault access policy using CLI](../general/assign-access-policy-cli.md)
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Create a resource group with [az group create](/cli/azure/group#az_group_create)
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer. The following diagram shows the resources created in this quickstart:
-
+In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
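For reference, creating such an internal load balancer from the CLI could look like the following sketch; the subnet reference is what makes the load balancer internal, and all resource names are placeholders:

```powershell
# Sketch: create an internal Standard load balancer in an existing subnet.
az network lb create `
    --resource-group <resource-group> `
    --name <load-balancer-name> `
    --sku Standard `
    --vnet-name <vnet-name> `
    --subnet <subnet-name> `
    --frontend-ip-name <frontend-name> `
    --backend-pool-name <backend-pool-name>
```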
### Configure the virtual network
Add the virtual machines to the back-end pool with [az network nic ip-config add
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer. The following diagram shows the resources created in this quickstart:
-
+In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
### Configure the virtual network
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Previously updated : 08/09/2021 Last updated : 03/21/2022 #Customer intent: I want to create an internal load balancer so that I can load balance internal traffic to VMs.
# Quickstart: Create an internal load balancer to load balance VMs using the Azure portal
-Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and three virtual machines.
+Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and two virtual machines.
## Prerequisites
Get started with Azure Load Balancer by using the Azure portal to create an inte
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). --
-# [**Standard SKU**](#tab/option-1-create-internal-load-balancer-standard)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
+## Create the virtual network
When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
-A private IP address in the virtual network is configured as the frontend (named as **LoadBalancerFrontend** by default) for the load balancer.
+A private IP address in the virtual network is configured as the frontend for the load balancer. The frontend IP address can be **Static** or **Dynamic**.
-The frontend IP address can be **Static** or **Dynamic**.
+An Azure Bastion host is created to securely manage the virtual machines and install IIS.
-## Create the virtual network
-
-In this section, you'll create a virtual network and subnet.
+In this section, you'll create a virtual network, subnet, and Azure Bastion host.
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
In this section, you'll create a virtual network and subnet.
| Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. | | **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **(Europe) West Europe** |
+ | Region | Select **West US 3** |
4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
In this section, you'll create a virtual network and subnet.
12. Select **Create**.
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateIntLBQS-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
-## <a name="create-load-balancer-resources"></a> Create load balancer
+## Create load balancer
In this section, you create a load balancer that load balances virtual machines.
During the creation of the load balancer, you'll configure:
| Resource group | Select **CreateIntLBQS-rg**. | | **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **(Europe) West Europe**. |
- | Type | Select **Internal**. |
+ | Region | Select **West US 3**. |
| SKU | Leave the default **Standard**. |
+ | Type | Select **Internal**. |
+ | Tier | Leave the default of **Regional**. |
+
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-standard-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true"::: 4. Select **Next: Frontend IP configuration** at the bottom of the page.
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-6. Enter **LoadBalancerFrontend** in **Name**.
+6. Enter **myFrontend** in **Name**.
7. Select **myBackendSubnet** in **Subnet**.
During the creation of the load balancer, you'll configure:
| - | -- | | Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |
During the creation of the load balancer, you'll configure:
22. Select **Create**. > [!NOTE]
- > In this example you created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > In this example you'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
-## Create virtual machines
-
-In this section, you'll create three VMs (**myVM1**, **myVM2** and **myVM3**) in three different zones (**Zone 1**, **Zone 2**, and **Zone 3**).
-
-These VMs are added to the backend pool of the load balancer that was created earlier.
+## Create NAT gateway
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
-
-3. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreateIntLBQS-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
- | Availability Options | Select **Availability zones** |
- | Availability zone | Select **1** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Leave the default of unchecked. |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
+2. In **NAT gateways**, select **+ Create**.
-4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-5. In the Networking tab, select or enter:
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
- | **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the box. |
- | **Load balancing settings** |
- | Load-balancing options | Select **Azure load balancing** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
-
-6. Select **Review + create**.
-
-7. Review the settings, and then select **Create**.
-
-8. Follow the steps 1 through 7 to create two more VMs with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2| VM 3|
- | - | -- ||
- | Name | **myVM2** |**myVM3**|
- | Availability zone | **2** |**3**|
- | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
--
-# [**Basic SKU**](#tab/option-2-create-load-balancer-basic)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-
-## Create the virtual network
-
-In this section, you'll create a virtual network and subnet.
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-
-2. In **Virtual networks**, select **+ Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **(Europe) West Europe** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Under **Subnet name**, select the word **default**.
-
-7. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/27** |
-
-8. Select **Save**.
-
-9. Select the **Security** tab.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-
-11. Select the **Review + create** tab or select the **Review + create** button.
-
-12. Select **Create**.
-
-## Create load balancer
-
-In this section, you create a load balancer that load balances virtual machines.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. In the **Load balancer** page, select **Create**.
-
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
+ | - | -- |
| **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateIntLBQS-rg**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(Europe) West Europe**. |
- | Type | Select **Internal**. |
- | SKU | Select **Basic**. |
-
- :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-basic-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
-
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-
-6. Enter **LoadBalancerFrontend** in **Name**.
-
-7. Select **myBackendSubnet** in **Subnet**.
-
-8. Select **Dynamic** for **Assignment**.
-
-9. Select **Add**.
-
-10. Select **Next: Backend pools** at the bottom of the page.
-
-11. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-12. Enter **myBackendPool** for **Name** in **Add backend pool**.
-
-13. Select **Virtual machines** in **Associated to**.
-
-14. Select **IPv4** or **IPv6** for **IP version**.
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreateIntLBQS-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **West US 3**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
-15. Select **Add**.
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-16. Select the **Next: Inbound rules** button at the bottom of the page.
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-17. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
-18. In **Add load balancing rule**, enter or select the following information:
+7. Select **OK**.
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | Floating IP | Select **Disabled**. |
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-19. Select **Add**.
+9. In **Virtual network**, select **myVNet**.
-20. Select the blue **Review + create** button at the bottom of the page.
+10. Select **myBackendSubnet** under **Subnet name**.
-21. Select **Create**.
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
- > [!NOTE]
- > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
- > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
+12. Select **Create**.
## Create virtual machines
-In this section, you'll create three VMs (**myVM1**, **myVM2**, and **myVM3**).
+In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
-The three VMs will be added to an availability set named **myAvailabilitySet**.
+These VMs are added to the backend pool of the load balancer that was created earlier.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
-3. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+3. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
| Setting | Value | |--|-|
The three VMs will be added to an availability set named **myAvailabilitySet**.
| Resource Group | Select **CreateIntLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
- | Availability Options | Select **Availability set** |
- | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet** in **Name**. </br> Select **OK** |
- | Image | **Windows Server 2019 Datacenter - Gen1** |
+ | Region | Select **(US) West US 3** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **1** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
| Azure Spot instance | Leave the default of unchecked. | | Size | Choose VM size or take default setting | | **Administrator account** | | | Username | Enter a username | | Password | Enter a password | | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
The three VMs will be added to an availability set named **myAvailabilitySet**.
| Setting | Value | |-|-| | **Network interface** | |
- | Virtual network | Select **myVNet** |
- | Subnet | Select **myBackendSubnet** |
- | Public IP | Select **None** |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
| NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> In **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
| **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the box |
- | **Load balancing settings** | |
- | Load-balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**>. |
- | Select a backend pool | Select **myBackendPool**. |
-
+ | Place this virtual machine behind an existing load-balancing solution? | Select the box. |
+ | **Load balancing settings** |
+ | Load-balancing options | Select **Azure load balancing** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+
6. Select **Review + create**. 7. Review the settings, and then select **Create**.
-8. Follow the steps 1 through 7 to create two more VMs with the following values and all the other settings the same as **myVM1**:
+8. Follow the steps 1 through 7 to create one more VM with the following values and all the other settings the same as **myVM1**:
- | Setting | VM 2 | VM 3 |
- | - | -- ||
- | Name | **myVM2** |**myVM3**|
- | Availability set| Select **myAvailabilitySet** | Select **myAvailabilitySet**|
- | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
+ | Setting | VM 2 |
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability zone | **2** |
+ | Network security group | Select the existing **myNSG** |
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]

## Create test virtual machine
In this section, you'll create a VM named **myTestVM**. This VM will be used to
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
2. In **Create a virtual machine**, type or select the values in the **Basics** tab:

| Setting | Value |
- |--|-|
+ |-- | - |
| **Project Details** |  |
| Subscription | Select your Azure subscription |
| Resource Group | Select **CreateIntLBQS-rg** |
| **Instance details** |  |
| Virtual machine name | Enter **myTestVM** |
- | Region | Select **(Europe) West Europe** |
+ | Region | Select **(US) West US 3** |
| Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
| Azure Spot instance | Leave the default of unselected. |
| Size | Choose VM size or take default setting |
| **Administrator account** |  |
In this section, you'll create a VM named **myTestVM**. This VM will be used to
3. In the **Overview** page, select **Connect**, then **Bastion**.
-4. Select **Use Bastion**.
-
-5. Enter the username and password entered during VM creation.
+4. Enter the username and password entered during VM creation.
-6. Select **Connect**.
+5. Select **Connect**.
-7. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell** > **Windows PowerShell**.
+6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell** > **Windows PowerShell**.
-8. In the PowerShell Window, execute the following commands to:
+7. In the PowerShell Window, execute the following commands to:
* Install the IIS server.
* Remove the default iisstart.htm file.
In this section, you'll create a VM named **myTestVM**. This VM will be used to
Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
```
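For reference, the full sequence those bullets describe might look like the following sketch; the excerpt only shows the last command, and the feature name and default file path below are standard IIS values assumed here rather than taken from the excerpt:

```powershell
# Install the IIS server role (assumes the Web-Server feature is available on this Windows Server image)
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Remove the default IIS start page
Remove-Item -Path "C:\inetpub\wwwroot\iisstart.htm"

# Create a custom start page that reports this VM's computer name
Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
```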
-9. Close the Bastion session with **myVM1**.
+8. Close the Bastion session with **myVM1**.
-10. Repeat steps 1 through 9 to install IIS and the updated iisstart.htm file on **myVM2** and **myVM3**.
+9. Repeat steps 1 through 8 to install IIS and the updated iisstart.htm file on **myVM2**.
## Test the load balancer
In this section, you'll test the load balancer by connecting to the **myTestVM**
2. Select **myLoadBalancer**.
-3. Make note or copy the address next to **Private IP Address** in the **Overview** of **myLoadBalancer**.
+3. Make note or copy the address next to **Private IP address** in the **Overview** of **myLoadBalancer**. If you can't see the **Private IP address** field, select **See more** in the information window.
4. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
In this section, you'll test the load balancer by connecting to the **myTestVM**
6. In the **Overview** page, select **Connect**, then **Bastion**.
-7. Select **Use Bastion**.
+7. Enter the username and password entered during VM creation.
-8. Enter the username and password entered during VM creation.
+8. Open **Internet Explorer** on **myTestVM**.
-9. Open **Internet Explorer** on **myTestVM**.
-
-10. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser.
+9. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**.
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot shows a browser window displaying the customized page, as expected." border="true":::
When no longer needed, delete the resource group, load balancer, and all related
In this quickstart, you:
-* Created an Azure Standard or Basic Internal Load Balancer
-* Attached 3 VMs to the load balancer.
-* Configured the load balancer traffic rule, health probe, and then tested the load balancer.
+* Created an internal Azure Load Balancer
+
+* Attached 2 VMs to the load balancer
+
+* Configured the load balancer traffic rule, health probe, and then tested the load balancer
To learn more about Azure Load Balancer, continue to:

> [!div class="nextstepaction"]
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
In this section, you create a load balancer that load balances virtual machines.
When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
-The following diagram shows the resources created in this quickstart:
## Configure virtual network - Standard

Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
In this section, you create a load balancer that load balances virtual machines.
When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
-The following diagram shows the resources created in this quickstart:
## Configure virtual network - Basic

Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
+
+ Title: Customer-managed keys
+
+description: 'Learn about using customer-managed keys to improve data security with Azure Machine Learning.'
+Last updated : 03/17/2022
+# Customer-managed keys for Azure Machine Learning
+
+Azure Machine Learning is built on top of multiple Azure services. While the data is stored securely using encryption keys that Microsoft provides, you can enhance security by also providing your own (customer-managed) keys. The keys you provide are stored securely using Azure Key Vault.
++
+In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors:
+
+* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you haven't created any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk for your compute clusters.
+* Cleans up your local scratch disk between runs.
+* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault.
+
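As a minimal sketch of enabling the flag at creation time with the Python SDK (the workspace name, subscription ID, and resource group below are placeholders):

```python
from azureml.core import Workspace

# Create a workspace with the high business impact (HBI) flag enabled.
# The flag can only be set at creation time; names and IDs are placeholders.
ws = Workspace.create(name="my-hbi-workspace",
                      subscription_id="<subscription-id>",
                      resource_group="<my-resource-group>",
                      location="eastus",
                      hbi_workspace=True)
```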
+> [!TIP]
+> The `hbi_workspace` flag does not impact encryption in transit, only encryption at rest.
+
+## Prerequisites
+
+* An Azure subscription.
+* An Azure Key Vault instance. The key vault contains the key(s) used to encrypt your services.
+
+ * The key vault instance must enable soft delete and purge protection.
+ * The managed identity for the services secured by a customer-managed key must have the following permissions in key vault:
+
+ * wrap key
+ * unwrap key
+ * get
+
+ For example, the managed identity for Azure Cosmos DB would need to have those permissions to the key vault.
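One way to grant those three permissions from the Azure CLI is an access policy assignment; a sketch, assuming placeholder vault and identity values:

```azurecli
# Grant get, wrap, and unwrap key permissions to a managed identity (placeholder values).
az keyvault set-policy --name <my-key-vault> \
    --object-id <managed-identity-object-id> \
    --key-permissions get wrapKey unwrapKey
```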
+
+## Limitations
+
+* The customer-managed key for resources the workspace depends on can't be updated after workspace creation.
+* Resources managed by Microsoft in your subscription can't transfer ownership to you.
+* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace.
+
+## How workspace metadata is stored
+
+The following resources store metadata for your workspace:
+
+| Service | How it's used |
+| -- | -- |
+| Azure Cosmos DB | Stores run history data. |
+| Azure Cognitive Search | Stores indices that are used to help query your machine learning content. |
+| Azure Storage Account | Stores other metadata such as Azure Machine Learning pipelines data. |
+
+Your Azure Machine Learning workspace reads and writes data using its managed identity. This identity is granted access to the resources using a role assignment (Azure role-based access control) on the data resources. The encryption key you provide is used to encrypt data that is stored on Microsoft-managed resources. It's also used to create indices for Azure Cognitive Search, which are created at runtime.
+
+## Customer-managed keys
+
+When you __don't use a customer-managed key__, Microsoft creates and manages these resources in a Microsoft owned Azure subscription and uses a Microsoft-managed key to encrypt the data.
+
+When you __use a customer-managed key__, these resources are _in your Azure subscription_ and encrypted with your key. While they exist in your subscription, these resources are __managed by Microsoft__. They're automatically created and configured when you create your Azure Machine Learning workspace.
+
+> [!IMPORTANT]
+> When using a customer-managed key, the costs for your subscription will be higher because these resources are in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+These Microsoft-managed resources are located in a new Azure resource group that is created in your subscription. This group is in addition to the resource group for your workspace. This resource group will contain the Microsoft-managed resources that your key is used with. The resource group will be named using the pattern `<Azure Machine Learning workspace resource group name><GUID>`.
+
+> [!TIP]
+> * The [__Request Units__](/azure/cosmos-db/request-units) for the Azure Cosmos DB automatically scale as needed.
+> * If your Azure Machine Learning workspace uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the workspace. You __cannot provide your own VNet for use with the Microsoft-managed resources__. You also __cannot modify the virtual network__. For example, you cannot change the IP address range that it uses.
+
+> [!IMPORTANT]
+> If your subscription does not have enough quota for these services, a failure will occur.
+
+> [!WARNING]
+> __Don't delete the resource group__ that contains this Azure Cosmos DB instance, or any of the resources automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure Machine Learning workspace that uses it. The resource group resources are deleted when the associated workspace is deleted.
+
+## How compute data is stored
+
+Azure Machine Learning uses compute resources to train and deploy machine learning models. The following table describes the compute options and how data is encrypted by each one:
+
+| Compute | Encryption |
+| -- | -- |
+| Azure Container Instance | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md). |
+| Azure Kubernetes Service | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Services](/azure/aks/azure-disk-customer-managed-keys). |
+| Azure Machine Learning compute instance | Local scratch disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
+| Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
+
+**Compute cluster**
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no runs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
+
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your run) and encryption support is limited to system-managed keys only.
+
+**Compute instance**
+The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft-managed keys. Customer-managed key encryption isn't supported for the OS and temp disks.
+
+### HBI_workspace flag
+
+* The `hbi_workspace` flag can only be set when a workspace is created. It can't be changed for an existing workspace.
+* When this flag is set to True, it may increase the difficulty of troubleshooting issues because less telemetry data is sent to Microsoft. There's less visibility into success rates or problem types. Microsoft may not be able to react as proactively when this flag is True.
+
+To enable the `hbi_workspace` flag when creating an Azure Machine Learning workspace, follow the steps in one of the following articles:
+
+* [How to create and manage a workspace](how-to-manage-workspace.md).
+* [How to create and manage a workspace using the Azure CLI](how-to-manage-workspace-cli.md).
+* [How to create a workspace using Hashicorp Terraform](how-to-manage-workspace-terraform.md).
+* [How to create a workspace using Azure Resource Manager templates](how-to-create-workspace-template.md).
+
+## Next steps
+
+* [How to configure customer-managed keys with Azure Machine Learning](how-to-setup-customer-managed-keys.md).
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Azure Machine Learning uses a variety of Azure data storage services and compute
## Encryption at rest
-> [!IMPORTANT]
-> If your workspace contains sensitive data we recommend setting the [hbi_workspace flag](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) while creating your workspace. The `hbi_workspace` flag can only be set when a workspace is created. It cannot be changed for an existing workspace.
-
-The `hbi_workspace` flag controls the amount of [data Microsoft collects for diagnostic purposes](#microsoft-collected-data) and enables [additional encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). In addition, it enables the following actions:
-
-* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster provided you have not created any previous clusters in that subscription. Else, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters
-* Cleans up your local scratch disk between runs
-* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault
-
-When this flag is set to True, one possible impact is increased difficulty troubleshooting issues. This could happen because some telemetry isn't sent to Microsoft and there is less visibility into success rates or problem types, and therefore may not be able to react as proactively when this flag is True.
-
-> [!TIP]
-> The `hbi_workspace` flag does not impact encryption in transit, only encryption at rest.
+Azure Machine Learning relies on multiple Azure services, each of which has its own encryption capabilities.
### Azure Blob storage
-Azure Machine Learning stores snapshots, output, and logs in the Azure Blob storage account that's tied to the Azure Machine Learning workspace and your subscription. All the data stored in Azure Blob storage is encrypted at rest with Microsoft-managed keys.
+Azure Machine Learning stores snapshots, output, and logs in the Azure Blob storage account (default storage account) that's tied to the Azure Machine Learning workspace and your subscription. All the data stored in Azure Blob storage is encrypted at rest with Microsoft-managed keys.
For information on how to use your own keys for data stored in Azure Blob storage, see [Azure Storage encryption with customer-managed keys in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
For information on regenerating the access keys, see [Regenerate storage access
Azure Machine Learning stores metadata in an Azure Cosmos DB instance. This instance is associated with a Microsoft subscription managed by Azure Machine Learning. All the data stored in Azure Cosmos DB is encrypted at rest with Microsoft-managed keys.
-To use your own (customer-managed) keys to encrypt the Azure Cosmos DB instance, you can create a dedicated Cosmos DB instance for use with your workspace. We recommend this approach if you want to store your data, such as run history information, outside of the multi-tenant Cosmos DB instance hosted in our Microsoft subscription.
-
-To enable provisioning a Cosmos DB instance in your subscription with customer-managed keys, perform the following actions:
-
-* Register the Microsoft.MachineLearning and Microsoft.DocumentDB resource providers in your subscription, if not done already.
-
-* Use the following parameters when creating the Azure Machine Learning workspace. Both parameters are mandatory and supported in SDK, Azure CLI, REST APIs, and Resource Manager templates.
-
- * `cmk_keyvault`: This parameter is the resource ID of the key vault in your subscription. This key vault needs to be in the same region and subscription that you will use for the Azure Machine Learning workspace.
-
- * `resource_cmk_uri`: This parameter is the full resource URI of the customer managed key in your key vault, including the [version information for the key](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning).
-
- > [!NOTE]
- > Enabling soft delete and purge protection on the CMK key vault instance is required before creating an encrypted machine learning workspace to protect against accidental data loss in case of vault deletion.
-
- > [!NOTE]
- > This key vault instance can be different than the key vault that is created by Azure Machine Learning when you provision the workspace. If you want to use the same key vault instance for the workspace, pass the same key vault while provisioning the workspace by using the [key_vault parameter](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-).
--
-If you need to __rotate or revoke__ your key, you can do so at any time. When rotating a key, Cosmos DB will start using the new key (latest version) to encrypt data at rest. When revoking (disabling) a key, Cosmos DB takes care of failing requests. It usually takes an hour for the rotation or revocation to be effective.
-
-For more information on customer-managed keys with Cosmos DB, see [Configure customer-managed keys for your Azure Cosmos DB account](../cosmos-db/how-to-setup-cmk.md).
+When using your own (customer-managed) keys to encrypt the Azure Cosmos DB instance, a Microsoft managed Azure Cosmos DB instance is created in your subscription. This instance is created in a Microsoft-managed resource group, which is different than the resource group for your workspace. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
### Azure Container Registry
Each virtual machine also has a local temporary disk for OS operations. If you w
**Compute instance**
The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft-managed keys. Customer-managed key encryption is not supported for the OS and temp disks.
+For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
+### Azure Databricks

Azure Databricks can be used in Azure Machine Learning pipelines. By default, the Databricks File System (DBFS) used by Azure Databricks is encrypted using a Microsoft-managed key. To configure Azure Databricks to use customer-managed keys, see [Configure customer-managed keys on default (root) DBFS](/azure/databricks/security/customer-managed-keys-dbfs).
Each workspace has an associated system-assigned managed identity that has the s
* [Get data from a datastore](how-to-create-register-datasets.md)
* [Connect to data](how-to-connect-data-ui.md)
* [Train with datasets](how-to-train-with-datasets.md)
+* [Customer-managed keys](concept-customer-managed-keys.md).
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
Previously updated : 10/21/2021
Last updated : 03/08/2022
# Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
The following example template demonstrates how to create a workspace with three
* Enable encryption for the workspace.
* Use an existing Azure Key Vault to retrieve customer-managed keys. Customer-managed keys are used to create a new Cosmos DB instance for the workspace.
- [!INCLUDE [machine-learning-customer-managed-keys.md](../../includes/machine-learning-customer-managed-keys.md)]
> [!IMPORTANT]
> Once a workspace has been created, you cannot change the settings for confidential data, encryption, key vault ID, or key identifiers. To change these values, you must create a new workspace using the new values.
-For more information, see [Encryption at rest](concept-data-encryption.md#encryption-at-rest).
+For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
> [!IMPORTANT]
> There are some specific requirements your subscription must meet before using this template:
> * You must have an existing Azure Key Vault that contains an encryption key.
> * The Azure Key Vault must be in the same region where you plan to create the Azure Machine Learning workspace.
> * You must specify the ID of the Azure Key Vault and the URI of the encryption key.
+>
+> For steps on creating the vault and key, see [Configure customer-managed keys](how-to-setup-customer-managed-keys.md).
__To get the values__ for the `cmk_keyvault` (ID of the Key Vault) and the `resource_cmk_uri` (key URI) parameters needed by this template, use the following steps:
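The elided steps reduce to two CLI lookups; as a sketch (the vault and key names are placeholders, and the same commands appear in the customer-managed keys how-to):

```azurecli
# ID of the Key Vault (the cmk_keyvault parameter)
az keyvault show --name <my-key-vault> --query id

# Full URI of the key, including version (the resource_cmk_uri parameter)
az keyvault key show --vault-name <my-key-vault> --name <my-key> --query key.kid
```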
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Previously updated : 12/30/2021
Last updated : 03/08/2022
To limit the data that Microsoft collects on your workspace, select __High busin
#### Use your own key
-You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription.
+You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
Use the following steps to provide your own key:

> [!IMPORTANT]
> Before following these steps, you must first perform the following actions:
>
-> 1. Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription.
-> 1. Follow the steps in [Configure customer-managed keys](../cosmos-db/how-to-setup-cmk.md) to:
-> * Register the Azure Cosmos DB provider
-> * Create and configure an Azure Key Vault
-> * Generate a key
->
-> You do not need to manually create the Azure Cosmos DB instance, one will be created for you during workspace creation. This Azure Cosmos DB instance will be created in a separate resource group using a name based on this pattern: `<your-workspace-resource-name>_<GUID>`.
->
-> You cannot change this setting after workspace creation. If you delete the Azure Cosmos DB used by your workspace, you must also delete the workspace that is using it.
+> Follow the steps in [Configure customer-managed keys](how-to-setup-customer-managed-keys.md) to:
+> * Register the Azure Cosmos DB provider
+> * Create and configure an Azure Key Vault
+> * Generate a key
# [Python](#tab/python)
With the public preview search capability, you can search for machine learning a
Type search text into the global search bar at the top of the portal and hit enter to trigger a 'contains' search. A contains search scans across all metadata fields for the given asset and sorts results by relevance.
-You can use the asset quick links to navigate to search results for jobs, models, and components that you created.
+You can use the asset quick links to navigate to search results for jobs, models, components, environments, and datasets that you created.
Also, you can change the scope of applicable subscriptions and workspaces via the 'Change' link in the search bar drop down.
Select any number of filters to create more specific search queries. The followi
* Environment:
* Dataset:
-If an asset filter (job, model, component) is present, results are scoped to those tabs. Other filters apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters, but are scoped to the tabs chosen by asset filters, if present.
+If an asset filter (job, model, component, environment, dataset) is present, results are scoped to those tabs. Other filters apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters, but are scoped to the tabs chosen by asset filters, if present.
> [!TIP]
> * Filters search for exact matches of text. Use free text queries for a contains search.
If an asset filter (job, model, component) is present, results are scoped to tho
### View search results
-You can view your search results in the individual **Jobs**, **Models** and **Components** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view are not displayed.
+You can view your search results in the individual **Jobs**, **Models**, **Components**, **Environments**, and **Datasets** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view are not displayed.
:::image type="content" source="./media/how-to-manage-workspace/results.png" alt-text="Results displayed after search":::
machine-learning How To Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-workspace.md
Moving the workspace enables you to migrate the workspace and its contents as a
Once the validation has succeeded, move the workspace. You may also include any associated resources in the move operation by adding them to the `ids` parameter. This operation may take several minutes.

```azurecli-interactive
-az resource move --destination-group destination-rg --destination-subsctiption-id destination-sub-id --ids "/subscriptions/origin-sub-id/resourceGroups/origin-rg/providers/Microsoft.MachineLearningServices/workspaces/origin-workspace-name"
+az resource move --destination-group destination-rg --destination-subscription-id destination-sub-id --ids "/subscriptions/origin-sub-id/resourceGroups/origin-rg/providers/Microsoft.MachineLearningServices/workspaces/origin-workspace-name"
```

After the move has completed, recreate any computes and redeploy any web service endpoints at the new location.

## Next steps
-* Learn about [resource move](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+* Learn about [resource move](../azure-resource-manager/management/move-resource-group-and-subscription.md)
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
+
+ Title: Use customer-managed keys
+
+description: 'Learn how to improve data security with Azure Machine Learning by using customer-managed keys.'
+Last updated : 03/17/2022
+# Use customer-managed keys with Azure Machine Learning
+
+In the [customer-managed keys concepts article](concept-customer-managed-keys.md), you learned about the encryption capabilities that Azure Machine Learning provides. Now learn how to use customer-managed keys with Azure Machine Learning.
++
+## Prerequisites
+
+* An Azure subscription.
+
+* The following Azure resource providers must be registered:
+
+ | Resource provider | Why it's needed |
+ | -- | -- |
+ | Microsoft.MachineLearningServices | Creating the Azure Machine Learning workspace. |
+ | Microsoft.Storage | Azure Storage Account is used as the default storage for the workspace. |
+ | Microsoft.KeyVault | Azure Key Vault is used by the workspace to store secrets. |
+ | Microsoft.DocumentDB/databaseAccounts | Azure Cosmos DB instance that logs metadata for the workspace. |
+ | Microsoft.Search/searchServices | Azure Search provides indexing capabilities for the workspace. |
+
+ For information on registering resource providers, see [Resolve errors for resource provider registration](/azure/azure-resource-manager/templates/error-register-resource-provider).
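As a sketch, the registrations can also be done from the Azure CLI; the namespaces below are the parent providers of the resources listed in the table:

```azurecli
# Register each resource provider in the current subscription (idempotent).
az provider register --namespace Microsoft.MachineLearningServices
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.KeyVault
az provider register --namespace Microsoft.DocumentDB
az provider register --namespace Microsoft.Search
```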
++
+## Limitations
+
+* The customer-managed key for resources the workspace depends on can't be updated after workspace creation.
+* Resources managed by Microsoft in your subscription can't transfer ownership to you.
+* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace.
+
+> [!IMPORTANT]
+> When using a customer-managed key, the costs for your subscription will be higher because of the additional resources in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+## Create Azure Key Vault
+
+To create the key vault, see [Create a key vault](/azure/key-vault/general/quick-create-portal). When creating Azure Key Vault, you must enable __soft delete__ and __purge protection__.
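A CLI sketch of creating such a vault (the names are placeholders; soft delete is enabled by default on newly created vaults):

```azurecli
# Create a key vault with purge protection enabled (placeholder names).
az keyvault create --name <my-key-vault> \
    --resource-group <my-resource-group> \
    --location <region> \
    --enable-purge-protection true
```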
+
+### Create a key
+
+> [!TIP]
+> If you have problems creating the key, it may be caused by Azure role-based access controls that have been applied in your subscription. Make sure that the security principal (user, managed identity, service principal, etc.) you are using to create the key has been assigned the __Contributor__ role for the key vault instance. You must also configure an __Access policy__ in key vault that grants the security principal __Create__, __Get__, __Delete__, and __Purge__ authorization.
+>
+> If you plan to use a user-assigned managed identity for your workspace, the managed identity must also be assigned these roles and access policies.
+>
+> For more information, see the following articles:
+> * [Provide access to key vault keys, certificates, and secrets](/azure/key-vault/general/rbac-guide)
+> * [Assign a key vault access policy](/azure/key-vault/general/assign-access-policy)
+> * [Use managed identities with Azure Machine Learning](how-to-use-managed-identities.md)
+
+1. From the [Azure portal](https://portal.azure.com), select the key vault instance. Then select __Keys__ from the left.
+1. Select __+ Generate/import__ from the top of the page. Use the following values to create a key:
+
+ * Set __Options__ to __Generate__.
+ * Enter a __Name__ for the key. The name should be something that identifies what the planned use is. For example, `my-cosmos-key`.
+ * Set __Key type__ to __RSA__.
+ * We recommend selecting at least __3072__ for the __RSA key size__.
+ * Leave __Enabled__ set to yes.
+
+ Optionally you can set an activation date, expiration date, and tags.
+
+1. Select __Create__ to create the key.
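The same key can be created from the CLI; a sketch using the example name from step 2 (the vault name is a placeholder):

```azurecli
# Create a 3072-bit RSA key in the vault.
az keyvault key create --vault-name <my-key-vault> \
    --name my-cosmos-key \
    --kty RSA \
    --size 3072
```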
+
+### Allow Azure Cosmos DB to access the key
+
+1. To configure the key vault, select it in the [Azure portal](https://portal.azure.com) and then select __Access policies__ from the left menu.
+1. To create permissions for Azure Cosmos DB, select __+ Create__ at the top of the page. Under __Key permissions__, select __Get__, __Unwrap Key__, and __Wrap key__ permissions.
+1. Under __Principal__, search for __Azure Cosmos DB__ and then select it. The principal ID for this entry is `a232010e-820c-4083-83bb-3ace5fc29d0b` for all regions other than Azure Government. For Azure Government, the principal ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`.
+1. Select __Review + Create__, and then select __Create__.
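A CLI sketch of the same access policy; the GUID is the Azure Cosmos DB principal listed above, though depending on your tenant you may need to reference the principal by its object ID instead:

```azurecli
# Allow the Azure Cosmos DB service to use the key (non-Government regions).
az keyvault set-policy --name <my-key-vault> \
    --spn a232010e-820c-4083-83bb-3ace5fc29d0b \
    --key-permissions get unwrapKey wrapKey
```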
+
+## Create a workspace that uses a customer-managed key
+
+Create an Azure Machine Learning workspace. When creating the workspace, you must select the __Azure Key Vault__ and the __key__. Depending on how you create the workspace, you specify these resources in different ways:
+
+* __Azure portal__: Select the key vault and key from a dropdown input box when configuring the workspace.
+* __SDK, REST API, and Azure Resource Manager templates__: Provide the Azure Resource Manager ID of the key vault and the URL for the key. To get these values, use the [Azure CLI](/cli/azure/install-azure-cli) and the following commands:
+
+ ```azurecli
+ # Replace `mykv` with your key vault name.
+ # Replace `mykey` with the name of your key.
+
+ # Get the Azure Resource Manager ID of the key vault
+ az keyvault show --name mykv --query id
+ # Get the URL for the key
+ az keyvault key show --vault-name mykv -n mykey --query key.kid
+ ```
+
+ The key vault ID value will be similar to `/subscriptions/{GUID}/resourceGroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/mykv`. The URL for the key will be similar to `https://mykv.vault.azure.net/keys/mykey/{GUID}`.
+
+For examples of creating the workspace with a customer-managed key, see the following articles:
+
+| Creation method | Article |
+| -- | -- |
+| CLI | [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace) |
+| Azure portal/</br>Python SDK | [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-key) |
+| Azure Resource Manager</br>template | [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace) |
+| REST API | [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys) |
+
+Once the workspace has been created, you'll notice that a new Azure resource group is created in your subscription. This group is in addition to the resource group for your workspace. This resource group will contain the Microsoft-managed resources that your key is used with. The resource group will be named using the pattern `<Azure Machine Learning workspace resource group name><GUID>`. It will contain an Azure Cosmos DB instance, an Azure Storage Account, and an Azure Cognitive Search instance.
+
+> [!TIP]
+> * The [__Request Units__](/azure/cosmos-db/request-units) for the Azure Cosmos DB instance automatically scale as needed.
+> * If your Azure Machine Learning workspace uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the workspace. You __cannot provide your own VNet for use with the Microsoft-managed resources__. You also __cannot modify the virtual network__. For example, you cannot change the IP address range that it uses.
+
+> [!IMPORTANT]
+> If your subscription does not have enough quota for these services, a failure will occur.
+
+> [!WARNING]
+> __Don't delete the resource group__ that contains this Azure Cosmos DB instance, or any of the resources automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure Machine Learning workspace that uses it. The resource group resources are deleted when the associated workspace is deleted.
+
+For more information on customer-managed keys with Cosmos DB, see [Configure customer-managed keys for your Azure Cosmos DB account](../cosmos-db/how-to-setup-cmk.md).
+
+### Azure Container Instance
+
+When __deploying__ a trained model to an Azure Container instance (ACI), you can encrypt the deployed resource using a customer-managed key. For information on generating a key, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#generate-a-new-key).
+
+To use the key when deploying a model to Azure Container Instance, create a new deployment configuration using `AciWebservice.deploy_configuration()`. Provide the key information using the following parameters:
+
+* `cmk_vault_base_url`: The URL of the key vault that contains the key.
+* `cmk_key_name`: The name of the key.
+* `cmk_key_version`: The version of the key.
+
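A minimal sketch of such a deployment configuration, assuming placeholder vault URL, key name, and key version:

```python
from azureml.core.webservice import AciWebservice

# Deployment configuration that encrypts the ACI resource with a customer-managed key.
# The vault URL, key name, and key version are placeholders.
deployment_config = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    cmk_vault_base_url="https://<my-key-vault>.vault.azure.net/",
    cmk_key_name="<my-key-name>",
    cmk_key_version="<key-version>")
```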
+For more information on creating and using a deployment configuration, see the following articles:
+
+* [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference
+* [Where and how to deploy](how-to-deploy-and-where.md)
+* [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md)
+
+For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key).
+
+### Azure Kubernetes Service
+
+You may encrypt a deployed Azure Kubernetes Service resource using customer-managed keys at any time. For more information, see [Bring your own keys with Azure Kubernetes Service](../aks/azure-disk-customer-managed-keys.md).
+
+This process allows you to encrypt both the Data and the OS Disk of the deployed virtual machines in the Kubernetes cluster.
+
+> [!IMPORTANT]
+> This process only works with AKS clusters running Kubernetes version 1.17 or higher.
+
+## Next steps
+
+* [Customer-managed keys with Azure Machine Learning](concept-customer-managed-keys.md)
+* [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace)
+* [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-key)
+* [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace)
+* [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys)
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
In this tutorial, you learn the following tasks:
## Prerequisites

-- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you're using the free version, use a CPU cluster for training instead of GPU.
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor.
- Azure Machine Learning Studio Visual Studio Code extension. For install instructions, see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md)
- CLI (v2) (preview). For installation instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md)
media-services Media Services Specifications Live Timed Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/previous/media-services-specifications-live-timed-metadata.md
The following documents contain provisions, which, through reference in this tex
| Standard | Definition |
| -- | -- |
-| [Adobe-Primetime] | [Primetime Digital Program Insertion Signaling Specification 1.2](https://www.adobe.com/content/dam/acom/en/devnet/primetime/PrimetimeDigitalProgramInsertionSignalingSpecification.pdf) |
+| [Adobe-Primetime] | Primetime Digital Program Insertion Signaling Specification 1.2 |
| [Adobe-Flash-AS] | [FLASH ActionScript Language Reference](https://help.adobe.com/archive/en_US/as2/flashlite_2.x_3.x_aslr.pdf) |
| [AMF0] | ["Action Message Format AMF0"](https://download.macromedia.com/pub/labs/amf/amf0_spec_121207.pdf) |
| [DASH-IF-IOP] | DASH Industry Forum Interop Guidance v 4.2 [https://dashif-documents.azurewebsites.net/DASH-IF-IOP/master/DASH-IF-IOP.html](https://dashif-documents.azurewebsites.net/DASH-IF-IOP/master/DASH-IF-IOP.html) |
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
Assessments you create with Azure Migrate are a point-in-time snapshot of data.
**Assessment type** | **Details** | **Data**
--- | --- | ---
-**Performance-based** | Assessments that make recommendations based on collected performance data | The VM size recommendation is based on CPU and RAM-utilization data.<br/><br/> The disk-type recommendation is based on the input/output operations per second (IOPS) and throughput of the on-premises disks. Disk types are Azure Standard HDD, Azure Standard SSD, Azure Premium disks, and Azure Ultra disks.
-**As-is on-premises** | Assessments that don't use performance data to make recommendations | The VM size recommendation is based on the on-premises server size.<br/><br> The recommended disk type is based on the selected storage type for the assessment.
+**Performance-based** | Assessments that make recommendations based on collected performance data | The VM size recommendation is based on CPU and RAM-utilization data.<br><br> The disk-type recommendation is based on the input/output operations per second (IOPS) and throughput of the on-premises disks. Disk types are Azure Standard HDD, Azure Standard SSD, Azure Premium disks, and Azure Ultra disks.
+**As-is on-premises** | Assessments that don't use performance data to make recommendations | The VM size recommendation is based on the on-premises server size.<br><br> The recommended disk type is based on the selected storage type for the assessment.
## How do I run an assessment?
Calculations occur in these three stages:
1. **Calculate sizing recommendations**: Estimate compute, storage, and network sizing.
1. **Calculate monthly costs**: Calculate the estimated monthly compute and storage costs for running the servers in Azure after migration.
-Calculations are in the preceding order. A server server moves to a later stage only if it passes the previous one. For example, if a server fails the Azure readiness stage, it's marked as unsuitable for Azure. Sizing and cost calculations aren't done for that server.
+Calculations are in the preceding order. A server moves to a later stage only if it passes the previous one. For example, if a server fails the Azure readiness stage, it's marked as unsuitable for Azure. Sizing and cost calculations aren't done for that server.
## What's in an Azure VM assessment?
Here's what's included in an Azure VM assessment:
**Property** | **Details**
--- | ---
-**Target location** | The location to which you want to migrate. The assessment currently supports these target Azure regions:<br/><br/> Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, China East, China North, East Asia, East US, East US 2, Germany Central, Germany Northeast, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, South India, UK South, UK West, US Gov Arizona, US Gov Texas, US Gov Virginia, West Central US, West Europe, West India, West US, and West US 2.
-**Target storage disk (as-is sizing)** | The type of disk to use for storage in Azure. <br/><br/> Specify the target storage disk as Premium-managed, Standard SSD-managed, Standard HDD-managed, or Ultra disk.
-**Target storage disk (performance-based sizing)** | Specifies the type of target storage disk as automatic, Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk.<br/><br/> **Automatic**: The disk recommendation is based on the performance data of the disks, meaning the IOPS and throughput.<br/><br/>**Premium or Standard or Ultra disk**: The assessment recommends a disk SKU within the storage type selected.<br/><br/> If you want a single-instance VM service-level agreement (SLA) of 99.9%, consider using Premium-managed disks. This use ensures that all disks in the assessment are recommended as Premium-managed disks.<br/><br/> If you are looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks.<br/><br/> Azure Migrate supports only managed disks for migration assessment.
-**Azure Reserved VM Instances** | Specifies [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) so that cost estimations in the assessment take them into account.<br/><br/> When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable.<br/><br/> Azure Migrate currently supports Azure Reserved VM Instances only for pay-as-you-go offers.
-**Sizing criteria** | Used to rightsize the Azure VM.<br/><br/> Use as-is sizing or performance-based sizing.
+**Target location** | The location to which you want to migrate. The assessment currently supports these target Azure regions:<br><br> Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, China East, China East 2, China North, China North 2, East Asia, East US, East US 2, France Central, France South, Germany North, Germany West Central, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, Norway East, Norway West, South Africa North, South Africa West, South Central US, Southeast Asia, South India, Switzerland North, Switzerland West, UAE Central, UAE North, UK South, UK West, West Central US, West Europe, West India, West US, West US 2, JioIndiaCentral, JioIndiaWest, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia.
+**Target storage disk (as-is sizing)** | The type of disk to use for storage in Azure. <br><br> Specify the target storage disk as Premium-managed, Standard SSD-managed, Standard HDD-managed, or Ultra disk.
+**Target storage disk (performance-based sizing)** | Specifies the type of target storage disk as automatic, Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk.<br><br> **Automatic**: The disk recommendation is based on the performance data of the disks, meaning the IOPS and throughput.<br><br>**Premium or Standard or Ultra disk**: The assessment recommends a disk SKU within the storage type selected.<br><br> If you want a single-instance VM service-level agreement (SLA) of 99.9%, consider using Premium-managed disks. This use ensures that all disks in the assessment are recommended as Premium-managed disks.<br><br> If you are looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks.<br><br> Azure Migrate supports only managed disks for migration assessment.
+**Azure Reserved VM Instances** | Specifies [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) so that cost estimations in the assessment take them into account.<br><br> When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable.<br><br> Azure Migrate currently supports Azure Reserved VM Instances only for pay-as-you-go offers.
+**Sizing criteria** | Used to rightsize the Azure VM.<br><br> Use as-is sizing or performance-based sizing.
**Performance history** | Used with performance-based sizing. Performance history specifies the duration used when performance data is evaluated.
**Percentile utilization** | Used with performance-based sizing. Percentile utilization specifies the percentile value of the performance sample used for rightsizing.
**VM series** | The Azure VM series that you want to consider for rightsizing. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
-**Comfort factor** | The buffer used during assessment. It's applied to the CPU, RAM, disk, and network data for VMs. It accounts for issues like seasonal usage, short performance history, and likely increases in future usage.<br/><br/> For example, a 10-core VM with 20% utilization normally results in a two-core VM. With a comfort factor of 2.0, the result is a four-core VM instead.
+**Comfort factor** | The buffer used during assessment. It's applied to the CPU, RAM, disk, and network data for VMs. It accounts for issues like seasonal usage, short performance history, and likely increases in future usage.<br><br> For example, a 10-core VM with 20% utilization normally results in a two-core VM. With a comfort factor of 2.0, the result is a four-core VM instead.
**Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
**Currency** | The billing currency for your account.
**Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
-**VM uptime** | The duration in days per month and hours per day for Azure VMs that won't run continuously. Cost estimates are based on that duration.<br/><br/> The default values are 31 days per month and 24 hours per day.
+**VM uptime** | The duration in days per month and hours per day for Azure VMs that won't run continuously. Cost estimates are based on that duration.<br><br> The default values are 31 days per month and 24 hours per day.
**Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). If the setting has the default value "Yes," Azure prices for operating systems other than Windows are considered for Windows VMs.
-**EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, discount (%) and VM uptime properties with their default settings.
+**EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br><br> Leave the settings for reserved instances, discount (%) and VM uptime properties with their default settings.
[Review the best practices](best-practices-assessment.md) for creating an assessment with Azure Migrate.
For an Azure VM Assessment, the assessment reviews the following properties of a
Property | Details | Azure readiness status
--- | --- | ---
**Boot type** | Azure supports UEFI boot type for OS mentioned [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure) | Not ready if the boot type is UEFI and Operating System running on the VM is: Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2
-**Cores** | Each server must have no more than 128 cores, which is the maximum number an Azure VM supports.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores for comparison. If the assessment settings specify a comfort factor, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores to apply the comfort factor. | Ready if the number of cores is within the limit
-**RAM** | Each server must have no more than 3,892 GB of RAM, which is the maximum size an Azure M-series Standard_M128m&nbsp;<sup>2</sup> VM supports. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized RAM for comparison. If a comfort factor is specified, the utilized RAM is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated RAM is used to apply a comfort factor.<br/><br/> | Ready if the amount of RAM is within the limit
-**Storage disk** | The allocated size of a disk must be no more than 64 TB.<br/><br/> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
+**Cores** | Each server must have no more than 128 cores, which is the maximum number an Azure VM supports.<br><br> If performance history is available, Azure Migrate considers the utilized cores for comparison. If the assessment settings specify a comfort factor, the number of utilized cores is multiplied by the comfort factor.<br><br> If there's no performance history, Azure Migrate uses the allocated cores to apply the comfort factor. | Ready if the number of cores is within the limit
+**RAM** | Each server must have no more than 3,892 GB of RAM, which is the maximum size an Azure M-series Standard_M128m&nbsp;<sup>2</sup> VM supports. [Learn more](../virtual-machines/sizes.md).<br><br> If performance history is available, Azure Migrate considers the utilized RAM for comparison. If a comfort factor is specified, the utilized RAM is multiplied by the comfort factor.<br><br> If there's no history, the allocated RAM is used to apply a comfort factor.<br><br> | Ready if the amount of RAM is within the limit
+**Storage disk** | The allocated size of a disk must be no more than 64 TB.<br><br> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
**Networking** | A server must have no more than 32 network interfaces (NICs) attached to it. | Ready if the number of NICs is within the limit

### Guest operating system
Windows 2000, Windows 98, Windows 95, Windows NT, Windows 3.1, and MS-DOS | Thes
Windows 7, Windows 8, and Windows 10 | Azure provides support with a [Visual Studio subscription only.](../virtual-machines/windows/client-images.md) | Conditionally ready for Azure.
Windows 10 Pro | Azure provides support with [Multitenant Hosting Rights.](../virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md) | Conditionally ready for Azure.
Windows Vista and Windows XP Professional | These operating systems have passed their end-of-support dates. The server might start in Azure, but Azure provides no OS support. | Conditionally ready for Azure. We recommend that you upgrade the OS before migrating to Azure.
-Linux | See the [Linux operating systems](../virtual-machines/linux/endorsed-distros.md) that Azure endorses. Other Linux operating systems might start in Azure. But we recommend that you upgrade the OS to an endorsed version before you migrate to Azure. | Ready for Azure if the version is endorsed.<br/><br/>Conditionally ready if the version isn't endorsed.
+Linux | See the [Linux operating systems](../virtual-machines/linux/endorsed-distros.md) that Azure endorses. Other Linux operating systems might start in Azure. But we recommend that you upgrade the OS to an endorsed version before you migrate to Azure. | Ready for Azure if the version is endorsed.<br><br>Conditionally ready if the version isn't endorsed.
Other operating systems like Oracle Solaris, Apple macOS, and FreeBSD | Azure doesn't endorse these operating systems. The server might start in Azure, but Azure provides no OS support. | Conditionally ready for Azure. We recommend that you install a supported OS before migrating to Azure.
OS specified as **Other** in vCenter Server | Azure Migrate can't identify the OS in this case. | Unknown readiness. Ensure that Azure supports the OS running inside the VM.
32-bit operating systems | The server might start in Azure, but Azure might not provide full support. | Conditionally ready for Azure. Consider upgrading to a 64-bit OS before migrating to Azure.
Here are a few reasons why an assessment could get a low confidence rating:
- Assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
  - Servers are powered on for the duration of the assessment
  - Outbound connections on port 443 are allowed
- - For Hyper-V servers dynamic memory is enabled
+ - For Hyper-V servers, dynamic memory is enabled
Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
-Unsupported boot type | Azure does not support UEFI boot type for VMs with the Operating Systems: Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2. Check list of OS that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure)
+Unsupported boot type | Azure does not support the UEFI boot type for VMs with these operating systems: Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2. Check the list of operating systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure)
Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure.
Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. For more information, see [this website](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
Requires a Microsoft Visual Studio subscription | The server is running a Window
VM not found for the required storage performance | The storage performance (input/output operations per second [IOPS] and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
VM not found for the required network performance | The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server.
VM not found in the specified location | Use a different target location before migration.
-One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br/><br/> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br/><br/> For each disk attached to the VM, make sure that the size of the disk is <64 TB (supported by Ultra SSD disks).<br/><br/> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
+One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br><br> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br><br> For each disk attached to the VM, make sure that the size of the disk is <64 TB (supported by Ultra SSD disks).<br><br> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
One or more unsuitable network adapters | Remove unused network adapters from the server before migration.
Disk count exceeds limit | Remove unused disks from the server before migration.
Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks with up to 64-TB size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
This result is possible because not all VM sizes that support Ultra disk are pre
Your assessment was created with an offer that is no longer valid and hence, the **Edit** and **Recalculate** buttons are disabled. You can create a new assessment with any of the valid offers - *Pay as you go*, *Pay as you go Dev/Test*, and *Enterprise Agreement*. You can also use the **Discount(%)** field to specify any custom discount on top of the Azure offer. [Learn more](how-to-create-assessment.md).
+## Why is my assessment showing a warning that it was created with a target Azure location that has been deprecated?
+
+Your assessment was created with an Azure region that has been deprecated and hence the **Edit** and **Recalculate** buttons are disabled. You can [create a new assessment](how-to-create-assessment.md) with any of the valid target locations. [Learn more.](concepts-assessment-calculation.md#whats-in-an-azure-vm-assessment)
## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?

When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
Backup redundancy ensures that your database meets its availability and durabili
- **Zone-redundant backup storage** : When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the availability zone in which your server is hosted, but are also replicated to another availability zone in the same region. This option can be leveraged for scenarios that require high availability or for restricting replication of data to within a country/region to meet data residency requirements. Also, this provides at least 99.9999999999% (12 9's) durability of backup objects over a given year. One can select the Zone-Redundant High Availability option at server create time to ensure zone-redundant backup storage. High Availability for a server can be disabled post create; however, the backup storage will continue to remain zone-redundant.
-- **Geo-Redundant backup storage** : When the backups are stored in geo-redundant backup storage, multiple copies are not only stored within the region in which your server is hosted, but are also replicated to its geo-paired region. This provides better protection and ability to restore your server in a different region in the event of a disaster. Also this provides at least 99.99999999999999% (16 9's) durability of Backups objects over a given year. One can enable Geo-Redundancy option at server create time to ensure geo-redundant backup storage. Geo redundancy is supported for servers hosted in any of the [Azure paired regions](overview.md#azure-regions).
+- **Geo-Redundant backup storage** : When the backups are stored in geo-redundant backup storage, multiple copies are not only stored within the region in which your server is hosted, but are also replicated to its geo-paired region. This provides better protection and the ability to restore your server in a different region in the event of a disaster. Also, this provides at least 99.99999999999999% (16 9's) durability of backup objects over a given year. One can enable the Geo-Redundancy option at server create time to ensure geo-redundant backup storage (see the CLI sketch after this list). Additionally, you can move from locally redundant storage to geo-redundant storage post server create. Geo redundancy is supported for servers hosted in any of the [Azure paired regions](overview.md#azure-regions).
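The option maps to a single create-time flag in the Azure CLI. The following is a minimal sketch, assuming a CLI version that supports the flag; the resource group and server names are illustrative placeholders, and other required parameters (location, credentials, and so on) are omitted or prompted for by the CLI:

```azurecli
# Create a flexible server with geo-redundant backup storage enabled.
# Resource group and server names are illustrative placeholders.
az mysql flexible-server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --geo-redundant-backup Enabled
```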
> [!NOTE]
-> Geo-redundancy and zone-redundant High Availability to support zone redundancy is current surfaced as a create time operation only.
+> Zone-redundant High Availability to support zone redundancy is currently surfaced as a create-time operation only. Currently, for a zone-redundant High Availability server, geo-redundancy can only be enabled or disabled at server create time.
-## Moving from other backup storage options to geo-redundant backup storage
+## Moving from other backup storage options to geo-redundant backup storage
-Configuring geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. However you can still move your existing backups storage to geo-redundant storage using the following suggested ways:
+You can move your existing backup storage to geo-redundant storage in the following ways:
-- **Moving from locally redundant to geo-redundant backup storage** - In order to move your backup storage from locally redundant storage to geo-redundant storage, you can perform a point-in-time restore operation and change the Compute + Storage server configuration to enable Geo-redundancy for the locally redundant source server. Same Zone Redundant HA servers can also be restored as a geo-redundant server in a similar fashion as the underlying backup storage is locally redundant for the same.
+- **Moving from locally redundant to geo-redundant backup storage** - To move your backup storage from locally redundant storage to geo-redundant storage, change the Compute + Storage server configuration in the Azure portal to enable geo-redundancy for the locally redundant source server. Same-zone redundant HA servers can also be restored as geo-redundant servers in a similar fashion, because their underlying backup storage is locally redundant.
- **Moving from zone-redundant to geo-redundant backup storage** - Azure Database for MySQL does not support zone-redundant storage to geo-redundant storage conversion through Compute + Storage settings change or point-in-time restore operation. In order to move your backup storage from zone-redundant storage to geo-redundant storage, creating a new server and migrating the data using [dump and restore](../concepts-migrate-dump-restore.md) is the only supported option.
Configuring geo-redundant storage for backup is only allowed during server creat
Backups are retained based on the backup retention period setting on the server. You can select a retention period of 1 to 35 days, with a default retention period of seven days. You can set the retention period during server creation or later by updating the backup configuration using the Azure portal.
-The backup retention period governs how far back in time can a point-in-time restore operation be performed, since its based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example - if the backup retention period is set to seven days, the recovery window is considered last seven days. In this scenario, all the backups required to restore the server in last seven days are retained. With a backup retention window of seven days, database snapshots and transaction log backups are stored for the last eight days (1 day prior to the window).
+The backup retention period governs how far back in time a point-in-time restore operation can be performed, since it's based on the backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example, if the backup retention period is set to seven days, the recovery window is the last seven days. In this scenario, all the backups required to restore the server in the last seven days are retained. With a backup retention window of seven days, database snapshots and transaction log backups are stored for the last eight days (one day prior to the window).
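As a minimal sketch, the retention window can be set on an existing server with the Azure CLI; the resource group and server names are illustrative placeholders:

```azurecli
# Extend the backup retention window to 14 days on an existing server.
az mysql flexible-server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --backup-retention 14
```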
## Backup storage cost
Geo-restore is the default recovery option when your server is unavailable becau
During geo-restore, the server configurations that can be changed include only security configuration (firewall rules and virtual network settings). Changing other server configurations such as compute, storage or pricing tier (Basic, General Purpose, or Memory Optimized) during geo-restore is not supported.
+Geo-restore can also be performed on a stopped server by using the Azure CLI. Read [Restore Azure Database for MySQL - Flexible Server with Azure CLI](how-to-restore-server-cli.md) to learn more about geo-restoring a server with the Azure CLI.
+ The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time.

> [!NOTE]
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Here are some concepts to be familiar with when using virtual networks with MySQ
> [!IMPORTANT]
> Private DNS zone names must end with `mysql.database.azure.com`.
+ >If you are connecting to Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with the certificate subject name, use `<servername>.mysql.database.azure.com` in your connection string.
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUI
> [!Note]
> Confirm that the value passed to `--ssl-ca` matches the file path for the certificate you saved.
+>If you are connecting to Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with the certificate subject name, use `<servername>.mysql.database.azure.com` in your connection string.
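For example, a full-verification connection from the mysql client might look like the following sketch; the server name, user name, and CA certificate path are illustrative placeholders:

```bash
# Connect with full certificate verification (VERIFY_IDENTITY).
# Server name, user name, and CA file path are illustrative placeholders.
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p \
  --ssl-mode=VERIFY_IDENTITY \
  --ssl-ca=DigiCertGlobalRootCA.crt.pem
```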
++ If you try to connect to your server with unencrypted connections, you will see an error stating that connections using insecure transport are prohibited, similar to the one below:
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-cli.md
Last updated 04/01/2021
-# Point-in-time restore of a Azure Database for MySQL Flexible Server with Azure CLI
+# Point-in-time restore of an Azure Database for MySQL Flexible Server with Azure CLI
[!INCLUDE [applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
This article provides step-by-step procedure to perform point-in-time recoveries
## Prerequisites

-- An Azure account with an active subscription.
+- An Azure account with an active subscription.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
-- Login to Azure account using [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Log in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive
az login
```
This article provides step-by-step procedure to perform point-in-time recoveries
- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the `az account set` command.

```azurecli
az account set --subscription <subscription id>
```
This article provides step-by-step procedure to perform point-in-time recoveries
You can run the following command to restore a server to the earliest existing backup.

**Usage**

```azurecli
az mysql flexible-server restore --restore-time --source-server
```

**Example:**

```azurecli
az mysql flexible-server restore \
--restore-time "2021-03-03T13:10:00Z" \
--source-server mydemoserver
```

Time taken to restore will depend on the size of the data stored in the server.
+## Geo-Restore a server from geo-backup to a new server
+
+You can run the following command to geo-restore a server to the most recent backup available.
+
+**Usage**
+
+```azurecli
+az mysql flexible-server geo-restore --source-server
+ --location
+ [--name]
+ [--no-wait]
+ [--resource-group]
+ [--subscription]
+```
+
+**Example:**
+Geo-restore 'mydemoserver' in region East US to a new server 'mydemoserver-restored' in its geo-paired location West US with the same network settings.
+
+```azurecli
+az mysql flexible-server geo-restore \
+--name mydemoserver-restored \
+--resource-group myresourcegroup \
+--location "West US" \
+--source-server mydemoserver
+```
## Perform post-restore tasks

After the restore is completed, you should perform the following tasks to get your users and applications back up and running:

- If the new server is meant to replace the original server, redirect clients and client applications to the new server
After the restore is completed, you should perform the following tasks to get yo
- Configure alerts as appropriate for the newly restored server

## Next steps
-Learn more about [business continuity](concepts-business-continuity.md)
+Learn more about [business continuity](concepts-business-continuity.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 10/12/2021
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## March 2022
+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+
+- **Migrate from locally redundant backup storage to geo-redundant backup storage for existing flexible server**
+ Azure Database for MySQL - Flexible Server now provides the added flexibility to migrate from locally redundant backup storage to geo-redundant backup storage post server-create, to provide higher data resiliency. Enabling geo-redundancy via the server's Compute + Storage blade empowers customers to recover their existing flexible servers from a geographic disaster or regional failure when they can't access the server in the primary region. With this feature enabled for their existing servers, customers can perform geo-restore and deploy a new server to the geo-paired Azure region leveraging the original server's latest available geo-redundant backup. [Learn more](concepts-backup-restore.md)
+
+- **Simulate disaster recovery drills for your stopped servers**
+ Azure Database for MySQL - Flexible Server now provides the ability to perform geo-restore on stopped servers, helping users simulate disaster recovery drills for their workloads to estimate impact and recovery time. This will help users plan better to meet their disaster recovery and business continuity objectives by leveraging the geo-redundancy feature offered by Azure Database for MySQL - Flexible Server. [Learn more](how-to-restore-server-cli.md)
+ ## January 2022 This release of Azure Database for MySQL - Flexible Server includes the following updates.
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/migration.md
If you're using external registries such as [Azure Container Registry](../contai
### Monitoring
-Azure Red Hat OpenShift includes a pre-configured, pre-installed, and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and includes a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards. The cluster monitoring stack is only supported for monitoring Azure Red Hat OpenShift clusters. For more information, see [Cluster monitoring for Azure Red Hat OpenShift](https://docs.openshift.com/container-platform/4.6/monitoring/understanding-the-monitoring-stack.html).
+Azure Red Hat OpenShift includes a pre-configured, pre-installed, and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and includes a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards. The cluster monitoring stack is only supported for monitoring Azure Red Hat OpenShift clusters. For more information, see [Cluster monitoring for Azure Red Hat OpenShift](https://docs.openshift.com/container-platform/4.5/monitoring/cluster_monitoring/about-cluster-monitoring.html).
If you have been using [Azure Monitor for Containers for Azure Red Hat OpenShift 3.11](../azure-monitor/containers/container-insights-azure-redhat-setup.md), you can also enable Azure Monitor for Containers for [Azure Red Hat OpenShift 4 clusters](../azure-monitor/containers/container-insights-azure-redhat4-setup.md) and continue using the same Log Analytics workspace.
purview Concept Elastic Data Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-elastic-data-map.md
Previously updated : 08/18/2021 Last updated : 03/21/2022 # Elastic data map in Azure Purview
-Azure Purview Data Map provides the foundation for data discovery and data governance. It captures metadata about enterprise data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multi-cloud environments. Azure Purview Data Map automatically stays up to date with its built-in scanning and classification system. With the UI, developers can further programmatically interact with the Data Map using open-source Apache Atlas APIs.
-
-## Elastic data map
-
-All Azure Purview accounts have a Data Map that can elastically grow starting at one capacity unit. They scale up and down based on request load within the elasticity window ([check current limits](how-to-manage-quotas.md)). These limits should cover most data landscapes. However, if you need a higher capacity, [you can create a support ticket](#request-capacity).
+The Azure Purview data map provides the foundation for data discovery and data governance. It captures metadata about data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multi-cloud environments. The Azure Purview data map stays up to date with its built-in scanning and classification system.
+All Azure Purview accounts have a data map that elastically grows, starting at one capacity unit. It scales up and down based on request load and the metadata stored within the data map.
## Data map capacity unit
-Elastic Data Map comes with operation throughput and storage components that are represented as Capacity Unit (CU). All Azure Purview accounts, by default, come with one capacity unit and elastically grow based on usage. Each Data Map capacity unit includes a throughput of 25 operations/sec and 10 GB of metadata storage limit.
+The elastic data map has two components, metadata storage and operation throughput, represented as a capacity unit (CU). All Azure Purview accounts, by default, start with one capacity unit and elastically grow based on usage. Each data map capacity unit includes a throughput of 25 operations/sec and a 10-GB metadata storage limit. For example, storing 15 GB of metadata requires two capacity units, because each unit includes 10 GB of storage.
### Operations
-Operations are the throughput measure of the Azure Purview Data Map. They include the Create, Read, Write, Update, and Delete operations on metadata stored in the Data Map. Some examples of operations are listed below:
+Operations are the throughput measure of the Azure Purview Data Map. They include any Create, Read, Write, Update, and Delete operations on metadata stored in the Data Map. Some examples of operations are listed below:
- Create an asset in Data Map
- Add a relationship to an asset such as owner, steward, parent, lineage, etc.
Based on the Data Map operations/second and metadata storage consumption in this
>[!Important]
>Azure Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). To get the next level of the elasticity window, a support ticket needs to be created.
-## Request capacity
+## Increase operations throughput limit
+
+The default limit for maximum operations per second is 10 capacity units. If you're working with a very large Azure Purview environment and require higher throughput, you can request a larger elasticity window by [creating a quota request](how-to-manage-quotas.md#request-quota-increase). Select "Data map capacity unit" as the quota type and provide as much relevant information as you can about your environment and the additional capacity you would like to request.
-If you're working with very large datasets or a massive environment and need higher capacity for your elastic data map, you can request a larger capacity of elasticity window by [creating a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+> [!IMPORTANT]
+> There's no default limit for metadata storage. As you add more metadata to your data map, it will elastically increase.
-Select **Service and subscription limits (quota)** and complete the on screen instructions by choosing the Azure Purview account that you'd like to request larger capacity for.
+Increasing the operations throughput limit also increases the minimum number of capacity units. If you increase the throughput limit to 20, the minimum number of capacity units you'll be charged for is 2 CUs. The table below illustrates the possible throughput options. The number you enter in the quota request is the minimum number of capacity units on the account.
-In the description, provide as much relevant information as you can about your environment and the additional capacity you would like to request.
+| Minimum capacity units | Operations throughput limit |
+|-|--|
+| 1 |10 (Default) |
+| 2 |20 |
+| 3 |30 |
+| 4 |40 |
+| 5 |50 |
+| 6 |60 |
+| 7 |70 |
+| 8 |80 |
+| 9 |90 |
+| 10 |100 |
## Monitoring the elastic data map
purview How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-manage-quotas.md
Previously updated : 11/12/2020 Last updated : 03/21/2022 # Manage and increase quotas for resources with Azure Purview
-Azure Purview is a cloud service for use by data users. You use Azure Purview to centrally manage data governance across your data estate, spanning both cloud and on-prem environments. The service enables business analysts to search for relevant data by using meaningful business terms. To raise the limits up to the maximum for your subscription, contact support.
-
+This article highlights the limits that currently exist in the Azure Purview service. These limits are also known as quotas.

## Azure Purview limits

|**Resource**| **Default Limit** |**Maximum Limit**|
||||
|Azure Purview accounts per region, per tenant (all subscriptions combined)|3|Contact Support|
+|Data Map throughput^ <br><small>There's no default limit on the data map metadata storage</small>| 10 capacity units <br><small>250 operations per second</small> | 100 capacity units <br><small>2,500 operations per second</small> |
|vCores available for scanning, per account*|160|160|
-|Concurrent scans, per account at a given point. The limit is based on the type of data sources scanned*|5 | 10 |
+|Concurrent scans per Purview account. The limit is based on the type of data sources scanned*|5 | 10 |
|Maximum time that a scan can run for|7 days|7 days|
-|[Data Map Capacity unit (CU)](concept-elastic-data-map.md) |1 CU (25 Operations/second throughput and 10 GB metadata storage) | 100 CU (Contact Support for higher CU)|
-|Data Map Operations throughput |25 Operations/second for each Capacity Unit | 2,500 Operations/Sec for 100 CU (Contact Support for more throughput)|
-|Data Map Storage |10 GB for each Capacity Unit | 1000 GB for for 100 CU (Contact Support for more storage) |
-|Data Map elasticity window | 1 - 8 CU (Data Map can auto scale up/down based on throughput within elasticity window) | Contact support to get higher elasticity window |
|Size of assets per account|100M physical assets |Contact Support|
|Maximum size of an asset in a catalog|2 MB|2 MB|
|Maximum length of an asset name and classification name|4 KB|4 KB|
|Maximum length of asset property name and value|32 KB|32 KB|
|Maximum length of classification attribute name and value|32 KB|32 KB|
|Maximum number of glossary terms, per account|100K|100K|
+\* Self-hosted integration runtime scenarios aren't included in the limits defined in the above table.
+
+^ Increasing the data map throughput limit also increases the minimum number of capacity units with no usage. See [Data Map throughput](concept-elastic-data-map.md) for more info.
-*Self-hosted integration runtime scenarios are outside the scope for the limits defined in the above table.
-
+## Request quota increase
+
+Use the following steps to create a new support request from the Azure portal to increase a quota for Azure Purview. You can create a quota request for Azure Purview accounts in a subscription, accounts in a tenant, or the data map throughput of a specific account.
+
+1. On the [Azure portal](https://portal.azure.com) menu, select **Help + support**.
+
+ :::image type="content" source="./media/how-to-manage-quotas/help-plus-support.png" alt-text="Screenshot showing how to navigate to help and support" border="true":::
+
+1. In **Help + support**, select **New support request**.
+
+ :::image type="content" source="./media/how-to-manage-quotas/create-new-support-request.png" alt-text="Screenshot showing how to create new support request" border="true":::
+
+1. For **Issue type**, select **Service and subscription limits (quotas)**.
+
+1. For **Subscription**, select the subscription whose quota you want to increase.
+
+1. For **Quota type**, select **Azure Purview**. Then select **Next**.
+
+ :::image type="content" source="./media/how-to-manage-quotas/enter-support-details.png" alt-text="Screenshot showing how to enter support information" border="true":::
+
+1. In the **Details** window, select **Enter details** to enter additional information.
+1. Choose your **Quota type**, the scope (either location or account), and what you wish the new limit to be.
+
+ :::image type="content" source="./media/how-to-manage-quotas/enter-quota-amount.png" alt-text="Screenshot showing how to enter quota amount for Azure Purview accounts per subscription" border="true":::
+
+1. Enter the rest of the required support information. Review and create the support request.

## Next steps

> [!div class="nextstepaction"]
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
This article outlines how to register on-premises SQL server instances, and how
The supported SQL Server versions are 2005 and above. SQL Server Express LocalDB is not supported.
+When scanning on-premises SQL Server, Azure Purview supports:
+
+- Extracting technical metadata including:
+
+ - Instance
+ - Databases
+ - Schemas
+ - Tables including the columns
+ - Views including the columns
+
+When setting up a scan, you can choose to specify the database name to scan one database, and you can further scope the scan by selecting tables and views as needed. The whole SQL Server instance is scanned if a database name isn't provided.
## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Title: Configure the similarity algorithm
description: Learn how to enable BM25 on older search services, and how BM25 parameters can be modified to better accommodate the content of your indexes. --++ Last updated 03/12/2021
search Search Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-versions.md
Previously updated : 10/15/2021 Last updated : 03/22/2022

# API versions in Azure Cognitive Search

Azure Cognitive Search rolls out feature updates regularly. Sometimes, but not always, these updates require a new version of the API to preserve backward compatibility. Publishing a new version allows you to control when and how you integrate search service updates in your code.
-As a rule, the Azure Cognitive Search team publishes new versions only when necessary, since it can involve some effort to upgrade your code to use a new API version. A new version is needed only if some aspect of the API has changed in a way that breaks backward compatibility. Such changes can happen because of fixes to existing features, or because of new features that change existing API surface area.
+As a rule, the REST APIs and libraries are versioned only when necessary, since it can involve some effort to upgrade your code to use a new API version. A new version is needed only if some aspect of the API has changed in a way that breaks backward compatibility. Such changes can happen because of fixes to existing features, or because of new features that change existing API surface area.
+
+See [Azure SDK lifecycle and support policy](https://azure.github.io/azure-sdk/policies_support.html) for more information about the deprecation path.
<a name="unsupported-versions"></a>

## Unsupported versions
-Some API versions are no longer supported and will be rejected by a search service:
+Some API versions are discontinued and will be rejected by a search service:
+ **2015-02-28** + **2015-02-28-Preview** + **2014-07-31-Preview** + **2014-10-20-Preview**
-In addition, versions of the Azure Cognitive Search .NET SDK older than [**3.0.0-rc**](https://www.nuget.org/packages/Microsoft.Azure.Search/3.0.0-rc) will also be retired since they target one of these REST API versions.
+All SDKs are based on REST API versions. If a REST version is discontinued, any SDK that's based on it is also discontinued. All Azure Cognitive Search .NET SDKs older than [**3.0.0-rc**](https://www.nuget.org/packages/Microsoft.Azure.Search/3.0.0-rc) are now discontinued.
Support for the above-listed versions was discontinued on October 15, 2020. If you have code that uses a discontinued version, you can [migrate existing code](search-api-migration.md) to a newer [REST API version](/rest/api/searchservice/) or to a newer Azure SDK.
The following table provides links to more recent SDK versions.
| SDK version | Status | Description |
|-|--||
-| [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Stable | New client library from the Azure .NET SDK team, initially released July 2020. See the [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Search.Documents_11.3.0/sdk/search/Azure.Search.Documents/CHANGELOG.md) for information about minor releases. |
-| [Microsoft.Azure.Search 10](https://www.nuget.org/packages/Microsoft.Azure.Search/) | Stable | Released May 2019. This is the last version of the Microsoft.Azure.Search package. It is succeeded by Azure.Search.Documents. |
-| [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management) | Stable | Targets the Management REST api-version=2020-08-01. |
-| Microsoft.Azure.Management.Search 3.0.0 | Stable | Targets the Management REST api-version=2015-08-19. |
+| [Azure.Search.Documents 11](/dotnet/api/overview/azure/search.documents-readme) | Active | New client library from the Azure .NET SDK team, initially released July 2020. See the [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Search.Documents_11.3.0/sdk/search/Azure.Search.Documents/CHANGELOG.md) for information about minor releases. |
+| [Microsoft.Azure.Search 10](https://www.nuget.org/packages/Microsoft.Azure.Search/) | Retired | Released May 2019. This is the last version of the Microsoft.Azure.Search package and it's now deprecated. It's succeeded by Azure.Search.Documents. |
+| [Microsoft.Azure.Management.Search 4.0.0](/dotnet/api/overview/azure/search/management) | Active | Targets the Management REST api-version=2020-08-01. |
+| [Microsoft.Azure.Management.Search 3.0.0](https://www.nuget.org/packages/Microsoft.Azure.Management.Search/3.0.0) | Active | Targets the Management REST api-version=2015-08-19. |
## Azure SDK for Java

| SDK version | Status | Description |
|-|--||
-| [Java azure-search-documents 11](/java/api/overview/azure/search-documents-readme) | Stable | New client library from Azure Java SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
-| [Java Management Client 1.35.0](/java/api/overview/azure/search/management) | Stable | Targets the Management REST api-version=2015-08-19. |
+| [Java azure-search-documents 11](/java/api/overview/azure/search-documents-readme) | Active | New client library from Azure Java SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
+| [Java Management Client 1.35.0](/java/api/overview/azure/search/management) | Active | Targets the Management REST api-version=2015-08-19. |
## Azure SDK for JavaScript

| SDK version | Status | Description |
|-|--||
-| [JavaScript @azure/search-documents 11.0](/javascript/api/overview/azure/search-documents-readme) | Stable | New client library from Azure JavaScript & TypesScript SDK, released July 2020. Targets the Search REST api-version=2016-09-01. |
-| [JavaScript @azure/arm-search](https://www.npmjs.com/package/@azure/arm-search) | Stable | Targets the Management REST api-version=2015-08-19. |
+| [JavaScript @azure/search-documents 11.0](/javascript/api/overview/azure/search-documents-readme) | Active | New client library from the Azure JavaScript & TypeScript SDK, released July 2020. Targets the Search REST api-version=2016-09-01. |
+| [JavaScript @azure/arm-search](https://www.npmjs.com/package/@azure/arm-search) | Active | Targets the Management REST api-version=2015-08-19. |
## Azure SDK for Python

| SDK version | Status | Description |
|-|--||
-| [Python azure-search-documents 11.0](/python/api/azure-search-documents) | Stable | New client library from Azure Python SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
-| [Python azure-mgmt-search 8.0](https://pypi.org/project/azure-mgmt-search/) | Stable | Targets the Management REST api-version=2015-08-19. |
+| [Python azure-search-documents 11.0](/python/api/azure-search-documents) | Active | New client library from Azure Python SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
+| [Python azure-mgmt-search 8.0](https://pypi.org/project/azure-mgmt-search/) | Active | Targets the Management REST api-version=2015-08-19. |
## All Azure SDKs
search Search Dotnet Sdk Migration Version 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-1.md
ms.devlang: csharp
Last updated 09/16/2021 + # Upgrade to Azure Search .NET SDK version 1.1
search Search Dotnet Sdk Migration Version 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-10.md
ms.devlang: csharp Previously updated : 09/16/2021 Last updated : 03/21/2022 # Upgrade to Azure Cognitive Search .NET SDK version 10
+> [!IMPORTANT]
+> Version 10 is the last version of the Microsoft.Azure.Search package and it's now deprecated. It's succeeded by Azure.Search.Documents. If you're using older versions of Microsoft.Azure.Search, we recommend a sequential migration path. For example, if you're using version 8.0-preview or older, you should upgrade to version 9 first, and then to version 10, and finally to version 11.
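As a quick sketch of one step on that path using the .NET CLI (the version number shown is illustrative; pin the exact version appropriate to the step you're on):

```bash
# Move the classic package forward one major version at a time.
# The version number is illustrative.
dotnet add package Microsoft.Azure.Search --version 10.1.0
```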
+ If you're using version 9.0 or older of the [.NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 10.
-Azure Search is renamed to Azure Cognitive Search in version 10, but namespaces and package names are unchanged. Previous versions of the SDK (9.0 and earlier) continue to use the former name. For more information about using the SDK, including examples, see [How to use Azure Cognitive Search from a .NET Application](search-howto-dotnet-sdk.md).
+"Azure Search" is renamed to "Azure Cognitive Search" in version 10, but namespaces and package names are unchanged. Previous versions of the SDK (9.0 and earlier) continue to use the "Microsoft.Search" prefix. For more information about using the SDK, including examples, see [How to use Azure Cognitive Search from a .NET Application](search-howto-dotnet-sdk.md).
Version 10 adds several features and bug fixes, bringing it to the same functional level as the REST API version `2019-05-06`. In cases where a change breaks existing code, we'll walk you through the [steps required to resolve the issue](#UpgradeSteps).
-> [!NOTE]
-> If you're using version 8.0-preview or older, you should upgrade to version 9 first, and then upgrade to version 10. See [Upgrading to the Azure Search .NET SDK version 9](search-dotnet-sdk-migration-version-9.md) for instructions.
->
-> Your search service instance supports several REST API versions, including the latest one. You can continue to use a version when it is no longer the latest one, but we recommend that you migrate your code to use the newest version. When using the REST API, you must specify the API version in every request via the api-version parameter. When using the .NET SDK, the version of the SDK you're using determines the corresponding version of the REST API. If you are using an older SDK, you can continue to run that code with no changes even if the service is upgraded to support a newer API version.
-
<a name="WhatsNew"></a>

## What's new in version 10
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-11.md
ms.devlang: csharp Previously updated : 12/10/2021 Last updated : 03/21/2022
If your search solution is built on the [**Azure SDK for .NET**](/dotnet/azure/), this article will help you migrate your code from earlier versions of [**Microsoft.Azure.Search**](/dotnet/api/overview/azure/search/client10) to version 11, the new [**Azure.Search.Documents**](/dotnet/api/overview/azure/search.documents-readme) client library. Version 11 is a fully redesigned client library, released by the Azure SDK development team (previous versions were produced by the Azure Cognitive Search development team).
-With [one exception](#WhatsNew), all features from version 10 are implemented in version 11. Key differences include:
+All features from version 10 are implemented in version 11. Key differences include:
+ One package (**Azure.Search.Documents**) instead of four
+ Three clients instead of two: SearchClient, SearchIndexClient, SearchIndexerClient
+ Naming differences across a range of APIs and small structural differences that simplify some tasks
-The client library's [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) has an itemized list of updates.
+The client library's [Change Log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) has an itemized list of updates. You can review a [summarized version](#WhatsNew) in this article.
All C# code samples and snippets in the Cognitive Search product documentation have been revised to use the new **Azure.Search.Documents** client library.
All C# code samples and snippets in the Cognitive Search product documentation h
The benefits of upgrading are summarized as follows:
-+ New features will be added to **Azure.Search.Documents** only. The previous version, Microsoft.Azure.Search, is now a legacy client. Updates to legacy libraries are limited to high priority bug fixes only.
++ New features will be added to **Azure.Search.Documents** only. The previous version, Microsoft.Azure.Search, is now retired. Updates to deprecated libraries are limited to high priority bug fixes only. + Consistency with other Azure client libraries. **Azure.Search.Documents** takes a dependency on [Azure.Core](/dotnet/api/azure.core) and [System.Text.Json](/dotnet/api/system.text.json), and follows conventional approaches for common tasks such as client connections and authorization.
Version 11.3 additions ([change log](https://github.com/Azure/azure-sdk-for-net/
## Before upgrading
-+ [Quickstarts](search-get-started-dotnet.md), tutorials, and [C# samples](samples-dotnet.md) have been updated to use the Azure.Search.Documents package. We recommend reviewing existing samples and walkthroughs to learn about the new APIs before embarking on a migration exercise.
++ [Quickstarts](search-get-started-dotnet.md), tutorials, and [C# samples](samples-dotnet.md) have been updated to use the Azure.Search.Documents package. We recommend reviewing the samples and walkthroughs to learn about the new APIs before embarking on a migration exercise. + [How to use Azure.Search.Documents](search-howto-dotnet-sdk.md) introduces the most commonly used APIs. Even knowledgeable users of Cognitive Search might want to review this introduction to the new library as a precursor to migration.
search Search Dotnet Sdk Migration Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-5.md
ms.devlang: csharp Last updated 09/16/2021+ # Upgrade to Azure Search .NET SDK version 5
search Search Dotnet Sdk Migration Version 9 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-9.md
ms.devlang: csharp
Last updated 09/16/2021 + # Upgrade to Azure Search .NET SDK version 9
search Search Dotnet Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration.md
ms.devlang: csharp
Last updated 09/16/2021 + # Upgrade to Azure Search .NET SDK version 3
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md
with security best practices on Azure:
application attacks. - [Secure DevOps Kit for
- Azure](https://azsk.azurewebsites.net/https://docsupdatetracker.net/index.html) is a collection of
+ Azure](https://github.com/azsk/AzTS-docs/#readme) is a collection of
scripts, tools, extensions, and automations that caters to the comprehensive Azure subscription and resource security needs of DevOps teams that use extensive automation. The Secure DevOps Kit
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
Ensuring that your application is secure is as important as testing any other fu
### Run security verification tests
-[Secure DevOps Kit for Azure](https://azsk.azurewebsites.net/https://docsupdatetracker.net/index.html) (AzSK) contains SVTs for multiple services of the Azure platform. You run these SVTs periodically to ensure that your Azure subscription and the different resources that comprise your application are in a secure state. You can also automate these tests by using the continuous integration/continuous deployment (CI/CD) extensions feature of AzSK, which makes SVTs available as a Visual Studio extension.
+[Secure DevOps Kit for Azure](https://github.com/azsk/AzTS-docs/#readme) (AzSK) contains SVTs for multiple services of the Azure platform. You run these SVTs periodically to ensure that your Azure subscription and the different resources that comprise your application are in a secure state. You can also automate these tests by using the continuous integration/continuous deployment (CI/CD) extensions feature of AzSK, which makes SVTs available as a Visual Studio extension.
## Next steps
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information, see the Cognito Detect Syslog Guide, which can be download
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft |
+> [!NOTE]
+> This connector was designed to import only those alerts whose status is "open." Alerts that have been closed in Azure AD Identity Protection will not be imported to Microsoft Sentinel.
## Azure Activity
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
To deploy SAP solution security content, do the following:
a. Download SAP watchlists from the Microsoft Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists.

b. In the Microsoft Sentinel **Watchlists** area, add the watchlists to your Microsoft Sentinel workspace. Use the downloaded CSV files as the sources, and then customize them as needed for your environment.
- [ ![SAP-related watchlists added to Microsoft Sentinel.](media/sap/sap-watchlists.png) ](media/sap/sap-watchlists.png#lightbox)
+ [![SAP-related watchlists added to Microsoft Sentinel.](media/sap/sap-watchlists.png)](media/sap/sap-watchlists.png#lightbox)
For more information, see [Use Microsoft Sentinel watchlists](watchlists.md) and [Available SAP watchlists](sap-solution-security-content.md#available-watchlists).

1. In Microsoft Sentinel, go to the **Microsoft Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection:
- [ ![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](media/sap/sap-data-connector.png) ](media/sap/sap-data-connector.png#lightbox)
+ [![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](media/sap/sap-data-connector.png)](media/sap/sap-data-connector.png#lightbox)
SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
- [ ![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](media/sap/sap-logs-in-sentinel.png) ](media/sap/sap-logs-in-sentinel.png#lightbox)
+ [![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](media/sap/sap-logs-in-sentinel.png)](media/sap/sap-logs-in-sentinel.png#lightbox)
For more information, see [Microsoft Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
Last updated 02/22/2022
> Some logs, noted below, are not sent to Microsoft Sentinel by default, but you can manually add them as needed. For more information, see [Define the SAP logs that are sent to Microsoft Sentinel](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). >
-This article describes the SAP logs available from the Microsoft Sentinel SAP data connector, including the table names in Microsoft Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
+This article describes the functions, logs, and tables available as part of the Microsoft Sentinel SAP solution and its data connector. It is intended for advanced SAP users.
+
+## Functions available from the SAP solution
+
+This section describes the [functions](../azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions on the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
+
+Users are *strongly encouraged* to use the functions as the subjects of their analysis whenever possible, instead of the underlying logs or tables. These functions are intended to serve as the principal user interface to the data. They form the basis for all the built-in analytics rules and workbooks available to you out of the box. This allows for changes to be made to the data infrastructure beneath the functions, without breaking user-created content.
+
+- [SAPUsersAssignments](#sapusersassignments)
+- [SAPUsersGetPrivileged](#sapusersgetprivileged)
+- [SAPUsersAuthorizations](#sapusersauthorizations)
+- [SAPConnectorHealth](#sapconnectorhealth)
+- [SAPConnectorOverview](#sapconnectoroverview)
+
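These functions can also be queried like tables from outside the portal. Below is a minimal sketch, assuming the Azure CLI `log-analytics` extension is available; the workspace GUID is a placeholder, and the projected columns are chosen for illustration:

```azurecli
# Query a workspace function as if it were a table.
# The workspace GUID is a placeholder.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "SAPUsersAssignments | project User, Client, SystemID, UserType"
```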
+### SAPUsersAssignments
+
+The **SAPUsersAssignments** function gathers data from multiple SAP data sources and creates a user-centric view of the current user master data, including the roles and profiles currently assigned.
+
+This function summarizes the user assignments to roles and profiles, and returns the following data:
++
+| Field | Description | Data Source/Notes |
+| - | -- | -- |
+| User | SAP user ID | SAL only |
+| Email | SMTP address | USR21 (SMTP_ADDR) |
+| UserType | User type | USR02 (USTYP) |
+| Timezone | Time zone | USR02 (TZONE) |
+| LockedStatus | Lock status | USR02 (UFLAG) |
+| LastSeenDate | Last seen date | USR02 (TRDAT) |
+| LastSeenTime | Last seen time | USR02 (LTIME) |
+| UserGroupAuth | User group in user master maintenance | USR02 (CLASS) |
+| Profiles | Set of profiles (default maximum set size = 50) | `["Profile 1", "Profile 2",...,"Profile 50"]` |
+| DirectRoles | Set of directly assigned roles (default maximum set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| ChildRoles | Set of indirectly assigned roles (default maximum set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| Client | Client ID | |
+| SystemID | System ID | As defined in the connector |
+||||
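For example, the following KQL sketch expands each user's directly assigned roles into individual rows. It assumes only the output fields documented above; `mv-expand` works here because the role sets are returned as dynamic arrays.

```kusto
// One row per user and directly assigned role
SAPUsersAssignments
| mv-expand DirectRole = DirectRoles to typeof(string)
| project User, Email, DirectRole, Client, SystemID
```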
+
+### SAPUsersGetPrivileged
+
+The **SAPUsersGetPrivileged** function returns a list of privileged users per client and system ID.
+
+Users are considered privileged when they are listed in the *SAP - Privileged Users* watchlist, have been assigned a profile listed in the *SAP - Sensitive Profiles* watchlist, or have been added to a role listed in the *SAP - Sensitive Roles* watchlist.
+
+**Parameters:**
+- TimeAgo
+ - Optional
+ - Default value: 7 days
+ - Determines how far back the function seeks user master data: from the time defined by `TimeAgo` until `now()`.
+
+The **SAPUsersGetPrivileged** function returns the following data:
+
+| Field | Description |
+| -- | -- |
+| User | SAP user ID |
+| Client | Client ID |
+| SystemID | System ID |
+| | |
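For example, the following sketch counts distinct privileged users per system and client over the last 14 days. Passing `TimeAgo` positionally is an assumption based on the parameter list above.

```kusto
// Distinct privileged users per system and client, looking back 14 days
SAPUsersGetPrivileged(14d)
| summarize PrivilegedUsers = dcount(User) by SystemID, Client
```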
+
+### SAPUsersAuthorizations
+
+The **SAPUsersAuthorizations** function brings together data from several tables to produce a user-centric view of the current roles and authorizations assigned. Only users with active role and authorization assignments are returned.
+
+**Parameters:**
+- TimeAgo
+ - Optional
+ - Default value: 7 days
+ - Determines how far back the function seeks user master data: from the time defined by `TimeAgo` until `now()`.
+
+The **SAPUsersAuthorizations** function returns the following data:
+
+| Field | Description | Notes |
+| -- | -- | -- |
+| User | SAP user ID | |
+| Roles | Set of roles (default max set size = 50) | `["Role 1", "Role 2",...,"Role 50"]` |
+| AuthorizationsDetails | Set of authorizations (default max set size = 100) | `{{AuthorizationsDetails1}`,<br>`{AuthorizationsDetails2}`,<br>...,<br>`{AuthorizationsDetails100}}` |
+| Client | Client ID | |
+| SystemID | System ID | |
++
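As a usage sketch based only on the fields documented above, the following query expands the role sets and lists which users currently hold each role:

```kusto
// Map each role to the set of users currently holding it
SAPUsersAuthorizations(7d)
| mv-expand Role = Roles to typeof(string)
| summarize AssignedUsers = make_set(User) by Role, SystemID
```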
+### SAPConnectorHealth
+
+The **SAPConnectorHealth** function reflects the status of the agent's and the underlying SAP system's connectivity. Based on the heartbeat log *SAP_HeartBeat_CL* and other health indicators, it returns the following data:
+
+| Field | Description |
+| | -- |
+| Agent | Agent ID in agent's configuration (automatically generated) |
+| SystemID | SAP System ID |
+| Status | Overall connectivity status |
+| Details | Connectivity details |
+| ExtendedDetails | Connectivity extended details |
+| LastSeen | Timestamp of latest activity |
+| StatusCode | Code reflecting the system's status |
++
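For example, the following sketch surfaces agents that haven't reported recently. It assumes `LastSeen` is a datetime column, per the schema above; the one-hour threshold is illustrative.

```kusto
// Agents with no activity in the last hour
SAPConnectorHealth
| where LastSeen < ago(1h)
| project Agent, SystemID, Status, Details, LastSeen
```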
+### SAPConnectorOverview
+
+The **SAPConnectorOverview** function shows the row count of each SAP table per system ID. It returns a list of data records per system ID, together with the time each record was generated.
+
+**Parameters:**
+- TimeAgo
+ - Optional
+ - Default value: 7 days
+ - Determines how far back the function seeks data records: from the time defined by `TimeAgo` until `now()`.
++
+| Field | Description |
+| | -- |
+| TimeGenerated | The time at which the record was generated (datetime) |
+| SystemID_s | A string representing the SAP System ID |
+
+Use the following Kusto query to perform a daily trend analysis:
+
+```kusto
+SAPConnectorOverview(7d)
+| summarize count() by bin(TimeGenerated, 1d), SystemID_s
+```
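To visualize the same trend, append a `render` operator:

```kusto
// Daily record counts per system ID, rendered as a time chart
SAPConnectorOverview(7d)
| summarize count() by bin(TimeGenerated, 1d), SystemID_s
| render timechart
```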
-This article is intended for advanced SAP users.
## Logs produced by the data connector agent
-The following sections describe the logs that are produced by the SAP data connector agent and ingested into Microsoft Sentinel.
+This section describes the SAP logs available from the Microsoft Sentinel SAP data connector, including the table names in Microsoft Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
+
+For best results, use the Microsoft Sentinel function noted for each log below to visualize, access, and query the data.
+
+- [ABAP Application log](#abap-application-log)
+- [ABAP Change Documents log](#abap-change-documents-log)
+- [ABAP CR log](#abap-cr-log)
+- [ABAP DB table data log](#abap-db-table-data-log)
+- [ABAP Gateway log](#abap-gateway-log)
+- [ABAP ICM log](#abap-icm-log)
+- [ABAP Job log](#abap-job-log)
+- [ABAP Security Audit log](#abap-security-audit-log)
+- [ABAP Spool log](#abap-spool-log)
+- [ABAP Spool Output log](#abap-spool-output-log)
+- [ABAP Syslog](#abap-syslog)
+- [ABAP Workflow log](#abap-workflow-log)
+- [ABAP WorkProcess log](#abap-workprocess-log)
+- [HANA DB Audit Trail](#hana-db-audit-trail)
+- [JAVA files](#java-files)
+- [SAP Heartbeat Log](#sap-heartbeat-log)
### ABAP Application log -- **Name in Microsoft Sentinel**: `ABAPAppLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPAppLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcc9f36611d3a6510000e835363f.html)
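As with every log in this section, query the data through its workspace function rather than the underlying custom table. A minimal sketch:

```kusto
// Retrieve recent ABAP application log entries via the workspace function
SAPAppLog
| where TimeGenerated > ago(1d)
| take 100
```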
The following sections describe the logs that are produced by the SAP data conne
| ContextDDIC | Context DDIC structure | | ExternalID | External log ID | | Host | Host |
-| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
| InternalMessageSerial | Application log message serial | | LevelofDetail | Level of detail | | LogHandle | Application log handle |
The following sections describe the logs that are produced by the SAP data conne
### ABAP Change Documents log -- **Name in Microsoft Sentinel**: `ABAPChangeDocsLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPChangeDocsLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/6f51f5216c4b10149358d088a0b7029c/7.01.22/en-US/b8686150ed102f1ae10000000a44176f.html)
The following sections describe the logs that are produced by the SAP data conne
#### ABAPChangeDocsLog_CL log schema
-| Field | Description |
+| Field | Description |
| | - | | ActualChangeNum | Actual change number | | ChangedTableKey | Changed table key |
The following sections describe the logs that are produced by the SAP data conne
### ABAP CR log -- **Name in Microsoft Sentinel**: `ABAPCRLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPCRLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd5f36611d3a6510000e835363f.html)
The following sections describe the logs that are produced by the SAP data conne
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). -- **Name in Microsoft Sentinel**: `ABAPTableDataLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPTableDataLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcd2f36611d3a6510000e835363f.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). --- **Name in Microsoft Sentinel**: `ABAPOS_GW_CL`
+- **Microsoft Sentinel function for querying this log**: SAPOS_GW
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.5.7/en-US/48b2a710ca1c3079e10000000a42189b.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). --- **Name in Microsoft Sentinel**: `ABAPOS_ICM_CL`
+- **Microsoft Sentinel function for querying this log**: SAPOS_ICM
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/683d6a1797a34730a6e005d1e8de6f22/7.52.4/en-US/a10ec40d01e740b58d0a5231736c434e.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Job log -- **Name in Microsoft Sentinel**: `ABAPJobLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPJobLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/b07e7195f03f438b8e7ed273099d74f3/7.31.19/en-US/4b2bc0974c594ba2e10000000a42189c.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Security Audit log -- **Name in Microsoft Sentinel**: `ABAPAuditLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPAuditLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/280f016edb8049e998237fcbd80558e7/7.5.7/en-US/4d41bec4aa601c86e10000000a42189b.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
#### ABAPAuditLog_CL log schema
-| Field | Description |
-| -- | - |
-| ABAPProgramName | Program name, SAL only |
-| AlertSeverity | Alert severity |
-| AlertSeverityText | Alert severity text, SAL only |
-| AlertValue | Alert value |
-| AuditClassID | Audit class ID, SAL only |
-| ClientID | ABAP client ID (MANDT) |
-| Computer | User machine, SAL only |
-| Email | User email |
-| Host | Host |
-| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
-| MessageClass | Message class |
-| MessageContainerID | Message container ID, XAL Only |
-| MessageID | Message ID, such as `‘AU1’,’AU2’…` |
-| MessageText | Message text |
-| MonitoringObjectName | MTE Monitor object name, XAL only |
-| MonitorShortName | MTE Monitor short name, XAL only |
-| SAPProcesType | System Log: SAP process type, SAL only |
-| B* - Background Processing | |
-| D* - Dialog Processing | |
-| U* - Update Tasks | |
+| Field | Description |
+| -- | -- |
+| ABAPProgramName | Program name, SAL only |
+| AlertSeverity | Alert severity |
+| AlertSeverityText | Alert severity text, SAL only |
+| AlertValue | Alert value |
+| AuditClassID | Audit class ID, SAL only |
+| ClientID | ABAP client ID (MANDT) |
+| Computer | User machine, SAL only |
+| Email | User email |
+| Host | Host |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
+| MessageClass | Message class |
+| MessageContainerID | Message container ID, XAL Only |
+| MessageID | Message ID, such as `'AU1', 'AU2'...` |
+| MessageText | Message text |
+| MonitoringObjectName | MTE Monitor object name, XAL only |
+| MonitorShortName | MTE Monitor short name, XAL only |
+| SAPProcesType | System Log: SAP process type, SAL only. `B*`: background processing; `D*`: dialog processing; `U*`: update tasks |
| SAPWPName | System Log: Work process number, SAL only |
-| SystemID | System ID |
-| SystemNumber | System number |
-| TerminalIPv6 | User machine IP, SAL only |
-| TransactionCode | Transaction code, SAL only |
-| User | User |
-| Variable1 | Message variable 1 |
-| Variable2 | Message variable 2 |
-| Variable3 | Message variable 3 |
-| Variable4 | Message variable 4 |
+| SystemID | System ID |
+| SystemNumber | System number |
+| TerminalIPv6 | User machine IP, SAL only |
+| TransactionCode | Transaction code, SAL only |
+| User | User |
+| Variable1 | Message variable 1 |
+| Variable2 | Message variable 2 |
+| Variable3 | Message variable 3 |
+| Variable4 | Message variable 4 |
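For example, the following sketch aggregates audit events by user and message ID, using only the fields documented above; the one-day look-back window is illustrative.

```kusto
// Most frequent security audit events per user over the last day
SAPAuditLog
| where TimeGenerated > ago(1d)
| summarize Events = count() by User, MessageID, SystemID
| sort by Events desc
```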
### ABAP Spool log -- **Name in Microsoft Sentinel**: `ABAPSpoolLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPSpoolLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae791c40f72045e10000000a421937.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Spool Output log -- **Name in Microsoft Sentinel**: `ABAPSpoolOutputLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPSpoolOutputLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/290ce8983cbc4848a9d7b6f5e77491b9/7.52.1/en-US/4eae779e40f72045e10000000a421937.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
-### ABAP SysLog
+### ABAP Syslog
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). --- **Name in Microsoft Sentinel**: `ABAPOS_Syslog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPOS_Syslog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcbaf36611d3a6510000e835363f.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| - | - | | ClientID | ABAP client ID (MANDT) | | Host | Host |
-| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR> ` |
+| Instance | ABAP instance, in the following syntax: `<HOST>_<SYSID>_<SYSNR>` |
| MessageNumber | Message number | | MessageText | Message text |
-| Severity | Message severity, one of the following values: `Debug`, `Info`, `Warning`, `Error` |
+| Severity | Message severity, one of the following values: `Debug`, `Info`, `Warning`, `Error` |
| SystemID | System ID | | SystemNumber | System number | | TransacationCode | Transaction code |
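For example, the following sketch counts error-severity syslog messages per instance, using the `Severity` values listed above:

```kusto
// Error-severity syslog messages per ABAP instance
SAPOS_Syslog
| where Severity == "Error"
| summarize Errors = count() by Instance, MessageNumber
```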
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Workflow log -- **Name in Microsoft Sentinel**: `ABAPWorkflowLog_CL`
+- **Microsoft Sentinel function for querying this log**: SAPWorkflowLog
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/56bf1265a92e4b4d9a72448c579887af/7.5.7/en-US/c769bcccf36611d3a6510000e835363f.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| WorkItemID | Work item ID | ---- ### ABAP WorkProcess log To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). --- **Name in Microsoft Sentinel**: `ABAPOS_WP_CL`
+- **Microsoft Sentinel function for querying this log**: SAPOS_WP
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/d0739d980ecf42ae9f3b4c19e21a4b6e/7.3.15/en-US/46fb763b6d4c5515e10000000a1553f6.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Management Agent](connect-syslog.md) to gather Syslog data from the machine running HANA DB. -- **Name in Microsoft Sentinel**: `Syslog`
+- **Microsoft Sentinel function for querying this log**: SAPSyslog
- **Related SAP documentation**: [General](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/48fd6586304c4f859bf92d64d0cd8b08.html) | [Audit Trail](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.03/en-US/0a57444d217649bf94a19c0b68b470cc.html)
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Manage
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). -- **Name in Microsoft Sentinel**: `JavaFilesLogsCL`
+- **Microsoft Sentinel function for querying this log**: SAPJAVAFilesLogs
- **Related SAP documentation**: [General](https://help.sap.com/viewer/2f8b1599655d4544a3d9c6d1a9b6546b/7.5.9/en-US/485059dfe31672d4e10000000a42189c.html) | [Java Security Audit Log](https://help.sap.com/viewer/1531c8a1792f45ab95a4c49ba16dc50b/7.5.9/en-US/4b6013583840584ae10000000a42189c.html)
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| User | User |
-## Tables retrieved directly from SAP systems
-
-This section lists the data tables that are retrieved directly from the SAP system and ingested into Microsoft Sentinel exactly as they are.
-
-To have the data from these tables ingested into Microsoft Sentinel, configure the relevant settings in the **systemconfig.ini** file. For more information, see [Configuring User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
-
-The data retrieved from these tables provides a clear view of the authorization structure, group membership, and user profiles. It also allows you to track the process of authorization grants and revokes, and identiy and govern the risks associated with those processes.
-The tables listed below are required to enable functions that identify privileged users, map users to roles, groups, and authorizations.
-
-| Table name | Table description |
-| - | -- |
-| USR01 | User master record (runtime data) |
-| USR02 | Logon data (kernel-side use) |
-| UST04 | User masters<br>Maps users to profiles |
-| AGR_USERS | Assignment of roles to users |
-| AGR_1251 |Authorization data for the activity group |
-| USGRP_USER |Assignment of users to user groups |
-| USR21 | User name/Address key assignment |
-| ADR6 | Email addresses (business address services) |
-| USRSTAMP | Time stamp for all changes to the user |
-| ADCP | Person/Address assignment (business address services) |
-| USR05 | User master parameter ID |
-| AGR_PROF | Profile name for role |
-| AGR_FLAGS | Role attributes |
-| DEVACCESS | Table for development user |
-| AGR_DEFINE | Role definition |
-| AGR_AGRS | Roles in composite roles |
-| PAHI | History of the system, database, and SAP parameters |
+### SAP Heartbeat Log
+- **Microsoft Sentinel function for querying this log**: SAPConnectorHealth
+- **Log purpose**: Provides heartbeat and other health information on the connectivity between the agents and the different SAP systems.
-## Functions available from the SAP solution
-
-This section describes the [functions](../azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
-
-### SAPUsersAssignments
-
-The **SAPUsersAssignments** function gathers data from multiple SAP data sources and creates a user-centric view of the current user master data, roles, and profiles currently assigned.
+ Automatically created for any agents of the SAP Connector for Microsoft Sentinel.
- This function summarizes the user assignments to roles and profiles, and returns the following data:
+#### SAP_HeartBeat_CL log schema
+| Field | Description |
+| - | -- |
+| TimeGenerated | Time of log posting event |
+| agent_id_s | Agent ID in agent's configuration (automatically generated) |
+| agent_ver_s | Agent version |
+| host_s | The agent's host name |
+| system_id_s | Netweaver ABAP System ID /<br>Netweaver SAPControl Host (preview) /<br>Java SAPControl host (preview) |
+| push_timestamp_d | Timestamp of the extraction, according to the agent's time zone |
+| agent_timezone_s | Agent's time zone |
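For example, the following sketch finds agents whose latest heartbeat is older than 30 minutes; the threshold is illustrative.

```kusto
// Latest heartbeat per agent and system; flag stale agents
SAP_HeartBeat_CL
| summarize LastHeartbeat = max(TimeGenerated) by agent_id_s, system_id_s
| where LastHeartbeat < ago(30m)
```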
-| Field | Description | Data Source/Notes |
-| - | - | - |
-| User | SAP user ID| SAL only |
-| Email | SMTP address| USR21 (SMTP_ADDR) |
-| UserType | User type| USR02 (USTYP) |
-| Timezone | Time zone| USR02 (TZONE) |
-| LockedStatus | Lock status| USR02 (UFLAG) |
-| LastSeenDate | Last seen date| USR02 (TRDAT) |
-| LastSeenTime | Last seen time| USR02 (LTIME) |
-| UserGroupAuth | User group in user master maintenance| USR02 (CLASS) |
-| Profiles |Set of profiles (default maximum set size = 50)|`["Profile 1", "Profile 2",...,"profile 50"]` |
-| DirectRoles | Set of Directly assigned roles (default max set size = 50) |`["Role 1", "Role 2",...,"ΓÇ¥"Role 50"]` |
-| ChildRoles |Set of indirectly assigned roles (default max set size = 50) |`["Role 1", "Role 2",...,"ΓÇ¥"Role 50"]` |
-| Client | Client ID | |
-| SystemID | System ID | As defined in the connector |
--
-### SAPUsersGetPrivileged
-
-The **SAPUsersGetPrivileged** function returns a list of privileged users per client and system ID.
-
-Users are considered privileged when they are listed in the *SAP - Privileged Users* watchlist, have been assigned to a profile listed in *SAP - Sensitive Profiles* watchlist, or have been added to a role listed in *SAP - Sensitive Roles* watchlist.
-
-**Parameters:**
- - TimeAgo
- - optional
- - default vaule: 7 days
- - Function will only seek User master data from TimeAgo until now()
-
-The **SAPUsersGetPrivileged** Microsoft Sentinel Function returns the following data:
-
-|Field| Description|
-|-|-|
-|User|SAP user ID |
-|Client| Client ID |
-|SystemID| System ID|
-
+## Tables retrieved directly from SAP systems
-### SAPUsersAuthorizations
+This section lists the data tables that are retrieved directly from the SAP system and ingested into Microsoft Sentinel exactly as they are.
-lists user assignments to authorizations, including the following data:
-The **SAPUsersAuthorizations** Microsoft Sentinel Function brings together data from several tables to produce a user-centric view of the current roles and authorizations assigned. Only users with active role and authorization assignments are returned.
+To have the data from these tables ingested into Microsoft Sentinel, configure the relevant settings in the **systemconfig.ini** file. For more information, see [Configuring User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
-**Parameters:**
- - TimeAgo
- - Optional
- - Default value: 7 days
- - Determines that the function seeks User master data from the time defined by the `TimeAgo` value until the time defined by the `now()` value.
+The data retrieved from these tables provides a clear view of the authorization structure, group membership, and user profiles. It also allows you to track the process of authorization grants and revokes, and identify and govern the risks associated with those processes.
-The **SAPUsersAuthorizations** function returns the following data:
+The tables listed below are required to enable functions that identify privileged users, map users to roles, groups, and authorizations.
-|Field| Description |Notes|
-|-|-|-|
-|User| SAP user ID||
-|Roles| Set of roles (default max set size = 50)| `["Role 1", "Role 2",...,"Role 50"]`|
-|AuthorizationsDetails| Set of authorizations (default max set size = 100|`{ {AuthorizationsDeatils1}`,<br>`{AuthorizationsDeatils2}`, <br>...,<br>`{AuthorizationsDeatils100}}`|
-|Client| Client ID |
-|SystemID| System ID|
+For best results, refer to these tables using the name in the **Sentinel function name** column below:
+
+| Table name | Table description | Sentinel function name |
+| --| - | - |
+| USR01 | User master record (runtime data) | SAP_USR01 |
+| USR02 | Logon data (kernel-side use) | SAP_USR02 |
+| UST04 | User masters<br>Maps users to profiles | SAP_UST04 |
+| AGR_USERS | Assignment of roles to users | SAP_AGR_USERS |
+| AGR_1251 | Authorization data for the activity group | SAP_AGR_1251 |
+| USGRP_USER | Assignment of users to user groups | SAP_USGRP_USER |
+| USR21 | User name/Address key assignment | SAP_USR21 |
+| ADR6 | Email addresses (business address services) | SAP_ADR6 |
+| USRSTAMP | Time stamp for all changes to the user | SAP_USRSTAMP |
+| ADCP | Person/Address assignment (business address services) | SAP_ADCP |
+| USR05 | User master parameter ID | SAP_USR05 |
+| AGR_PROF | Profile name for role | SAP_AGR_PROF |
+| AGR_FLAGS | Role attributes | SAP_AGR_FLAGS |
+| DEVACCESS | Table for development user | SAP_DEVACCESS |
+| AGR_DEFINE | Role definition | SAP_AGR_DEFINE |
+| AGR_AGRS | Roles in composite roles | SAP_AGR_AGRS |
+| PAHI | History of the system, database, and SAP parameters | SAP_PAHI |
+||||
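For example, a minimal sketch that samples the ingested logon data through the corresponding function; the available columns depend on the fields retrieved from the SAP table.

```kusto
// Sample the ingested SAP logon data (USR02) via its function
SAP_USR02
| take 10
```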
## Next steps
For more information, see:
- [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md) - [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md) - [Microsoft Sentinel SAP solution: built-in security content](sap-solution-security-content.md)-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
This procedure is relevant only for CEF connections, and is *not* relevant for S
- You must have elevated permissions (sudo) on your log forwarder machine.
- - You must have **python 2.7** or **3** installed on your log forwarder machine. Use the `python ΓÇôversion` command to check.
+ - You must have **python 2.7** or **3** installed on your log forwarder machine. Use the `python --version` command to check.
- You may need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.
service-bus-messaging Service Bus Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions.md
Title: Azure Service Bus - messaging exceptions | Microsoft Docs description: This article provides a list of Azure Service Bus messaging exceptions and suggested actions to take when the exception occurs. Previously updated : 08/04/2021 Last updated : 03/21/2022 # Service Bus messaging exceptions
The messaging APIs generate exceptions that can fall into the following categori
3. Transient exceptions ([Microsoft.ServiceBus.Messaging.MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception), [Microsoft.ServiceBus.Messaging.ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception), [Microsoft.ServiceBus.Messaging.MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception)). General action: retry the operation or notify users. The `RetryPolicy` class in the client SDK can be configured to handle retries automatically. For more information, see [Retry guidance](/azure/architecture/best-practices/retry-service-specific#service-bus). 4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception), [System.TimeoutException](/dotnet/api/system.timeoutexception), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section:
+> [!IMPORTANT]
+> Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope.
++ ## Exception types The following table lists messaging exception types and their causes, and notes the suggested actions you can take.
service-bus-messaging Service Bus Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-transactions.md
Title: Overview of transaction processing in Azure Service Bus description: This article gives you an overview of transaction processing and the send via feature in Azure Service Bus. Previously updated : 09/21/2021 Last updated : 03/21/2022 ms.devlang: csharp
Receive operations aren't included, because it's assumed that the application ac
The disposition of the message (complete, abandon, dead-letter, defer) then occurs within the scope of, and dependent on, the overall outcome of the transaction.
+> [!IMPORTANT]
+> Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope.
+ ## Transfers and "send via" To enable transactional handover of data from a queue or topic to a processor, and then to another queue or topic, Service Bus supports *transfers*. In a transfer operation, a sender first sends a message to a *transfer queue or topic*, and the transfer queue or topic immediately moves the message to the intended destination queue or topic using the same robust transfer implementation that the autoforward capability relies on. The message is never committed to the transfer queue or topic's log in a way that it becomes visible for the transfer queue or topic's consumers.
service-fabric Concepts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/concepts-managed-identity.md
The following scenarios are not supported or not recommended; note these actions
- Remove or change the identities assigned to an application; if you must make changes, submit separate deployments to first add a new identity assignment, and then to remove a previously assigned one. Removal of an identity from an existing application can have undesirable effects, including leaving your application in a state that is not upgradeable. It is safe to delete the application altogether if the removal of an identity is necessary; note this will delete the system-assigned identity (if so defined) associated with the application, and will remove any associations with the user-assigned identities assigned to the application. -- Service Fabric support for managed identities is not integrated at this time into the [AzureServiceTokenProvider](/dotnet/api/overview/azure/service-to-service-authentication).
+- Service Fabric support for managed identities is not integrated at this time into the deprecated [AzureServiceTokenProvider](/dotnet/api/overview/azure/service-to-service-authentication). However, Service Fabric does support leveraging managed identities instead through the [Azure Identity SDK](./how-to-managed-identity-service-fabric-app-code.md).
## Next steps
The following scenarios are not supported or not recommended; note these actions
- [Deploy an Azure Service Fabric application with a user-assigned managed identity](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md) - [Leverage the managed identity of a Service Fabric application from service code](./how-to-managed-identity-service-fabric-app-code.md) - [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)-- [Declaring and using application secrets as KeyVaultReferences](./service-fabric-keyvault-references.md)
+- [Declaring and using application secrets as KeyVaultReferences](./service-fabric-keyvault-references.md)
service-fabric Service Fabric Cluster Resource Manager Movement Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-movement-cost.md
await fabricClient.ServiceManager.UpdateServiceAsync(new Uri("fabric:/AppName/Se
## Dynamically specifying move cost on a per-replica basis
-The preceding snippets are all for specifying MoveCost for a whole service at once from outside the service itself. However, move cost is most useful is when the move cost of a specific service object changes over its lifespan. Since the services themselves probably have the best idea of how costly they are to move a given time, there's an API for services to report their own individual move cost during runtime.
+The preceding snippets are all for specifying MoveCost for a whole service at once from outside the service itself. However, move cost is most useful when the move cost of a specific service object changes over its lifespan. Since the services themselves probably have the best idea of how costly they are to move at a given time, there's an API for services to report their own individual move cost during runtime.
C#:
C#:
this.Partition.ReportMoveCost(MoveCost.Medium); ```
+## Reporting move cost for a partition
+
+The previous section describes how service replicas or instances report MoveCost for themselves. Service Fabric also provides an API for reporting MoveCost values on behalf of other partitions. Sometimes a service replica or instance can't determine the best MoveCost value by itself and must rely on the logic of other services. Reporting MoveCost on behalf of other partitions, alongside [reporting load on behalf of other partitions](service-fabric-cluster-resource-manager-metrics.md#reporting-load-for-a-partition), allows you to manage partitions completely from the outside. These APIs eliminate the need for [the Sidecar pattern](https://docs.microsoft.com/azure/architecture/patterns/sidecar), from the perspective of the Cluster Resource Manager.
+
+You can report MoveCost updates for multiple partitions in a single API call. Specify a `PartitionMoveCostDescription` object for each partition that you want to update with new MoveCost values. The API allows multiple ways to update MoveCost:
+
+ - A stateful service partition can update its primary replica's MoveCost.
+ - Both stateless and stateful services can update the MoveCost of all their secondary replicas or instances.
+ - Both stateless and stateful services can update the MoveCost of a specific replica or instance on a node.
+
+Each MoveCost update for a partition should contain at least one valid value to change. For example, you can skip the primary replica update by assigning _null_ to the primary replica entry; the other entries are still applied, and the MoveCost update for the primary replica is skipped. Because a single API call can update MoveCost for multiple partitions, the API returns a list of return codes for the corresponding partitions. If a MoveCost update request is successfully accepted and processed, its return code is Success. Otherwise, the API provides one of the following error codes:
+
+ - PartitionNotFound - Specified partition ID doesn't exist.
+ - ReconfigurationPending - Partition is currently reconfiguring.
+ - InvalidForStatelessServices - An attempt was made to change the MoveCost of a primary replica for a partition belonging to a stateless service.
+ - ReplicaDoesNotExist - Secondary replica or instance does not exist on a specified node.
+ - InvalidOperation - An attempt was made to update the MoveCost of a partition that belongs to the System application.
+
+C#:
+
+```csharp
+Guid partitionId = Guid.Parse("53df3d7f-5471-403b-b736-bde6ad584f42");
+string nodeName0 = "NodeName0";
+
+OperationResult<UpdatePartitionMoveCostResultList> updatePartitionMoveCostResults =
+    await this.FabricClient.UpdatePartitionMoveCostAsync(
+        new UpdatePartitionMoveCostQueryDescription
+        {
+            new List<PartitionMoveCostDescription>()
+            {
+                new PartitionMoveCostDescription(
+                    partitionId,
+                    MoveCost.VeryHigh, // move cost of the primary replica
+                    MoveCost.Zero,     // move cost of all secondary replicas
+                    new List<ReplicaMoveCostDescription>()
+                    {
+                        // override for the specific secondary replica on NodeName0
+                        new ReplicaMoveCostDescription(nodeName0, MoveCost.Medium)
+                    })
+            }
+        },
+        this.Timeout,
+        cancellationToken);
+```
+
+This example updates the last reported move cost for partition _53df3d7f-5471-403b-b736-bde6ad584f42_. The primary replica's move cost becomes _VeryHigh_, and the move cost of all secondary replicas becomes _Zero_, except for the specific secondary replica located on node _NodeName0_, whose move cost becomes _Medium_. If you want to skip updating the move cost for the primary replica or for all secondary replicas, leave the corresponding entry as _null_.
+ ## Impact of move cost MoveCost has five levels: Zero, Low, Medium, High and VeryHigh. The following rules apply:
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
-16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
-16.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
-16.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
+16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic <br/>4.8.0-34-generic to 4.8.0-58-generic <br/>4.10.0-14-generic to 4.10.0-42-generic <br/>4.11.0-13-generic to 4.11.0-14-generic <br/>4.13.0-16-generic to 4.13.0-45-generic <br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
+16.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
|||
+18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 18.04 LTS kernels supported in this release. |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic |
-18.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic |
-18.04 LTS |[9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic |
|||
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
-20.04 LTS |[9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic |
**Note: To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hot fix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hot fix patch), follow the steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.40](https://support.microsoft.com/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) , [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 | [9.40](https://support.microsoft.com/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533), [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
+Debian 8 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
|||
+Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 9.1 kernels supported in this release.
Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64
+Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 9.1 kernels supported in this release.
Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64
-Debian 9.1 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64
|||
+Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.10.0-0.bpo.11-amd64 </br> 5.10.0-0.bpo.11-cloud-amd64 </br> 5.10.0-0.bpo.7-amd64 </br> 5.10.0-0.bpo.7-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64 </br> 5.10.0-0.bpo.9-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.2-amd64 </br> 5.9.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.5-amd64 </br> 5.9.0-0.bpo.5-cloud-amd64
+Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 10 kernels supported in this release.
+Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 10 kernels supported in this release.
Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64 Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
-Debian 10 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
-Debian 10 | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
-Debian 10 | [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.19.0-6-amd64 to 4.19.0-14-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-14-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 4.19.0-10-cloud-amd64, 4.19.0-16-amd64, 4.19.0-16-cloud-amd64 through 9.41 hot fix patch**
**Note**: To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
Debian 10 | [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azur
**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new SLES 12 kernels supported in this release. |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.110-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new SLES 12 kernels supported in this release. |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure </br> 4.12.14-16.68-azure |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure |
#### Supported SUSE Linux Enterprise Server 15 kernel versions for Azure virtual machines

**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new SLES 15 kernels supported in this release.
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.59.49-default:3
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
**Note**: To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure-to-Azure DR scenario.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
| | |
14.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
-16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
-16.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
-16.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
+16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure |
|||
+18.04 LTS |[9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1130-azure </br> 4.15.0-1131-azure </br> 4.15.0-167-generic </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.15.0-1126-azure </br> 4.15.0-1127-azure </br> 4.15.0-1129-azure </br> 4.15.0-162-generic </br> 4.15.0-163-generic </br> 4.15.0-166-generic </br> 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic |
18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1123-azure </br> 4.15.0-1124-azure </br> 4.15.0-1125-azure</br> 4.15.0-156-generic </br> 4.15.0-158-generic </br> 4.15.0-159-generic </br> 4.15.0-161-generic </br> 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-87-generic </br> 5.4.0-89-generic |
18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic |
-18.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic |
|||
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic |
20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-88-generic </br> 5.4.0-89-generic |
20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure |
20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
-20.04 LTS |[9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure |
**Note**: For Ubuntu 20.04, we initially rolled out support for kernels 5.8.*, but we have since found issues with these kernels and have removed them from our support statement for the time being.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
-Debian 7 | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8),[9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 |[9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
+Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
|||
Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64 </br>
Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64
Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 </br>
Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br>
-Debian 9.1 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br>
|||
+Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.10.0-0.bpo.11-amd64 </br> 5.10.0-0.bpo.11-cloud-amd64 </br> 5.10.0-0.bpo.7-amd64 </br> 5.10.0-0.bpo.7-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64 </br> 5.10.0-0.bpo.9-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.2-amd64 </br> 5.9.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.5-amd64 </br> 5.9.0-0.bpo.5-cloud-amd64
Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.10.0-0.bpo.9-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
-Debian 10 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
+
### SUSE Linux Enterprise Server 12 supported kernel versions

**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-122.110-default:5 |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure </br> 4.12.14-122.103-default </br> 4.12.14-122.98-default5 |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.73-azure </br> 4.12.14-16.76-azure </br> 4.12.14-122.88-default </br> 4.12.14-122.91-default |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.76-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure |
### SUSE Linux Enterprise Server 15 supported kernel versions

**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-150300.59.49-default:3 </br> 5.3.18-59.40-default:3 |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-18.72-azure: </br> 5.3.18-18.75-azure: </br> 5.3.18-24.93-default </br> 5.3.18-24.96-default </br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.66-azure </br> 5.3.18-18.69-azure </br> 5.3.18-24.83-default </br> 5.3.18-24.86-default |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
## Linux file systems/guest storage
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
A rehydration operation with [Set Blob Tier](/rest/api/storageservices/set-blob-
Copying an archived blob to an online tier with [Copy Blob](/rest/api/storageservices/copy-blob) is billed for data read transactions and data retrieval size. Creating the destination blob in an online tier is billed for data write transactions. Early deletion fees don't apply when you copy to an online blob because the source blob remains unmodified in the Archive tier. High-priority retrieval charges do apply if selected.
-Blobs in the Archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
+Blobs in the Archive tier should be stored for a minimum of 180 days. Deleting or changing the tier of an archived blob before the 180-day period elapses incurs an early deletion fee. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier. For more information, see [Archive access tier](access-tiers-overview.md#archive-access-tier).
For more information about pricing for block blobs and data rehydration, see [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). For more information on outbound data transfer charges, see [Data Transfers Pricing Details](https://azure.microsoft.com/pricing/details/data-transfers/).
For more information about pricing for block blobs and data rehydration, see [Az
- [Archive a blob](archive-blob.md)
- [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)
- [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md)
-- [Reacting to Blob storage events](storage-blob-event-overview.md)
+- [Reacting to Blob storage events](storage-blob-event-overview.md)
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
+
+ Title: Tutorial - Create an NFS Azure file share and mount it on a Linux VM using the Azure Portal
+description: This tutorial covers how to use the Azure portal to deploy a Linux virtual machine, create an Azure file share using the NFS protocol, and mount the file share so that it's ready to store files.
+++ Last updated : 03/21/2022++
+#Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
++
+# Tutorial: Create an NFS Azure file share and mount it on a Linux VM using the Azure Portal
+
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). Both NFS and SMB protocols are supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file share using the NFS protocol and connect it to a Linux VM.
+
+In this tutorial, you will:
+
+> [!div class="checklist"]
+> * Create a storage account
+> * Deploy a Linux VM
+> * Create an NFS file share
+> * Connect to your VM
+> * Mount the file share to your VM
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+
+## Getting started
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+### Create a storage account
+
+Before you can work with an NFS Azure file share, you have to create an Azure storage account with the Premium performance tier. Premium is the only tier that supports NFS Azure file shares.
+
+1. On the Azure portal menu, select **All services**. In the list of resources, type **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
+1. On the **Storage Accounts** window that appears, choose **+ Create**.
+1. On the **Basics** tab, select the subscription in which to create the storage account.
+1. Under the **Resource group** field, select **Create new** to create a new resource group to use for this tutorial.
+1. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
+1. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the same regions that support premium file storage.
+1. Select the *Premium* performance tier to store your data on solid-state drives (SSD). Under **Premium account type**, select *File shares*.
+1. Leave replication set to its default value of *Locally-redundant storage (LRS)*.
+1. Select **Review + Create** to review your storage account settings and create the account.
+1. When you see the **Validation passed** notification appear, select **Create**. You should see a notification that deployment is in progress.
+
+The following image shows the settings on the **Basics** tab for a new storage account:
++
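+If you'd rather script this step, the following is a minimal Azure CLI sketch (the resource group, account name, and region here are placeholders, not values from this tutorial):
+
+```console
+# Premium_LRS + FileStorage is the combination that NFS file shares require
+az storage account create \
+    --resource-group myResourceGroup \
+    --name mystorageacct \
+    --location eastus \
+    --sku Premium_LRS \
+    --kind FileStorage
+```
+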
+## Deploy an Azure VM running Linux
+
+Next, create an Azure VM running Linux to represent the on-premises server. When you create the VM, a virtual network will be created for you. The NFS protocol can only be used from a machine inside a virtual network.
+
+1. Select **Home**, and then select **Virtual machines** under **Azure services**.
+
+1. Select **+ Create** and then **+ Virtual machine**.
+
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose the default Ubuntu Server version for your **Image**. Leave the other defaults. The default size and pricing are shown only as an example; size availability and pricing depend on your region and subscription.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" alt-text="Screenshot showing how to enter the project and instance details to create a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" border="true":::
+
+1. Under **Administrator account**, select **SSH public key**. Leave the rest of the defaults.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" alt-text="Screenshot showing how to configure the administrator account and create an SSH key pair for a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" border="true":::
+
+1. Under **Inbound port rules > Public inbound ports**, choose **Allow selected ports** and then select **SSH (22) and HTTP (80)** from the drop-down.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" alt-text="Screenshot showing how to configure the inbound port rules for a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" border="true":::
+
+ > [!IMPORTANT]
+ > Opening SSH ports to the internet is recommended only for testing. If you want to change this setting later, go back to the **Basics** tab.
+
+1. Select the **Review + create** button at the bottom of the page.
+
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. Note the name of the virtual network. When you are ready, select **Create**.
+
+1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myVM_key.pem**. Make sure you know where the .pem file was downloaded, because you'll need the path to it to connect to your VM.
+
+You'll see a message that deployment is in progress. Wait a few minutes for deployment to complete.
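+
+If you'd rather script the deployment, a rough Azure CLI equivalent is the following (placeholder resource group name; the image alias and generated SSH keys are assumptions, not the exact portal defaults):
+
+```console
+# Create an Ubuntu VM and generate an SSH key pair; a virtual network is created automatically
+az vm create \
+    --resource-group myResourceGroup \
+    --name myVM \
+    --image UbuntuLTS \
+    --admin-username azureuser \
+    --generate-ssh-keys
+```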
+
+## Create an NFS Azure file share
+
+Now you're ready to create an NFS file share and provide network-level security for your NFS traffic.
+
+### Add a file share to your storage account
+
+1. Select **Home** and then **Storage accounts**.
+
+1. Select the storage account you created.
+
+1. Select **Data storage > File shares** from the storage account pane.
+
+1. Select **+ File Share**.
+
+1. Name the new file share *qsfileshare* and enter "100" for the minimum **Provisioned capacity**, or provision more capacity (up to 102,400 GiB) for more performance. Select the **NFS** protocol, leave **No Root Squash** selected, and select **Create**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-nfs-share.png" alt-text="Screenshot showing how to name the file share and provision capacity to create a new NFS file share." lightbox="media/storage-files-quick-create-use-linux/create-nfs-share.png" border="true":::
+
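+If you prefer the CLI, a minimal sketch of the same share follows (placeholder resource group and account names):
+
+```console
+# Create a 100 GiB NFS share with root squash disabled, matching the portal settings above
+az storage share-rm create \
+    --resource-group myResourceGroup \
+    --storage-account mystorageacct \
+    --name qsfileshare \
+    --quota 100 \
+    --enabled-protocols NFS \
+    --root-squash NoRootSquash
+```
+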
+### Set up a private endpoint
+
+Next, you'll need to set up a private endpoint for your storage account. This gives your storage account a private IP address from within the address space of your virtual network.
+
+1. Select the file share *qsfileshare*. You should see a dialog that says *Connect to this NFS share from Linux*. Under **Network configuration**, select **Review options**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/connect-from-linux.png" alt-text="Screenshot showing how to configure network and secure transfer settings to connect the NFS share from Linux." lightbox="media/storage-files-quick-create-use-linux/connect-from-linux.png" border="true":::
+
+1. Next, select **Setup a private endpoint**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/configure-network-security.png" alt-text="Screenshot showing network-level security configurations." lightbox="media/storage-files-quick-create-use-linux/configure-network-security.png" border="true":::
+
+1. Select **+ Private endpoint**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-private-endpoint.png" alt-text="Screenshot showing how to select + private endpoint to create a new private endpoint.":::
+
+1. Leave **Subscription** and **Resource group** the same. Under **Instance**, provide a name and select a region for the new private endpoint. Your private endpoint must be in the same region as your virtual network, so use the same region as you specified when creating the VM. When all the fields are complete, select **Next: Resource**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" alt-text="Screenshot showing how to provide the project and instance details for a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" border="true":::
+
+1. Confirm that the **Subscription**, **Resource type** and **Resource** are correct, and select **File** from the **Target sub-resource** drop-down. Then select **Next: Virtual Network**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-resource.png" alt-text="Screenshot showing how to select the resources that a new private endpoint should connect to." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-resource.png" border="true":::
+
+1. Under **Networking**, select the virtual network associated with your VM and leave the default subnet. Select **Yes** for **Integrate with private DNS zone**. Select the correct subscription and resource group, and then select **Next: Tags**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" alt-text="Screenshot showing how to add virtual networking and DNS integration to a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" border="true":::
+
+1. You can optionally apply tags to categorize your resources, such as applying the name **Environment** and the value **Test** to all testing resources. Enter name/value pairs if desired, and then select **Next: Review + create**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-tags.png" alt-text="Screenshot showing how to add tags to resources in order to categorize them." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-tags.png" border="true":::
+
+1. Azure will attempt to validate the private endpoint. When validation is complete, select **Create**. You'll see a notification that deployment is in progress. After a few minutes, you should see a notification that deployment is complete.
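+
+For reference, a comparable private endpoint can also be created from the CLI. This is only a sketch with placeholder names (`myResourceGroup`, `mystorageacct`, `myVnet`, `default`); note that the portal flow above also configures the private DNS zone for you:
+
+```console
+# Create a private endpoint targeting the storage account's "file" sub-resource
+# (private DNS zone integration must be configured separately when scripting this)
+az network private-endpoint create \
+    --resource-group myResourceGroup \
+    --name myPrivateEndpoint \
+    --vnet-name myVnet \
+    --subnet default \
+    --private-connection-resource-id $(az storage account show --resource-group myResourceGroup --name mystorageacct --query id --output tsv) \
+    --group-id file \
+    --connection-name myConnection
+```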
+
+### Disable secure transfer
+
+Because the NFS protocol doesn't support encryption and relies instead on network-level security, you'll need to disable secure transfer.
+
+1. Select **Home** and then **Storage accounts**.
+
+1. Select the storage account you created.
+
+1. Select **File shares** from the storage account pane.
+
+1. Select the NFS file share that you created. Under **Secure transfer setting**, select **Change setting**.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/secure-transfer-setting.png" alt-text="Screenshot showing how to change the secure transfer setting." lightbox="media/storage-files-quick-create-use-linux/secure-transfer-setting.png" border="true":::
+
+1. Change the **Secure transfer required** setting to **Disabled**, and select **Save**. The setting change may take up to 30 seconds to take effect.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/disable-secure-transfer.png" alt-text="Screenshot showing how to disable the secure transfer setting." lightbox="media/storage-files-quick-create-use-linux/disable-secure-transfer.png" border="true":::
+
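+The same change can be scripted; here is a short Azure CLI sketch with placeholder names:
+
+```console
+# NFS traffic isn't encrypted in transit, so HTTPS-only enforcement must be turned off
+az storage account update \
+    --resource-group myResourceGroup \
+    --name mystorageacct \
+    --https-only false
+```
+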
+## Connect to your VM
+
+Create an SSH connection with the VM.
+
+1. Select **Home** and then **Virtual machines**.
+
+1. Select the Linux VM you created for this tutorial and ensure that its status is **Running**. Take note of the VM's public IP address and copy it to your clipboard.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/connect-to-vm.png" alt-text="Screenshot showing how to confirm that the VM is running and find its public IP address." lightbox="media/storage-files-quick-create-use-linux/connect-to-vm.png" border="true":::
+
+1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a PowerShell prompt.
+
+1. At your prompt, open an SSH connection to your VM. Replace the IP address with the one from your VM, and replace the path to the `.pem` with the path to where the key file was downloaded.
+
+```console
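+# Replace the key path and IP address with your own values; the path shown assumes a Windows Downloads folder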
+ssh -i .\Downloads\myVM_key.pem azureuser@20.25.14.85
+```
+
+If you encounter a warning that the authenticity of the host can't be established, type **yes** to continue connecting to the VM. Leave the ssh connection open for the next step.
+
+> [!TIP]
+> The SSH key you created can be used the next time you create a VM in Azure. Just select **Use a key stored in Azure** as the **SSH public key source** the next time you create a VM. You already have the private key on your computer, so you won't need to download anything.
+
+## Mount the NFS share
+
+To use the NFS share you created, you must mount it on your Linux client.
+
+1. Select **Home** and then **Storage accounts**.
+
+1. Select the storage account you created.
+
+1. Select **File shares** from the storage account pane and select the NFS file share you created.
+
+1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script.
+
+ :::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an NFS file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
+
+1. Select your Linux distribution (Ubuntu).
+
+1. Using the ssh connection you created to your VM, enter the sample commands to use NFS and mount the file share.
+
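+As a rough sketch, the generated script for Ubuntu typically resembles the following (placeholder account and share names; use the exact script shown in the portal):
+
+```console
+# Install the NFS client and create a local mount point
+sudo apt-get -y update
+sudo apt-get install -y nfs-common
+sudo mkdir -p /mount/mystorageacct/qsfileshare
+# Mount the share over NFS 4.1; the storage endpoint resolves to the private endpoint's IP
+sudo mount -t nfs mystorageacct.file.core.windows.net:/mystorageacct/qsfileshare /mount/mystorageacct/qsfileshare -o vers=4,minorversion=1,sec=sys
+```
+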
+You have now mounted your NFS share, and it's ready to store files.
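+
+To confirm the share is mounted, you can check the mount point (path taken from the sketch above):
+
+```console
+# Show the mounted share, its size, and available space
+df -h /mount/mystorageacct/qsfileshare
+```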
+
+## Clean up resources
+
+When you're done, delete the resource group. Deleting the resource group deletes the storage account, the Azure file share, and any other resources that you deployed inside the resource group.
+
+1. Select **Home** and then **Resource groups**.
+1. Select the resource group you created for this tutorial.
+1. Select **Delete resource group**. A window opens and displays a warning about the resources that will be deleted with the resource group.
+1. Enter the name of the resource group, and then select **Delete**.
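+
+If you scripted the resources with the CLI sketches above, the equivalent cleanup is a single command (placeholder resource group name):
+
+```console
+# Delete the resource group and everything in it without prompting
+az group delete --name myResourceGroup --yes
+```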
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about using NFS Azure file shares](files-nfs-protocol.md)
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
description: Learn how to interpret the provisioned and pay-as-you-go billing mo
Previously updated : 12/08/2021 Last updated : 3/21/2022
For each server that you have connected to a sync group, there is an additional
The cost of data at rest depends on the billing tier you choose. This is the cost of storing data in the Azure file share in the cloud, including snapshot storage.

#### Cloud enumeration scans cost
-Azure File Sync enumerates the Azure File Share in the cloud once per day to discover changes that were made directly to the share so that they can sync down to the server endpoints. This scan generates transactions which are billed to the storage account at a rate of two LIST transactions per directory per day. You can put this number into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the scan cost.
+Azure File Sync enumerates the Azure File Share in the cloud once per day to discover changes that were made directly to the share so that they can sync down to the server endpoints. This scan generates transactions which are billed to the storage account at a rate of one LIST transaction per directory per day. You can put this number into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the scan cost.
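+For example, a namespace with 10,000 directories generates about 10,000 LIST transactions per day, or roughly 300,000 per month, which you can enter into the calculator.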
> [!Tip]
> If you don't know how many folders you have, check out the TreeSize tool from JAM Software GmbH.
time-series-insights How To Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-private-links.md
Here are the steps that are covered in this article:
1. Turn on Private Link and configure a private endpoint for a Time Series Insights Gen2 environment.
1. Disable or enable public network access flags, to restrict access to Private Link connections only.
+> [!NOTE]
+> Private Link for an event source isn't supported. Don't restrict public internet access to a hub or event source used by Time Series Insights.
+
## Prerequisites

Before you can set up a private endpoint, you'll need an [**Azure Virtual Network (VNet)**](../virtual-network/virtual-networks-overview.md) where the endpoint can be deployed. If you don't have a VNet already, you can follow one of the Azure Virtual Network [quickstarts](../virtual-network/quick-create-portal.md) to set this up.
To disable or enable public network access in the [Azure portal](https://portal.
## Next steps

Learn more about Private Link for Azure:
-* [*What is Azure Private Link service?*](../private-link/private-link-service-overview.md)
+* [*What is Azure Private Link service?*](../private-link/private-link-service-overview.md)
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-web.md
Title: Connect to Azure Virtual Desktop with the web client - Azure
description: How to connect to Azure Virtual Desktop using the web client. Previously updated : 09/30/2021 Last updated : 03/21/2022
While any HTML5-capable browser should work, we officially support the following
## Access remote resources feed
-In a browser, navigate to the Azure Resource Manager-integrated version of the Azure Virtual Desktop web client at <https://rdweb.wvd.microsoft.com/arm/webclient> and sign in with your user account.
+In a browser, navigate to the Azure Resource Manager-integrated version of the Azure Virtual Desktop web client at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> and sign in with your user account.
+
+>[!IMPORTANT]
+>We plan to start automatically redirecting to a new web client URL at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> as of April 18th, 2022. The current URLs at <https://rdweb.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> and <https://www.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> will still be available, but we recommend you update your bookmarks to the new URL at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> as soon as possible.
>[!NOTE]
->If you're using Azure Virtual Desktop (classic) without Azure Resource Manager integration, connect to your resources at <https://rdweb.wvd.microsoft.com/webclient> instead.
+>If you're using Azure Virtual Desktop (classic) without Azure Resource Manager integration, connect to your resources at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> instead.
>
> If you're using the US Gov portal, use <https://rdweb.wvd.azure.us/arm/webclient/https://docsupdatetracker.net/index.html>.
>
virtual-desktop Connect Web 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-web-2019.md
Title: Connect Azure Virtual Desktop (classic) web client - Azure
description: How to connect to Azure Virtual Desktop (classic) using the web client. Previously updated : 03/30/2020 Last updated : 03/21/2022
While any HTML5-capable browser should work, we officially support the following
## Access remote resources feed
-In a browser, navigate to the Azure Virtual Desktop web client at <https://rdweb.wvd.microsoft.com/webclient> and sign in with your user account.
+In a browser, navigate to the Azure Virtual Desktop web client at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> and sign in with your user account.
+
+>[!IMPORTANT]
+>We plan to start automatically redirecting to a new web client URL at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> as of April 18th, 2022. The current URLs at <https://rdweb.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> and <https://www.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> will still be available, but we recommend you update your bookmarks to the new URL at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> as soon as possible.
>[!NOTE]
->If you're using Azure Virtual Desktop with Azure Resource Manager integration, connect to your resources at <https://rdweb.wvd.microsoft.com/arm/webclient> instead.
+>If you're using Azure Virtual Desktop with Azure Resource Manager integration, connect to your resources at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> instead.
>[!NOTE]
>If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
To set up the secret in your key vault, use [Set-AzKeyVaultSecret](/powershell/m
Use the `$secretUrl` in the next step for [attaching the OS disk without using KEK](#without-using-a-kek).

### Disk encryption secret encrypted with a KEK
-Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the wrap [API](/rest/api/keyvault/wrapkey) to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64 URL encoded string, which you can then upload as a secret by using the [`Set-AzKeyVaultSecret`](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet.
+Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the wrap [API](/rest/api/keyvault/keys/wrap-key) to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64 URL encoded string, which you can then upload as a secret by using the [`Set-AzKeyVaultSecret`](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet.
```powershell
# This is the passphrase that was provided for encryption during the distribution installation
```
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm.md
When you have critical applications and business processes that rely on Azure re
Azure virtual machines collect the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data). For detailed information about the metrics and logs that are created by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).

## Overview page
-To begin exploring Azure Monitor, go to the **Overview** page for your virtual machine, and then select the **Monitoring** tab. The **Key Metrics** pane includes charts that show key health metrics, such as average CPU and network utilization. At the top of the pane, you can select a duration to change the time range for the charts, or select a chart to open the **Metrics** pane to drill down further or to create an alert rule.
+To begin exploring Azure Monitor, go to the **Overview** page for your virtual machine, and then select the **Monitoring** tab. You can see the number of active alerts on the tab.
+The **Alerts** pane shows the alerts that fired in the last 24 hours, along with important statistics about those alerts. If no alerts are configured for your VM, the pane includes a link to help you quickly create one.
+The **Key Metrics** pane includes charts that show key health metrics, such as average CPU and network utilization. At the top of the pane, you can select a duration to change the time range for the charts, or select a chart to open the **Metrics** pane to drill down further or to create an alert rule.
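If you'd rather script an alert than create one from a chart, the following is a minimal Az.Monitor sketch; the resource group, VM name, alert name, and 80% threshold are illustrative assumptions.

```powershell
# Minimal sketch (assumed names and threshold): alert when average CPU exceeds 80%
# over a 5-minute window, evaluated every minute.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "HighCpuAlert" -ResourceGroupName "myResourceGroup" `
    -TargetResourceId $vm.Id -Condition $criteria -Severity 3 `
    -WindowSize 00:05:00 -Frequency 00:01:00
```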
+ ## Activity log

The [Activity log](../azure-monitor/essentials/activity-log.md) displays recent activity by the virtual machine, including any configuration changes and when it was stopped and started. View the Activity log in the Azure portal, or create a [diagnostic setting to send it to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace), where you can view events over time or analyze them with other collected data.
virtual-machines Nc Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series-retirement.md
Title: NC-series retirement
-description: NC-series retirement by August 31, 2022
+description: NC-series retirement by August 31, 2023
Last updated 09/01/2021
-# Migrate your NC and NC_Promo series virtual machines by August 31, 2022
+# Migrate your NC and NC_Promo series virtual machines by August 31, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date for the Azure NC-series virtual machines by one year, to 31 August 2023, to give you more time to plan your migration.
+ As we continue to bring modern and optimized virtual machine instances to Azure by using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we are retiring our NC (v1) GPU VM sizes, powered by NVIDIA Tesla K80 GPUs on 31 August 2022.
+With this in mind, we are retiring our NC (v1) GPU VM sizes, powered by NVIDIA Tesla K80 GPUs on 31 August 2023.
## How does the NC-series migration affect me?
-After 31 August 2022, any remaining NC size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 31 August 2023, any NC size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they will no longer be billed while in the deallocated state.
This VM size retirement only impacts the VM sizes in the [NC-series](nc-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NC T4 v3](nct4-v3-series.md), and [ND v2](ndv2-series.md) series virtual machines.
virtual-machines Ncv2 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series-retirement.md
Title: NCv2-series retirement
-description: NCv2-series retirement by August 31, 2022
+description: NCv2-series retirement by August 31, 2023
Last updated 09/01/2021
-# Migrate your NCv2 series virtual machines by August 31, 2022
+# Migrate your NCv2 series virtual machines by August 31, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date for the Azure NCv2-series virtual machines by one year, to August 31, 2023, to give you more time to plan your migration.
+ As we continue to bring modern and optimized virtual machine instances to Azure by using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we are retiring our NC (v2) GPU VM sizes, powered by NVIDIA Tesla P100 GPUs on 31 August 2022.
+With this in mind, we are retiring our NC (v2) GPU VM sizes, powered by NVIDIA Tesla P100 GPUs on 31 August 2023.
## How does the NCv2-series migration affect me?
-After 31 August 2022, any remaining NCv2 size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 31 August 2023, any NCv2 size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they will no longer be billed while in the deallocated state.
This VM size retirement only impacts the VM sizes in the [NCv2-series](ncv2-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NC T4 v3](nct4-v3-series.md), and [ND v2](ndv2-series.md) series virtual machines.
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series-retirement.md
Title: ND-series retirement
-description: ND-series retirement by August 31, 2022
+description: ND-series retirement by August 31, 2023
Last updated 09/01/2021
-# Migrate your ND series virtual machines by August 31, 2022
+# Migrate your ND series virtual machines by August 31, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date for the Azure ND-series virtual machines by one year, to 31 August 2023, to give you more time to plan your migration.
+ As we continue to bring modern and optimized virtual machine instances to Azure by using the latest innovations in datacenter technologies, we thoughtfully plan how we retire aging hardware.
-With this in mind, we are retiring our ND GPU VM sizes, powered by NVIDIA Tesla P40 GPUs on 31 August 2022.
+With this in mind, we are retiring our ND GPU VM sizes, powered by NVIDIA Tesla P40 GPUs on 31 August 2023.
## How does the ND-series migration affect me?
-After 31 August 2022, any remaining ND size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host. These virtual machines will no longer be billed in the deallocated state.
+After 31 August 2023, any ND size virtual machines remaining in your subscription will be set to a deallocated state. These virtual machines will be stopped and removed from the host, and they will no longer be billed while in the deallocated state.
This VM size retirement only impacts the VM sizes in the [ND-series](nd-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NC T4 v3](nct4-v3-series.md), and [ND v2](ndv2-series.md) series virtual machines.
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series-retirement.md
Last updated 01/12/2020
-# Migrate your NV and NV_Promo series virtual machines by August 31, 2022
+# Migrate your NV and NV_Promo series virtual machines by August 31, 2023
+Based on feedback we've received from customers, we're happy to announce that we're extending the retirement date for the Azure NV-series and NV_Promo series virtual machines by one year, to August 31, 2023, to give you more time to plan your migration.
-We continue to bring modern and optimized virtual machine (VM) instances to Azure by using the latest innovations in datacenter technologies. As we innovate, we also thoughtfully plan how we retire aging hardware. With this context in mind, we're retiring our NV-series Azure VM sizes on September 1, 2022.
+We continue to bring modern and optimized virtual machine (VM) instances to Azure by using the latest innovations in datacenter technologies. As we innovate, we also thoughtfully plan how we retire aging hardware. With this context in mind, we're retiring our NV-series Azure VM sizes on August 31, 2023.
## How does the NV series migration affect me?
-After September 1, 2022, any remaining NV and NV_Promo-size VMs remaining in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host. These VMs will no longer be billed in the deallocated state.
+After August 31, 2023, any NV and NV_Promo-size VMs remaining in your subscription will be set to a deallocated state. These VMs will be stopped and removed from the host, and they will no longer be billed while in the deallocated state.
The current VM size retirement only affects the VM sizes in the [NV series](nv-series.md). This retirement doesn't affect the [NVv3](nvv3-series.md) and [NVv4](nvv4-series.md) series VMs.
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-sample-scripts.md
To set up the secret in your key vault, use [Set-AzKeyVaultSecret](/powershell/m
Use the `$secretUrl` in the next step for [attaching the OS disk without using KEK](#without-using-a-kek).

### Disk encryption secret encrypted with a KEK
-Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the wrap [API](/rest/api/keyvault/wrapkey) to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64 URL encoded string, which you can then upload as a secret by using the [`Set-AzKeyVaultSecret`](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet.
+Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the wrap [API](/rest/api/keyvault/keys/wrap-key) to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64 URL encoded string, which you can then upload as a secret by using the [`Set-AzKeyVaultSecret`](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet.
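As a complementary sketch to the wrap step (the vault and key names are assumptions), once the wrapped value exists you can upload it and capture both the secret URL and the KEK URL that the attach step needs:

```powershell
# $wrappedValue holds the base64url output of the wrap operation from the previous step
$secretValue = ConvertTo-SecureString -String $wrappedValue -AsPlainText -Force
$secret = Set-AzKeyVaultSecret -VaultName "myvault" -Name "wrappedPassphrase" -SecretValue $secretValue

# Secret URL (versioned) for the disk-attach step
$secretUrl = $secret.Id

# The KEK URL is the key identifier of the wrapping key
$kekUrl = (Get-AzKeyVaultKey -VaultName "myvault" -Name "myKek").Id
```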
```powershell
# This is the passphrase that was provided for encryption during the distribution installation
```
virtual-machines Automation Configure Extra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-extra-disks.md
The following sample code is an example configuration for the database tier. It
"size_gb" : 256, "caching" : "ReadWrite", "write_accelerator" : false,
- "start_lun" : 0
+ "lun_start" : 0
}, { "name" : "log",
The following sample code is an example configuration for the database tier. It
"disk-mbps-read-write" : 8, "caching" : "None", "write_accelerator" : false,
- "start_lun" : 9
+ "lun_start" : 9
}, { "name" : "backup",
The following sample code is an example configuration for the database tier. It
"size_gb" : 256, "caching" : "ReadWrite", "write_accelerator" : false,
- "start_lun" : 13
+ "lun_start" : 13
}
]
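Pulling the fragments above together, a single disk definition using the corrected `lun_start` key would look like the following sketch; the disk name and size values are illustrative.

```json
{
  "name"              : "data",
  "size_gb"           : 256,
  "caching"           : "ReadWrite",
  "write_accelerator" : false,
  "lun_start"         : 0
}
```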
virtual-machines Automation Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-sap-parameters.md
The table below contains the parameters stored in the sap-parameters.yaml file,
> [!div class="mx-tdCol2BreakAll "]
> | Parameter | Description | Type |
> | - | - | - |
-> | `NFS_Provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
+> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
> | `sap_mnt` | The NFS path for sap_mnt | Required |
> | `sap_trans` | The NFS path for sap_trans | Required |
> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install | Required |
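For orientation, a hedged `sap-parameters.yaml` fragment using these parameters might look like the following; the mount paths are illustrative assumptions, not defaults.

```yaml
# Illustrative values only; use the NFS paths from your own deployment
NFS_provider:               AFS
sap_mnt:                    devsapmnt.file.core.windows.net:/devsapmnt/sapmnt
sap_trans:                  devsapmnt.file.core.windows.net:/devsapmnt/saptrans
usr_sap_install_mountpoint: devsapmnt.file.core.windows.net:/devsapmnt/usrsapinstall
```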
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
-> | `NFS_Provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files. |
+> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files. |
> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional |

### Azure Files NFS Support
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
The table below defines the parameters used for defining the Key Vault informati
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | -- | -- |
-> | `NFS_Provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
+> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
> | `transport_volume_size` | Defines the size (in GB) for the 'transport' volume | Optional |

### Azure Files NFS Support
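As an illustration of selecting Azure Files NFS in a workload zone configuration, a hedged tfvars fragment follows; the variable names come from the table above, while the values and surrounding file layout are assumptions.

```
# Illustrative workload-zone tfvars fragment
NFS_provider          = "AFS"
transport_volume_size = 128
```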
vpn-gateway About Vpn Profile Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-vpn-profile-download.md
Previously updated : 02/08/2021 Last updated : 03/20/2021

# Working with P2S VPN client profile files
-Profile files contain information that is necessary to configure a VPN connection. This article will help you obtain and understand the information necessary for a VPN client profile.
+Client profile files contain information that is necessary to configure a VPN connection. This article helps you obtain and understand the information needed for a VPN client profile.
## Generate and download profile
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Previously updated : 09/05/2019 Last updated : 03/21/2022
Azure Web Application Firewall (WAF) with Front Door allows you to control acces
You can control access with a custom WAF rule that defines a priority number, a rule type, an array of match conditions, and an action.

-- **Priority:** is a unique integer that describes the order of evaluation of WAF rules. Rules with lower priority values are evaluated before rules with higher values. Priority numbers must be unique among all custom rules.
+- **Priority:** is a unique integer that describes the order of evaluation of WAF rules. Rules with lower priority values are evaluated before rules with higher values. The rule evaluation stops on any rule action except for *Log*. Priority numbers must be unique among all custom rules.
- **Action:** defines how to route a request if a WAF rule is matched. You can choose one of the following actions to apply when a request matches a custom rule.
- - *Allow* - WAF forwards the quest to the back-end, logs an entry in WAF logs and exits.
- - *Block* - Request is blocked, WAF sends response to client without forwarding the request to the back-end. WAF logs an entry in WAF logs.
- - *Log* - WAF logs an entry in WAF logs and continues to evaluate the next rule.
- - *Redirect* - WAF redirects request to a specified URI, logs an entry in WAF logs, and exits.
+ - *Allow* - WAF forwards the request to the backend, logs an entry in WAF logs, and exits.
+ - *Block* - Request is blocked. WAF sends response to client without forwarding the request to the backend. WAF logs an entry in WAF logs and exits.
+ - *Log* - WAF forwards the request to the backend, logs an entry in WAF logs, and continues to evaluate the next rule in the priority order.
+ - *Redirect* - WAF redirects the request to a specified URI, logs an entry in WAF logs, and exits.
- **Match condition:** defines a match variable, an operator, and a match value. Each rule may contain multiple match conditions. A match condition may be based on geo location, client IP addresses (CIDR), size, or string match. String match can be against a list of match variables.
- **Match variable:**
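To make the priority and action semantics concrete, here's a hedged Az.FrontDoor sketch; the rule names and the */admin* match value are illustrative assumptions, not from this article. The Log rule at priority 1 records a match and lets evaluation continue, and the Block rule at priority 2 then stops processing.

```powershell
# Two custom rules sharing a match condition, showing priority and action behavior
$match = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RequestUri -OperatorProperty Contains -MatchValue "/admin"

$logRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "LogAdminRequests" -RuleType MatchRule `
    -MatchCondition $match -Action Log -Priority 1

$blockRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "BlockAdminRequests" -RuleType MatchRule `
    -MatchCondition $match -Action Block -Priority 2
```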
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Previously updated : 01/21/2022 Last updated : 03/21/2022
WAF with Front Door provides detailed reporting on each threat it detects. Loggi
| Property | Description |
| - | - |
-|Action|Action taken on the request|
+|Action|Action taken on the request. WAF log shows all action values. WAF metrics show all action values, except *Log*.|
| ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the header field. |
| ClientPort | The IP port of the client that made the request. |
| Details | Additional details on the matched request |
web-application-firewall Waf Front Door Rate Limit Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-powershell.md
Previously updated : 02/26/2020 Last updated : 03/21/2022

# Configure a Web Application Firewall rate limit rule using Azure PowerShell
-The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from clients during a one-minute duration.
-This article shows how to configure a WAF rate limit rule that controls the number of requests allowed from clients to a web application that contains */promo* in the URL using Azure PowerShell.
+
+The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from a particular client IP address to the application during a rate limit duration. This article shows how to configure a WAF rate limit rule that controls the number of requests allowed from a particular client to a web application that contains */promo* in the URL using Azure PowerShell.
+ If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
> Rate limits are applied for each client IP address. If you have multiple clients accessing your Front Door from different IP addresses, they will have their own rate limits applied.

## Prerequisites

Before you begin to set up a rate limit policy, set up your PowerShell environment and create a Front Door profile.

### Set up your PowerShell environment

Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources. You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in with your Azure credentials and install the Az PowerShell module.

#### Connect to Azure with an interactive dialog for sign in

```
Connect-AzAccount
Install-Module PowerShellGet -Force -AllowClobber
Install-Module -Name Az.FrontDoor
```

### Create a Front Door profile

Create a Front Door profile by following the instructions described in [Quickstart: Create a Front Door profile](../../frontdoor/quickstart-create-front-door.md).

## Define URL match conditions

Define a URL match condition (URL contains /promo) using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject). The following example matches */promo* as the value of the *RequestUri* variable:
The following example matches */promo* as the value of the *RequestUri* variable
-MatchValue "/promo" ``` ## Create a custom rate limit rule+ Set a rate limit using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject).
-In the following example, the limit is set to 1000. Requests from any client to the promo page exceeding 1000 during one minute are blocked until the next minute starts.
+In the following example, the limit is set to 1000. Requests from a particular client IP address to the promo page exceeding 1000 during one minute are blocked until the next minute starts.
```powershell-interactive
$promoRateLimitRule = New-AzFrontDoorWafCustomRuleObject `
In the following example, the limit is set to 1000. Requests from any client to
-Action Block -Priority 1
```

## Configure a security policy

Find the name of the resource group that contains the Front Door profile using `Get-AzResourceGroup`. Next, configure a security policy with a custom rate limit rule using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Front Door profile.
The below example uses the Resource Group name *myResourceGroupFD1* with the ass
-EnabledState Enabled
```

## Link policy to a Front Door front-end host

Link the security policy object to an existing Front Door front-end host and update Front Door properties. First, retrieve the Front Door object using the [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor) command. Next, set the front-end *WebApplicationFirewallPolicyLink* property to the *resourceId* of the `$ratePolicy` created in the previous step using the [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor) command.
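A short sketch of that linking step follows, reusing the *myResourceGroupFD1* resource group from earlier; the Front Door name is an illustrative assumption.

```powershell
# Retrieve the Front Door, point its front end at the WAF policy, and save
$frontDoor = Get-AzFrontDoor -ResourceGroupName "myResourceGroupFD1" -Name "myFrontDoor"
$frontDoor[0].FrontendEndpoints[0].WebApplicationFirewallPolicyLink = $ratePolicy.Id
Set-AzFrontDoor -InputObject $frontDoor[0]
```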
web-application-firewall Application Gateway Waf Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-metrics.md
For metrics supported by Application Gateway V1 SKU, see [Application Gateway v1
3. In **Metrics**, select the metric to add:
+ :::image type="content" source="../media/waf-appgateway-metrics/appgw-waf-metrics-1.png" alt-text="Screenshot of waf metrics page." lightbox="../media/waf-appgateway-metrics/appgw-waf-metrics-1-expanded.png":::
4. Select Add filter to add a filter:
+ :::image type="content" source="../media/waf-appgateway-metrics/appgw-waf-metrics-2.png" alt-text="Screenshot of adding filters to metrics." lightbox="../media/waf-appgateway-metrics/appgw-waf-metrics-2-expanded.png":::
5. Select **New chart** to add a new chart.