Updates from: 03/08/2022 02:09:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
Under the project root folder, open the *appsettings.json* file. This file conta
||||
|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `https://contoso.b2clogin.com`).|
|AzureAdB2C|Domain| Your full Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso.onmicrosoft.com`).|
-|AzureAdB2C|ClientId| The web API application ID from [step 2](#step-2-register-a-web-application).|
+|AzureAdB2C|ClientId| The Web App Application (client) ID from [step 2](#step-2-register-a-web-application).|
|AzureAdB2C|SignUpSignInPolicyId|The user flow or custom policy you created in [step 1](#step-1-configure-your-user-flow).|

Your final configuration file should look like the following JSON:
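For reference, a minimal sketch of what that `appsettings.json` section could look like; the client ID and policy name below are placeholder values, not real ones:

```json
{
  "AzureAdB2C": {
    "Instance": "https://contoso.b2clogin.com",
    "Domain": "contoso.onmicrosoft.com",
    "ClientId": "00000000-0000-0000-0000-000000000000",
    "SignUpSignInPolicyId": "B2C_1_signupsignin"
  }
}
```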
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [Transmit Security](https://www.transmitsecurity.com/bindid) passwordless authentication solution **BindID**. BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth login experience for all customers across every device and channel eliminating fraud, phishing, and credential reuse.
+In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with **BindID**, a passwordless authentication solution from [Transmit Security](https://www.transmitsecurity.com/bindid). BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth login experience for all customers across every device and channel, eliminating fraud, phishing, and credential reuse.
## Scenario description
The following architecture diagram shows the implementation.
|Step | Description |
|:--| :--|
-| 1. | User arrives at a login page. Users select sign-in/sign-up and enter username into the page.
-| 2. | Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request.
+| 1. | User attempts to log in to an Azure AD B2C application and is forwarded to Azure AD B2C’s combined sign-in and sign-up policy.
+| 2. | Azure AD B2C redirects the user to BindID using the OpenID Connect (OIDC) authorization code flow.
| 3. | BindID authenticates the user using appless FIDO2 biometrics, such as fingerprint.
| 4. | A decentralized authentication response is returned to BindID.
| 5. | The OIDC response is passed on to Azure AD B2C.
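The redirect in step 2 is a standard OIDC authorization-code request. As a sketch only, the authority host, client ID, and redirect URI below are hypothetical placeholders, not real BindID or Azure AD B2C values:

```python
from urllib.parse import urlencode

def build_authorize_url(authority, client_id, redirect_uri, state):
    """Builds an OIDC authorization-code request like the one Azure AD B2C
    sends when redirecting the user to the identity provider."""
    params = {
        "client_id": client_id,        # app registered with the identity provider
        "response_type": "code",       # authorization code flow
        "scope": "openid email",       # OIDC scopes requested
        "redirect_uri": redirect_uri,  # where the authorization code is returned
        "state": state,                # opaque value echoed back, guards against CSRF
    }
    return f"{authority}/authorize?" + urlencode(params)

url = build_authorize_url(
    "https://signin.example-bindid.io",  # placeholder authority
    "my-client-id",
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp",
    "abc123",
)
```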
To get started, you'll need:
### Step 1 - Create an application registration in BindID
-From [Applications](https://admin.bindid-sandbox.io/console/#/applications) to configure your tenant application in BindID, the following information is needed
+To configure your tenant application in BindID from [Applications](https://admin.bindid-sandbox.io/console/#/applications), the following information is needed:
| Property | Description |
|:--|:--|
The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azur
1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
-2. Click on your previously created **CustomSignUpSignIn** and select the settings:
+2. Select your previously created **CustomSignUpSignIn** and select the settings:
a. **Application**: select the registered app (sample is JWT)
active-directory-b2c Protocols Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/protocols-overview.md
https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/v2.0/authorize
https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/v2.0/token
```
+If you're using a [custom domain](custom-domain.md), replace `{tenant}.b2clogin.com` with the custom domain, such as `contoso.com`, in the endpoints.
+ In nearly all OAuth and OpenID Connect flows, four parties are involved in the exchange:
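As a quick illustration of the custom-domain swap described above (the tenant name and custom domain here are placeholders), only the host portion of the endpoint changes:

```python
def b2c_endpoint(tenant, kind, custom_domain=None):
    """Builds an Azure AD B2C authorize/token endpoint URL; with a custom
    domain, the {tenant}.b2clogin.com host is replaced by that domain."""
    host = custom_domain or f"{tenant}.b2clogin.com"
    return f"https://{host}/{tenant}.onmicrosoft.com/oauth2/v2.0/{kind}"

default_url = b2c_endpoint("contoso", "authorize")
custom_url = b2c_endpoint("contoso", "token", custom_domain="login.contoso.com")
```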
active-directory Msal Net Web Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-web-browsers.md
MSAL.NET is able to respond with an HTTP message when a token is received or in
```csharp
var options = new SystemWebViewOptions() {
- HtmlMessageError = "<p> An error occured: {0}. Details {1}</p>",
+ HtmlMessageError = "<p> An error occurred: {0}. Details {1}</p>",
    BrowserRedirectSuccess = new Uri("https://www.microsoft.com")
};
```
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Organizations can now improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform to RDP into a **Windows Server 2019 Datacenter edition** and later or **Windows 10 1809** and later. Additionally, you will be able to centrally control and enforce Azure RBAC and Conditional Access policies that allow or deny access to the VMs. This article shows you how to create and configure a Windows VM and log in with Azure AD-based authentication. There are many security benefits of using Azure AD-based authentication to log in to Windows VMs in Azure, including:
-- Use your corporate AD credentials to login to Windows VMs in Azure.
+- Use your corporate Azure AD credentials to login to Windows VMs in Azure.
- Reduce your reliance on local administrator accounts; you do not need to worry about credential loss or theft, or users configuring weak credentials.
- Password complexity and password lifetime policies configured for your Azure AD directory help secure Windows VMs as well.
- With Azure role-based access control (Azure RBAC), specify who can log in to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
This topology implements the following use cases:
* Only one Azure AD tenant sync can be configured to write back to Active Directory for the same object. This includes device and group writeback as well as Hybrid Exchange configurations – these features can only be configured in one tenant. The only exception here is Password Writeback – see below.
* It is supported to configure Password Hash Sync from Active Directory to multiple Azure AD tenants for the same user object. If Password Hash Sync is enabled for a tenant, then Password Writeback may be enabled as well, and this can be done on multiple tenants: if the password is changed on one tenant, then password writeback will update it in Active Directory, and Password Hash Sync will update the password in the other tenants.
* It is not supported to add and verify the same custom domain name in more than one Azure AD tenant, even if these tenants are in different Azure environments.
-* It is not supported to configure hybrid experiences such as Seamless SSO and Hybrid Azure AD Join on more than one tenant. Doing so would overwrite the configuration of the other tenant and would make it unusable.
-* You can synchronize device objects to more than one tenant but only one tenant can be configured to trust a device.
+* It is not supported to configure hybrid experiences that utilize forest level configuration in AD, such as Seamless SSO and Hybrid Azure AD Join (non-targeted approach), with more than one tenant. Doing so would overwrite the configuration of the other tenant, making it no longer usable. You can find additional information in [Plan your hybrid Azure Active Directory join deployment](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-plan#hybrid-azure-ad-join-for-single-forest-multiple-azure-ad-tenants).
+* You can synchronize device objects to more than one tenant but a device can be Hybrid Azure AD Joined to only one tenant.
* Each Azure AD Connect instance should be running on a domain-joined machine.

>[!NOTE]
active-directory Alexishr Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alexishr-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure AlexisHR for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to AlexisHR.
++
+writer: twimmers
+
+ms.assetid: 438a007c-2c3f-466f-ac9a-7e752e2532a4
++++ Last updated : 03/05/2022+++
+# Tutorial: Configure AlexisHR for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both AlexisHR and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AlexisHR](https://alexishr.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in AlexisHR.
+> * Remove users in AlexisHR when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and AlexisHR.
+> * [Single sign-on](alexishr-tutorial.md) to AlexisHR (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in AlexisHR with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and AlexisHR](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure AlexisHR to support provisioning with Azure AD
+
+1. Log in to [AlexisHR Admin Console](https://app.alexishr.com/login/). Navigate to **Settings > Access tokens**.
+
+ ![Access User Management](media/alexishr-provisioning-tutorial/login.png)
+
+1. Once on the Access tokens page, fill in the **Name** and **Description** text boxes and click **Save**. A pop-up window will appear with the token in it. Copy and save the token. This value will be entered in the **Secret Token** field in the Provisioning tab of your AlexisHR application in the Azure portal.
+
+ ![Access tokens](media/alexishr-provisioning-tutorial/token.png)
+
+## Step 3. Add AlexisHR from the Azure AD application gallery
+
+Add AlexisHR from the Azure AD application gallery to start managing provisioning to AlexisHR. If you have previously set up AlexisHR for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to AlexisHR, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to AlexisHR
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in AlexisHR based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for AlexisHR in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **AlexisHR**.
+
+ ![The AlexisHR link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your AlexisHR **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to AlexisHR. If the connection fails, ensure your AlexisHR account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to AlexisHR**.
+
+1. Review the user attributes that are synchronized from Azure AD to AlexisHR in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AlexisHR for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AlexisHR API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by AlexisHR
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ |active|Boolean||&check;
+ |title|String||
+ |emails[type eq "work"].value|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |phoneNumbers[type eq "work"].value|String||&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+
+ > [!NOTE]
+ > The phoneNumbers value should be in E.164 format, for example, +16175551212.
+
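A minimal sketch of a SCIM 2.0 user payload that satisfies the attributes the mapping table marks as required, including an E.164-formatted work phone number. The user values are hypothetical and the helper is illustrative, not part of the AlexisHR API:

```python
import re

E164 = re.compile(r"^\+[1-9]\d{1,14}$")  # E.164: "+" followed by up to 15 digits

def build_scim_user(user_name, given, family, email, phone):
    """Builds a SCIM 2.0 user resource with the required attributes from the
    mapping table: userName, active, emails, name.givenName, name.familyName,
    and phoneNumbers (validated as E.164)."""
    if not E164.match(phone):
        raise ValueError(f"phone number {phone!r} is not in E.164 format")
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "active": True,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"type": "work", "value": email, "primary": True}],
        "phoneNumbers": [{"type": "work", "value": phone}],
    }

user = build_scim_user("bob@contoso.com", "Bob", "Jones",
                       "bob@contoso.com", "+16175551212")
```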
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for AlexisHR, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to AlexisHR by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Appsec Flow Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appsec-flow-sso-tutorial.md
Previously updated : 02/23/2022 Last updated : 03/02/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Conviso Platform single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory Atea Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Atea to support provisioning with Azure AD
-To configure Atea to support provisioning with Azure AD - please write an email to Atea support team <SSO.Support@atea.com>
+Contact [Atea support](mailto:sso.support@atea.com) to configure Atea to support provisioning with Azure AD.
## Step 3. Add Atea from the Azure AD application gallery
active-directory Envoy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envoy-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Envoy | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Envoy'
description: Learn how to configure single sign-on between Azure Active Directory and Envoy.
Previously updated : 08/25/2021 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Envoy
+# Tutorial: Azure AD SSO integration with Envoy
In this tutorial, you'll learn how to integrate Envoy with Azure Active Directory (Azure AD). When you integrate Envoy with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Envoy single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
Before configuring G Suite for automatic user provisioning with Azure AD, you will need to enable SCIM provisioning on G Suite.
-1. Sign in to the [G Suite Admin console](https://admin.google.com/) with your administrator account, and then select **Security**. If you don't see the link, it might be hidden under the **More Controls** menu at the bottom of the screen.
+1. Sign in to the [G Suite Admin console](https://admin.google.com/) with your administrator account, then click on **Main menu** and then select **Security**. If you don't see it, it might be hidden under the **Show More** menu.
- ![G Suite Security](./media/g-suite-provisioning-tutorial/gapps-security.png)
+ ![G Suite Security](./media/g-suite-provisioning-tutorial/security.png)
+
+ ![G Suite Show More](./media/g-suite-provisioning-tutorial/show-more.png)
-2. On the **Security** page, select **API Reference**.
+1. Navigate to **Security -> Access and data control -> API controls**. Select the **Trust internal, domain-owned apps** check box and then click **SAVE**.
- ![G Suite API](./media/g-suite-provisioning-tutorial/gapps-api.png)
-
-3. Select **Enable API access**.
-
- ![G Suite API Enabled](./media/g-suite-provisioning-tutorial/gapps-api-enabled.png)
+ ![G Suite API](./media/g-suite-provisioning-tutorial/api-control.png)
> [!IMPORTANT]
- > For every user that you intend to provision to G Suite, their user name in Azure AD **must** be tied to a custom domain. For example, user names that look like bob@contoso.onmicrosoft.com are not accepted by G Suite. On the other hand, bob@contoso.com is accepted. You can change an existing user's domain by following the instructions [here](../fundamentals/add-custom-domain.md).
+ > For every user that you intend to provision to G Suite, their user name in Azure AD **must** be tied to a custom domain. For example, user names that look like bob@contoso.onmicrosoft.com are not accepted by G Suite. On the other hand, bob@contoso.com is accepted. You can change an existing user's domain by following the instructions [here](../fundamentals/add-custom-domain.md).
-4. Once you have added and verified your desired custom domains with Azure AD, you must verify them again with G Suite. To verify domains in G Suite, refer to the following steps:
+1. Once you have added and verified your desired custom domains with Azure AD, you must verify them again with G Suite. To verify domains in G Suite, refer to the following steps:
- a. In the [G Suite Admin Console](https://admin.google.com/), select **Domains**.
+ 1. In the [G Suite Admin Console](https://admin.google.com/), navigate to **Account -> Domains -> Manage Domains**.
- ![G Suite Domains](./media/g-suite-provisioning-tutorial/gapps-domains.png)
+ ![G Suite Domains](./media/g-suite-provisioning-tutorial/domains.png)
- b. Select **Add a domain or a domain alias**.
+ 1. In the Manage Domain page, click on **Add a domain**.
- ![G Suite Add Domain](./media/g-suite-provisioning-tutorial/gapps-add-domain.png)
+ ![G Suite Add Domain](./media/g-suite-provisioning-tutorial/add-domains.png)
- c. Select **Add another domain**, and then type in the name of the domain that you want to add.
+ 1. In the Add Domain page, type in the name of the domain that you want to add.
- ![G Suite Add Another](./media/g-suite-provisioning-tutorial/gapps-add-another.png)
+ ![G Suite Verify Domain](./media/g-suite-provisioning-tutorial/verify-domains.png)
- d. Select **Continue and verify domain ownership**. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see [Verify your site ownership](https://support.google.com/webmasters/answer/35179).
+ 1. Select **ADD DOMAIN & START VERIFICATION**. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see [Verify your site ownership](https://support.google.com/webmasters/answer/35179).
- e. Repeat the preceding steps for any additional domains that you intend to add to G Suite.
+ 1. Repeat the preceding steps for any additional domains that you intend to add to G Suite.
-5. Next, determine which admin account you want to use to manage user provisioning in G Suite. Navigate to **Admin Roles**.
+1. Next, determine which admin account you want to use to manage user provisioning in G Suite. Navigate to **Account -> Admin roles**.
- ![G Suite Admin](./media/g-suite-provisioning-tutorial/gapps-admin.png)
+ ![G Suite Admin](./media/g-suite-provisioning-tutorial/admin-roles.png)
-6. For the **Admin role** of that account, edit the **Privileges** for that role. Make sure to enable all **Admin API Privileges** so that this account can be used for provisioning.
+1. For the **Admin role** of that account, edit the **Privileges** for that role. Make sure to enable all **Admin API Privileges** so that this account can be used for provisioning.
- ![G Suite Admin Privileges](./media/g-suite-provisioning-tutorial/gapps-admin-privileges.png)
+ ![G Suite Admin Privileges](./media/g-suite-provisioning-tutorial/admin-privilege.png)
## Step 3. Add G Suite from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
### To configure automatic user provisioning for G Suite in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to login to `portal.azure.com` and will not be able to use `aad.portal.azure.com`.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to log in to `portal.azure.com` and will not be able to use `aad.portal.azure.com`.
![Enterprise applications blade](./media/g-suite-provisioning-tutorial/enterprise-applications.png)
active-directory Hoxhunt Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hoxhunt-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Hoxhunt | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Hoxhunt'
description: Learn how to configure single sign-on between Azure Active Directory and Hoxhunt.
Previously updated : 08/31/2021 Last updated : 03/04/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Hoxhunt
+# Tutorial: Azure AD SSO integration with Hoxhunt
In this tutorial, you'll learn how to integrate Hoxhunt with Azure Active Directory (Azure AD). When you integrate Hoxhunt with Azure AD, you can:
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Hoxhunt supports **SP** initiated SSO.
* Hoxhunt supports [Automated user provisioning](hoxhunt-provisioning-tutorial.md).
-## Adding Hoxhunt from the gallery
+## Add Hoxhunt from the gallery
To configure the integration of Hoxhunt into Azure AD, you need to add Hoxhunt from the gallery to your list of managed SaaS apps.
To configure the integration of Hoxhunt into Azure AD, you need to add Hoxhunt f
1. In the **Add from the gallery** section, type **Hoxhunt** in the search box.
1. Select **Hoxhunt** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for Hoxhunt

Configure and test Azure AD SSO with Hoxhunt using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Hoxhunt.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type the URL:
- `https://app.hoxhunt.com/`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://app.hoxhunt.com/saml/consume/<ID>`
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://app.hoxhunt.com/saml/consume/<ID>`
+ c. In the **Sign on URL** text box, type the URL:
+ `https://game.hoxhunt.com/`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Identifier. Contact [Hoxhunt Client support team](mailto:support@hoxhunt.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Hoxhunt Client support team](mailto:support@hoxhunt.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Hoxhunt you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Hoxhunt you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Joyn Fsm Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joyn-fsm-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Joyn FSM for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Joyn FSM.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: e778e26b-c998-4432-85b7-5a0d0047ccae
+++
+ms.devlang: na
+ Last updated : 02/27/2022+++
+# Tutorial: Configure Joyn FSM for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Joyn FSM and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Joyn FSM](https://www.sevenlakes.com/solutions/field-service-management/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Joyn FSM
+> * Remove users in Joyn FSM when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Joyn FSM
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Joyn FSM](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Joyn FSM to support provisioning with Azure AD
+
+Contact your [SevenLakes Customer Success Representative](mailto:CustomerSuccessTeam@sevenlakes.com) to obtain the Tenant URL and Secret Token, which are required for configuring provisioning.
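+As a rough sketch of how those values are used: the provisioning service calls the tenant's SCIM endpoint, sending the Secret Token as a bearer credential. The URL and token below are placeholders, not real values; the request shape assumes standard SCIM 2.0 conventions:

```python
from urllib.request import Request

# Placeholder values for illustration; obtain the real ones from SevenLakes.
tenant_url = "https://example.joynfsm.invalid/scim/v2"
secret_token = "<secret-token>"

# A SCIM query of the kind the provisioning service might issue to verify connectivity.
request = Request(
    f"{tenant_url}/Users?startIndex=1&count=1",
    headers={
        "Authorization": f"Bearer {secret_token}",
        "Accept": "application/scim+json",
    },
)
print(request.get_full_url())
```

The **Test Connection** button in Step 5 performs a similar authenticated call against the Tenant URL you enter.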
+
+## Step 3. Add Joyn FSM from the Azure AD application gallery
+
+Add Joyn FSM from the Azure AD application gallery to start managing provisioning to Joyn FSM. If you have previously set up Joyn FSM for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Joyn FSM, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Joyn FSM
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Joyn FSM based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Joyn FSM in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Joyn FSM**.
+
+ ![The Joyn FSM link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning mode](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Joyn FSM Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Joyn FSM. If the connection fails, ensure your Joyn FSM account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Joyn FSM**.
+
+1. Review the user attributes that are synchronized from Azure AD to Joyn FSM in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Joyn FSM for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Joyn FSM API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Joyn FSM|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||&check;
+ |active|Boolean||
+ |name.formatted|String||
+ |displayName|String||
+ |externalId|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |addresses[type eq "work"].formatted|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:joynfsm:2.0:User:xid|String||
+ |urn:ietf:params:scim:schemas:extension:joynfsm:2.0:User:joynFieldId|String||
+
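+The mappings above translate into a SCIM user payload on the wire. The following is an illustrative sketch (all attribute values are invented, and the exact shape depends on the mappings you configure in the portal) of what a provisioned user could look like under standard SCIM 2.0 semantics:

```python
import json

# Invented sample values; the schemas and attribute paths mirror the mapping table.
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "alice@contoso.com",  # the matching attribute
    "active": True,
    "displayName": "Alice Example",
    "name": {"formatted": "Alice Example", "givenName": "Alice", "familyName": "Example"},
    "emails": [{"type": "work", "value": "alice@contoso.com"}],
    "phoneNumbers": [{"type": "mobile", "value": "+16175551212"}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {"department": "Field Ops"},
}
print(json.dumps(user, indent=2))
```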
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Joyn FSM, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Joyn FSM by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
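+The cadence can be illustrated with simple date arithmetic (the start time is hypothetical, and the 40-minute interval is approximate):

```python
from datetime import datetime, timedelta

# Hypothetical initial cycle start; incremental cycles run roughly every 40 minutes.
initial_cycle_start = datetime(2022, 3, 8, 9, 0)
interval = timedelta(minutes=40)

next_cycles = [initial_cycle_start + interval * n for n in range(1, 4)]
print([t.strftime("%H:%M") for t in next_cycles])  # ['09:40', '10:20', '11:00']
```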
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Looker Analytics Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/looker-analytics-platform-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Looker Analytics Platform | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Looker Analytics Platform'
description: Learn how to configure single sign-on between Azure Active Directory and Looker Analytics Platform.
Previously updated : 11/10/2020 Last updated : 02/23/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Looker Analytics Platform
+# Tutorial: Azure AD SSO integration with Looker Analytics Platform
In this tutorial, you'll learn how to integrate Looker Analytics Platform with Azure Active Directory (Azure AD). When you integrate Looker Analytics Platform with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Looker Analytics Platform single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
The Azure AD provisioning service allows you to scope who will be provisioned ba
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Meta Networks Connector based on user and/or group assignments in Azure AD.

> [!TIP]
-> You may also choose to enable SAML-based single sign-on for Meta Networks Connector, following the instructions provided in the [Meta Networks Connector Single sign-on tutorial](./metanetworksconnector-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other
+> You may also choose to enable SAML-based single sign-on for Meta Networks Connector, following the instructions provided in the [Meta Networks Connector Single sign-on tutorial](./metanetworksconnector-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
### To configure automatic user provisioning for Meta Networks Connector in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
|phonenumbers[type eq "work"].value|String||

> [!NOTE]
- > phonenumbers[type eq "work"].value should be in E164 format.For example +16175551212
+ > phonenumbers value should be in E164 format. For example +16175551212
1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Meta Networks Connector**.
active-directory Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Postman | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Postman'
description: Learn how to configure single sign-on between Azure Active Directory and Postman.
Previously updated : 06/04/2021 Last updated : 02/24/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Postman
+# Tutorial: Azure AD SSO integration with Postman
In this tutorial, you'll learn how to integrate Postman with Azure Active Directory (Azure AD). When you integrate Postman with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Postman single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
active-directory Proprofs Knowledge Base Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proprofs-knowledge-base-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Knowledge Base | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ProProfs Knowledge Base'
description: Learn how to configure single sign-on between Azure Active Directory and ProProfs Knowledge Base.
Previously updated : 11/27/2020 Last updated : 02/24/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Knowledge Base
+# Tutorial: Azure AD SSO integration with ProProfs Knowledge Base
In this tutorial, you'll learn how to integrate ProProfs Knowledge Base with Azure Active Directory (Azure AD). When you integrate ProProfs Knowledge Base with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* ProProfs Knowledge Base single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ProProfs Knowledge Base supports **IDP** initiated SSO
+* ProProfs Knowledge Base supports **IDP** initiated SSO.
## Adding ProProfs Knowledge Base from the gallery
active-directory Us Bank Prepaid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/us-bank-prepaid-tutorial.md
Previously updated : 03/03/2022 Last updated : 03/04/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Identifier** text box, type the value: `USBank:SAML2.0:Prepaid_SP`
- b. In the **Reply URL** text box, type the URL:
- `https://uat-federation.usbank.com/sp/ACS.saml2`
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<Environment>.usbank.com/sp/ACS.saml2`
c. In the **Sign-on URL** text box, type a URL using the following pattern:
`https://<Environment>.usbank.com/sp/startSSO.ping?PartnerIdpId=<ID>`

> [!NOTE]
- > The value is not real. Update this value with the actual Sign-on URL. Contact [U.S. Bank Prepaid Client support team](mailto:web.access.management@usbank.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [U.S. Bank Prepaid Client support team](mailto:web.access.management@usbank.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the U.S. Bank Prepaid for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the U.S. Bank Prepaid tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the U.S. Bank Prepaid for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the U.S. Bank Prepaid tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the U.S. Bank Prepaid for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure U.S. Bank Prepaid you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure U.S. Bank Prepaid you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
description: Learn what ports and addresses are required to control egress traff
Previously updated : 01/12/2021 Last updated : 03/07/2022 #Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
See [virtual network route table documentation](../virtual-network/virtual-netwo
### Adding firewall rules
+> [!NOTE]
+> For applications outside of the kube-system or gatekeeper-system namespaces that need to talk to the API server, you must add a network rule allowing TCP communication to port 443 for the API server IP, in addition to an application rule for the fqdn-tag AzureKubernetesService.
++ Below are three network rules you can configure on your firewall; you may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to ports 1194 and 123 via UDP (if you're deploying to Azure China 21Vianet, you might require [more](#azure-china-21vianet-required-network-rules)). Both of these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US. Finally, we'll add a third network rule opening port 123 to the `ntp.ubuntu.com` FQDN via UDP (adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you'll need to adapt it when using your own options).
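As a summary before translating them into your firewall's rule syntax, the three rules described above can be sketched as data (the rule names and the `AzureCloud.EastUS` destination tag are illustrative; adapt them to your region and deployment):

```python
# Illustrative summary of the three network rules; not actual firewall configuration.
rules = [
    {"name": "tunnel-tcp", "protocol": "TCP", "ports": [9000], "destination": "AzureCloud.EastUS"},
    {"name": "tunnel-udp", "protocol": "UDP", "ports": [1194, 123], "destination": "AzureCloud.EastUS"},
    {"name": "ntp", "protocol": "UDP", "ports": [123], "destination": "ntp.ubuntu.com"},  # FQDN network rule
]
for rule in rules:
    print(rule["name"], rule["protocol"], rule["ports"], rule["destination"])
```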
app-service Overview Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-certificates.md
Title: Certificates in App Service Environment
-description: Explain topics related to certificates in an App Service Environment. Learn how certificate bindings work on the single-tenanted apps in an ASE.
+description: Explain topics related to certificates in an App Service Environment. Learn how certificate bindings work on the single-tenanted apps in an App Service Environment.
Previously updated : 11/15/2021 Last updated : 03/04/2022
> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans.
>
-The App Service Environment (ASE) is a deployment of the Azure App Service that runs within your Azure virtual network. It can be deployed with an internet accessible application endpoint or an application endpoint that is in your virtual network. If you deploy the ASE with an internet accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your virtual network, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./creation.md) document.
+The App Service Environment is a deployment of the Azure App Service that runs within your Azure virtual network. It can be deployed with an internet accessible application endpoint or an application endpoint that is in your virtual network. If you deploy the App Service Environment with an internet accessible endpoint, that deployment is called an External App Service Environment. If you deploy the App Service Environment with an endpoint in your virtual network, that deployment is called an ILB App Service Environment. You can learn more about the ILB App Service Environment from the [Create and use an ILB App Service Environment](./creation.md) document.
## Application certificates
-Apps that are hosted in an ASE can use the app-centric certificate features that are available in the multi-tenant App Service. Those features include:
+Applications that are hosted in an App Service Environment support the following app-centric certificate features, which are also available in the multi-tenant App Service. For requirements and instructions for uploading and managing those certificates, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
-- SNI certificates
-- KeyVault hosted certificates
+- [SNI certificates](../configure-ssl-certificate.md)
+- [KeyVault hosted certificates](../configure-ssl-certificate.md#import-a-certificate-from-key-vault)
-The requirements and instructions for uploading and managing those certificates are available in [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
+Once you add the certificate to your App Service app or function app, you can [secure a custom domain name with it](../configure-ssl-bindings.md) or [use it in your application code](../configure-ssl-certificate-in-code.md).
-Once the certificate is added to your App Service app or function app, you can [secure a custom domain name with it](../configure-ssl-bindings.md) or [use it in your application code](../configure-ssl-certificate-in-code.md).
+### Limitations
+
+[App Service managed certificates](../configure-ssl-certificate.md#create-a-free-managed-certificate) aren't supported on apps that are hosted in an App Service Environment.
## TLS settings
You can [configure the TLS setting](../configure-ssl-bindings.md#enforce-tls-ver
## Private client certificate
-A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you will need to upload the client certificate to your app. The following instructions will load certificates to the truststore of the workers that your app is running on. If you load the certificate to one app, you can use it with your other apps in the same App Service plan without uploading the certificate again.
+A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you'll need to upload the client certificate to your app. The following instructions will load certificates to the truststore of the workers that your app is running on. You only need to upload the certificate once to use it with apps that are in the same App Service plan.
>[!NOTE]
> Private client certificates are not supported outside the app. This limits usage in scenarios such as pulling the app container image from a registry using a private certificate and TLS validating through the front-end servers using a private certificate.
-Follow these steps to upload the certificate (*.cer* file) to your app in your ASE. The *.cer* file can be exported from your certificate. For testing purposes, there is a PowerShell example at the end to generate a temporary self-signed certificate:
+Follow these steps to upload the certificate (*.cer* file) to your app in your App Service Environment. The *.cer* file can be exported from your certificate. For testing purposes, there's a PowerShell example at the end to generate a temporary self-signed certificate:
1. Go to the app that needs the certificate in the Azure portal
-1. Go to **TLS/SSL settings** in the app. Click **Public Key Certificate (.cer)**. Select **Upload Public Key Certificate**. Provide a name. Browse and select your *.cer* file. Select upload.
+1. Go to **TLS/SSL settings** in the app. Select **Public Key Certificate (.cer)**. Select **Upload Public Key Certificate**. Provide a name. Browse and select your *.cer* file. Select upload.
1. Copy the thumbprint. 1. Go to **Application Settings**. Create an app setting WEBSITE_LOAD_ROOT_CERTIFICATES with the thumbprint as the value. If you have multiple certificates, you can put them in the same setting separated by commas and no whitespace like 84EC242A4EC7957817B8E48913E50953552DAFA6,6A5C65DC9247F762FE17BF8D4906E04FE6B31819
-The certificate will be available by all the apps in the same app service plan as the app, which configured that setting. If you need it to be available for apps in a different App Service plan, you will need to repeat the app setting operation in an app in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
+The certificate will be available to all the apps in the same App Service plan as the app that configured the setting. If you need it to be available for apps in a different App Service plan, you'll need to repeat the app setting operation in an app in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
```azurepowershell-interactive
dir Cert:\LocalMachine\Root
```
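The WEBSITE_LOAD_ROOT_CERTIFICATES value described above is a comma-separated list of thumbprints with no whitespace. A small sketch of composing and sanity-checking that value, using the sample thumbprints from the text:

```python
import re

# Sample thumbprints from the text above; replace with your own certificate thumbprints.
thumbprints = [
    "84EC242A4EC7957817B8E48913E50953552DAFA6",
    "6A5C65DC9247F762FE17BF8D4906E04FE6B31819",
]

# Each thumbprint is a 40-character uppercase hex string (SHA-1).
assert all(re.fullmatch(r"[0-9A-F]{40}", t) for t in thumbprints)

# Comma-separated with no whitespace, per the app setting's expected format.
setting_value = ",".join(thumbprints)
print(setting_value)
```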
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Title: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service description: Learn how to deploy an ASP.NET Core web app to Azure App Service and connect to an Azure SQL Database. Previously updated : 02/04/2022 Last updated : 03/02/2022 ms.devlang: csharp-+
# Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
-In this tutorial, you'll learn how to deploy an ASP.NET Core app to Azure App Service and connect to an Azure SQL Database. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. Although this tutorial uses an ASP.NET Core 6.0 app, the process is the same for other versions of ASP.NET Core and ASP.NET Framework.
+In this tutorial, you'll learn how to deploy an ASP.NET Core app to Azure App Service and connect to an Azure SQL Database. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. Although this tutorial uses an ASP.NET Core 6.0 app, the process is the same for other versions of ASP.NET Core and ASP.NET Framework.
-This article assumes you're familiar with [.NET](https://dotnet.microsoft.com/download/dotnet/6.0) and have it installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free).
+This article assumes you're familiar with [.NET](https://dotnet.microsoft.com/download/dotnet/6.0) and have it installed locally. You'll also need an Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free).
## 1 - Set up the Sample Application
-To follow along with this tutorial, [Download the Sample Project](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/archive/refs/heads/main.zip) from the repository [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore) or clone it using the Git command below.
+To follow along with this tutorial, [Download the Sample Project](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/archive/refs/heads/main.zip) from the repository [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore) or clone it using the Git command below:
```terminal
git clone https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore.git
cd msdocs-app-service-sqldb-dotnetcore
```
## 2 - Create the App Service
-Let's first create the Azure App Service that will host our deployed Web App. There are several different ways to create an App Service depending on your ideal workflow.
+Let's first create the Azure App Service that hosts our deployed Web App. There are several different ways to create an App Service depending on your ideal workflow.
### [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources:
| Instructions | Screenshot |
|:-|--:|
| [!INCLUDE [Create app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-1.png"::: |
| [!INCLUDE [Create app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-2-240px.png" alt-text="A screenshot showing the create button on the App Services page used to create a new web app." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-2.png"::: |
| [!INCLUDE [Create app service step 3](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-3.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-4-240px.png" alt-text="A screenshot of the Spec Picker dialog that allows you to select the App Service plan to use for your web app." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-4.png"::: |
+| [!INCLUDE [Create app service step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-4-240px.png" alt-text="A screenshot of the Spec Picker dialog that lets you select the App Service plan to use for your web app." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-4.png"::: |
| [!INCLUDE [Create app service step 5](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-5-240px.png" alt-text="A screenshot of the main web app create page showing the button to select on to create your web app in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-service-5.png"::: | ### [Azure CLI](#tab/azure-cli)
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+You can run Azure CLI commands in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-First, create a resource group using the [az group create](/cli/azure/group#az_group_create) command. The resource group will act as a container for all of the Azure resources related to this application.
+First, create a resource group using the [az group create](/cli/azure/group#az_group_create) command. The resource group acts as a container for all of the Azure resources related to this application.
```azurecli-interactive
# Use 'az account list-locations --output table' to list available locations close to you
az group create --location eastus --name msdocs-core-sql
```
Next, create an App Service plan using the [az appservice plan create](/cli/azure/appservice/plan#az_appservice_plan_create) command.
-* The `--sku` parameter defines the size (CPU, memory) and cost of the app service plan. This example uses the F1 (Free) service plan. For a full list of App Service plans, view the [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) page.
+* The `--sku` parameter defines the size (CPU, memory) and cost of the app service plan. This example uses the F1 (Free) service plan. For a full list of App Service plans, view the [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) page.
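The plan-creation command itself can be sketched as follows (the plan name here is an example, not taken from the tutorial):

```azurecli-interactive
# Create an App Service plan on the Free (F1) tier.
# The plan name below is an example; choose your own.
az appservice plan create \
    --name msdocs-core-sql-plan \
    --resource-group msdocs-core-sql \
    --sku F1
```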
```azurecli-interactive
# Placeholder app and plan names below are examples; substitute your own.
az webapp create \
    --name <app-name> \
    --resource-group msdocs-core-sql \
    --plan <plan-name>
```
## 3 - Create the Database
-Next let's create the Azure SQL Database that will manage the data in our app.
+Next, let's create the Azure SQL Database that manages the data in our app.
### [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure SQL Database resources:
| Instructions | Screenshot | |:-|--:| | [!INCLUDE [Create database step 1](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-01-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Azure SQL in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-01.png"::: | | [!INCLUDE [Create database step 2](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-02-240px.png" alt-text="A screenshot showing the create button on the SQL Servers page used to create a new database server." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-02.png"::: | | [!INCLUDE [Create database step 3](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-03-240px.png" alt-text="A screenshot showing the form to fill out to create a SQL Server in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-03.png"::: |
-| [!INCLUDE [Create database step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04-240px.png" alt-text="A screenshot showing the form to used to allow other Azure services to access the database." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04.png"::: |
+| [!INCLUDE [Create database step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04-240px.png" alt-text="A screenshot showing the form used to allow other Azure services to connect to the database." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04.png"::: |
| [!INCLUDE [Create database step 5](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-05-240px.png" alt-text="A screenshot showing how to use the search box to find the SQL databases item in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-05.png"::: | | [!INCLUDE [Create database step 6](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-06.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-06-240px.png" alt-text="A screenshot showing the create button in on the SQL databases page." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-06.png"::: | | [!INCLUDE [Create database step 7](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-07.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-07-240px.png" alt-text="A screenshot showing the form to fill out to create a new SQL database in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-07.png"::: | ### [Azure CLI](#tab/azure-cli)
-To create an Azure SQL database, we first must create a SQL Server to host it. A new Azure SQL Server is created by using the [az sql server create ](/cli/azure/sql/server#az_sql_server_create) command.
+First, create an Azure SQL Server to host the database. A new Azure SQL Server is created by using the [az sql server create ](/cli/azure/sql/server#az_sql_server_create) command.
-Replace the *server-name* placeholder with a unique SQL Database name. This name is used as the part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-username* with a username and password of your choice.
+Replace the *server-name* placeholder with a unique SQL Database name. The SQL Database name is used as part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-password* with a username and password of your choice.
```azurecli-interactive
# Placeholder values are examples; choose your own server name and credentials.
az sql server create \
    --name <server-name> \
    --resource-group msdocs-core-sql \
    --location eastus \
    --admin-user <db-username> \
    --admin-password <db-password>
```
-Provisioning a SQL Server may take a few minutes. Once the resource is available, we can create a database with the [az sql db create](/cli/azure/sql/db#az_sql_db_create) command.
+Setting up a SQL Server might take a few minutes. When the resource is available, we can create a database with the [az sql db create](/cli/azure/sql/db#az_sql_db_create) command.
```azurecli-interactive
# The placeholder server name is an example; use the server created earlier.
az sql db create \
    --resource-group msdocs-core-sql \
    --server <server-name> \
    --name coreDb
```
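To confirm the database is available before moving on, you can query its status (a sketch; the placeholder names match the examples above):

```azurecli-interactive
# Returns the provisioning status of the new database (for example, "Online").
az sql db show \
    --resource-group msdocs-core-sql \
    --server <server-name> \
    --name coreDb \
    --query "status"
```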
-We also need to add the following firewall rule to our database server to allow other Azure resources to access it.
+We also need to add the following firewall rule to our database server to allow other Azure resources to connect to it.
```azurecli-interactive
# The rule name is an example. The special 0.0.0.0 start and end address
# allows connections from other Azure services.
az sql server firewall-rule create \
    --resource-group msdocs-core-sql \
    --server <server-name> \
    --name AzureAccess \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 0.0.0.0
```
We're now ready to deploy our .NET app to the App Service.
| Instructions | Screenshot | |:-|--:| | [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01-240px.png" alt-text="A screenshot showing the publish dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.png"::: |
-| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02-240px.png" alt-text="A screenshot showing the how to select the deployment target in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.png"::: |
-| [!INCLUDE [Deploy app service step 3](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03-240px.png" alt-text="A screenshot showing the sign in to Azure dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.png"::: |
+| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02-240px.png" alt-text="A screenshot showing how to select the deployment target in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.png"::: |
+| [!INCLUDE [Deploy app service step 3](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03-240px.png" alt-text="A screenshot showing the sign-in to Azure dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.png"::: |
| [!INCLUDE [Deploy app service step 4](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04-240px.png" alt-text="A screenshot showing the dialog to select the App Service instance to deploy to in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.png"::: | | [!INCLUDE [Deploy app service step 5](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05-240px.png" alt-text="A screenshot showing the publishing profile summary dialog in Visual Studio and the location of the publish button used to publish the app." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.png"::: |
We're now ready to deploy our .NET app to the App Service.
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01-240px.png" alt-text="A screenshot showing howto install the Azure Account and App Service extensions in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01.png"::: |
+| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01-240px.png" alt-text="A screenshot showing how to install the Azure Account and App Service extensions in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01.png"::: |
| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-02-240px.png" alt-text="A screenshot showing how to use the Azure App Service extension to deploy an app to Azure from Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-02.png"::: | ### [Deploy using Local Git](#tab/azure-cli-deploy)
We're now ready to deploy our .NET app to the App Service.
## 5 - Connect the App to the Database
-Next we must connect the App hosted in our App Service to our database using a Connection String.
+Next, we must connect the App hosted in our App Service to our database using a Connection String.
### [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to connect your App Service to the database:
| Instructions | Screenshot | |:-|--:|
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
### [Azure CLI](#tab/azure-cli)
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
+Run Azure CLI commands in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-We can retrieve the Connection String for our database using the [az sql db show-connection-string](/cli/azure/sql/db#az_sql_db_show_connection_string) command. This command allows us to add the Connection String to our App Service configuration settings. Copy this Connection String value for later use.
+We can retrieve the Connection String for our database using the [az sql db show-connection-string](/cli/azure/sql/db#az_sql_db_show_connection_string) command. This command allows us to add the Connection String to our App Service configuration settings. Copy this Connection String value for later use.
```azurecli-interactive
# --client ado.net returns the connection string in the format the .NET app expects.
az sql db show-connection-string \
    --client ado.net \
    --name coreDb \
    --server <your-server-name>
```
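The returned ADO.NET connection string generally resembles the following (a sketch; the exact fields can vary by CLI version):

```Console
Server=tcp:<your-server-name>.database.windows.net,1433;Initial Catalog=coreDb;Persist Security Info=False;User ID=<username>;Password=<password>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```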
-Next, let's assign the Connection String to our App Service using the command below. `MyDbConnection` is the name of the Connection String in our appsettings.json file, which means it will be loaded by our app during startup.
+Next, let's assign the Connection String to our App Service using the command below. `MyDbConnection` is the name of the Connection String in our appsettings.json file, which means it gets loaded by our app during startup.
-Make sure to replace the username and password in the connection string with your own before running the command.
+Replace the username and password in the connection string with your own before running the command.
```azurecli-interactive
# Substitute your app name and the connection string from the previous step.
az webapp config connection-string set \
    --name <app-name> \
    --resource-group msdocs-core-sql \
    --connection-string-type SQLAzure \
    --settings MyDbConnection="<your-connection-string>"
```
## 6 - Generate the Database Schema
-To generate our database schema, we need to configure a firewall rule on our Database Server. This rule will allow our local computer to connect to Azure. For this step you'll need to know your local computer's IP Address, which you can discover [by clicking here](https://whatismyipaddress.com/)
+To generate our database schema, we need to set up a firewall rule on our Database Server. This rule allows our local computer to connect to Azure. For this step, you'll need to know your local computer's public IP address, which you can look up at [whatismyipaddress.com](https://whatismyipaddress.com/).
### [Azure portal](#tab/azure-portal)
az sql server firewall-rule create --resource-group msdocs-core-sql --server <you
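In full, the CLI form of the local-access rule can be sketched as follows (the rule name and IP placeholders are examples):

```azurecli-interactive
# Allow your local computer's public IP address to reach the database server.
az sql server firewall-rule create \
    --resource-group msdocs-core-sql \
    --server <server-name> \
    --name LocalAccess \
    --start-ip-address <your-ip-address> \
    --end-ip-address <your-ip-address>
```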
-Next we need to update the appsettings.json file in our local app code with the Connection String of our Azure SQL Database. This will allow us to run migrations locally against our database hosted in Azure. Make sure to replace the username and password placeholders with the values you chose when creating your database.
+Next, update the appsettings.json file in our local app code with the Connection String of our Azure SQL Database. The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
```json
"ConnectionStrings": {
    "MyDbConnection": "<your-connection-string>"
}
```
-Finally, run the commands below to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database.
+Finally, run the following commands to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database:
```dotnetcli
dotnet tool install -g dotnet-ef
dotnet ef migrations add InitialCreate
dotnet ef database update
```
-After the migration completes, your Azure SQL database will have the correct schema.
+After the migration finishes, your Azure SQL database has the correct schema.
If you receive an error stating `Client with IP address xxx.xxx.xxx.xxx is not allowed to access the server`, that means the IP address you entered into your Azure firewall rule is incorrect. To fix this issue, update the Azure firewall rule with the IP address provided in the error message. ## 7 - Browse the Deployed Application and File Directory
-Navigate back to your web app in the browser. You can always get back to your site by clicking the **Browse** link at the top of the App Service overview page. If you refresh the page, you can now create todos and see them displayed on the home page. Congratulations!
+Go back to your web app in the browser. You can always get back to your site by selecting the **Browse** link at the top of the App Service overview page. If you refresh the page, you can now create todos and see them displayed on the home page. Congratulations!
:::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/app-success.png" alt-text="A screenshot showing the app successfully deployed to Azure." ::: Next, let's take a closer look at the deployed files of our app using a tool called Kudu.
-Azure App Service provides a web-based diagnostics console named Kudu. Kudu allows you to examine the server-hosting environment for your web app. You can view the files deployed to Azure, review the deployment history, and even open an SSH session into the hosting environment.
+Azure App Service provides a web-based diagnostics console named Kudu. Kudu lets you examine the server-hosting environment, view the files deployed to Azure, review the deployment history, and even open an SSH session into the hosting environment.
-To access Kudu, navigate to one of the following URLs. You'll need to sign into the Kudu site with your Azure credentials.
+To use Kudu, go to one of the following URLs. You'll need to sign into the Kudu site with your Azure credentials.
* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`
* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`
-From the main page in Kudu, you can access information about the application-hosting environment, app settings, deployments, and browse the files in the wwwroot directory.
+From the main page in Kudu, you can view information about the application-hosting environment, review app settings and deployments, and browse the files in the wwwroot directory.
:::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/kudu-main-page.png" alt-text="A screenshot showing the Kudu admin page." :::
Azure App Service captures messages logged to the console to assist you in diagn
| Instructions | Screenshot | |:-|--:| | [!INCLUDE [Stream logs from Visual Studio Code 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-1-240px.png" alt-text="A screenshot showing the menu item used to enable application logging for a web app in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-1.png"::: |
-| [!INCLUDE [Stream logs from Visual Studio Code 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-2-240px.png" alt-text="A screenshot showing the output stream of an application log in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-2.png"::: |
+| [!INCLUDE [Stream logs from Visual Studio Code 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-2-240px.png" alt-text="A screenshot showing the output stream of an application log in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-stream-logs-2.png"::: |
### [Azure CLI](#tab/azure-cli-logs)
az webapp log tail \
--resource-group $RESOURCE_GROUP_NAME
```
-Refresh the home page in the app or attempt other requests to generate some log messages. The output should look similar to the following.
+Refresh the home page in the app or attempt other requests to generate some log messages. The output should look similar to the following.
```Console
2022-01-06T22:37:11 Welcome, you are now connected to log-streaming service. The default timeout is 2 hours. Change the timeout with the App Setting SCM_LOGSTREAM_TIMEOUT (in seconds).
```
Refresh the home page in the app or attempt other requests to generate some log
## Clean up resources
-When you are finished, you can delete all of the resources from Azure by deleting the resource group for the application. This will delete all of the resources contained inside the group.
+When you're finished, you can delete all of the resources from Azure by deleting the resource group for the application. Deleting the resource group also deletes all of the resources it contains.
### [Azure portal](#tab/azure-portal-resources)
-Follow these steps while signed-in to the Azure portal to delete a resource group.
+Follow these steps while signed-in to the Azure portal to delete a resource group:
| Instructions | Screenshot | |:-|--:|
Follow these steps while signed-in to the Azure portal to delete a resource grou
### [Azure CLI](#tab/azure-cli-resources)
-You can delete the resource group you created by using the [az group delete](/cli/azure/group#az_group_delete) command. This will delete all of the resources contained inside of it.
+You can delete the resource group you created by using the [az group delete](/cli/azure/group#az_group_delete) command. Deleting the resource group deletes all of the resources contained within it.
```azurecli
az group delete --name msdocs-core-sql
```
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
The following sections describe the minimum required permissions needed for enab
|Create / edit saved search | Microsoft.OperationalInsights/workspaces/write | Workspace |
|Create / edit scope config | Microsoft.OperationalInsights/workspaces/write | Workspace|
-## Custom Azure Automation Contributor role
-
-Microsoft intends to remove the Automation account rights from the Log Analytics Contributor role. Currently, the built-in [Log Analytics Contributor](#log-analytics-contributor) role described above can escalate privileges to the subscription [Contributor](./../role-based-access-control/built-in-roles.md#contributor) role. Since Automation account Run As accounts are initially configured with Contributor rights on the subscription, it can be used by an attacker to create new runbooks and execute code as a Contributor on the subscription.
-
-As a result of this security risk, we recommend you don't use the Log Analytics Contributor role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. Perform the following steps to create this custom role.
-
-### Create using the Azure portal
-
-Perform the following steps to create the Azure Automation custom role in the Azure portal. If you would like to learn more, see [Azure custom roles](./../role-based-access-control/custom-roles.md).
-
-1. Copy and paste the following JSON syntax into a file. Save the file on your local machine or in an Azure storage account. In the JSON file, replace the value for the **assignableScopes** property with the subscription GUID.
-
- ```json
- {
- "properties": {
- "roleName": "Automation Account Contributor (Custom)",
- "description": "Allows access to manage Azure Automation and its resources",
- "assignableScopes": [
- "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.Insights/metrics/read",
- "Microsoft.Insights/diagnosticSettings/*",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Support/*"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
- }
- ```
-
-1. Complete the remaining steps as outlined in [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md#start-from-json). For [Step 3:Basics](../role-based-access-control/custom-roles-portal.md#step-3-basics), note the following:
-
- - In the **Custom role name** field, enter **Automation account Contributor (custom)** or a name matching your naming standards.
- - For **Baseline permissions**, select **Start from JSON**. Then select the custom JSON file you saved earlier.
-
-1. Complete the remaining steps, and then review and create the custom role. It can take a few minutes for your custom role to appear everywhere.
-
-### Create using PowerShell
-
-Perform the following steps to create the Azure Automation custom role with PowerShell. If you would like to learn more, see [Azure custom roles](./../role-based-access-control/custom-roles.md).
-
-1. Copy and paste the following JSON syntax into a file. Save the file on your local machine or in an Azure storage account. In the JSON file, replace the value for the **AssignableScopes** property with the subscription GUID.
-
- ```json
- {
- "Name": "Automation account Contributor (custom)",
- "Id": "",
- "IsCustom": true,
- "Description": "Allows access to manage Azure Automation and its resources",
- "Actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.Insights/metrics/read",
- "Microsoft.Insights/diagnosticSettings/*",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Support/*"
- ],
- "NotActions": [],
- "AssignableScopes": [
- "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX"
- ]
- }
- ```
-
-1. Complete the remaining steps as outlined in [Create or update Azure custom roles using Azure PowerShell](./../role-based-access-control/custom-roles-powershell.md#create-a-custom-role-with-json-template). It can take a few minutes for your custom role to appear everywhere.
-
## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
You can create [Azure custom roles](/azure/role-based-access-control/custom-roles) in Automation and grant the following permissions to Hybrid Worker Groups and Hybrid Workers:
Update Management can be used to assess and schedule update deployments to machi
|**Resource** |**Role** |**Scope** | ||||
-|Automation account |[Custom Azure Automation Contributor role](#custom-azure-automation-contributor-role) |Automation account |
|Automation account |Virtual Machine Contributor |Resource Group for the account |
|Log Analytics workspace | Log Analytics Contributor|Log Analytics workspace |
|Log Analytics workspace |Log Analytics Reader|Subscription|
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
To be able to create or update the Automation account, you need to be a member o
- [Owner](./automation-role-based-access-control.md#owner)
- [Contributor](./automation-role-based-access-control.md#contributor)
-- [Custom Azure Automation Contributor](./automation-role-based-access-control.md#custom-azure-automation-contributor-role)

To learn more about the Azure Resource Manager and Classic deployment models, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
Role-based access control is available with Azure Resource Manager to grant perm
If you have strict security controls for permission assignment in resource groups, you need to assign the Run As account membership to the **Contributor** role in the resource group. > [!NOTE]
-> We recommend you don't use the **Log Analytics Contributor** role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. For more information, see [Custom Azure Automation Contributor role](./automation-role-based-access-control.md#custom-azure-automation-contributor-role).
+> We recommend you don't use the **Log Analytics Contributor** role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account.
## Runbook authentication with Hybrid Runbook Worker
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
To recover an Automation account, ensure that the following conditions are met:
- Before you attempt to recover a deleted Automation account, ensure that resource group for that account exists. > [!NOTE]
-> If the resource group of the Automation account is deleted, to recover, you must recreate the resource group with the same name. After a few hours, the Automation account is repopulated in the list of deleted accounts. Then you can restore the account.
+> * If the resource group of the Automation account was deleted, you must recreate a resource group with the same name before you can recover the account. After a few hours, the Automation account is repopulated in the list of deleted accounts, and you can then restore it.
+> * The Automation account appears in the list of deleted accounts even when its resource group isn't present. However, if the resource group isn't present, the restore operation fails with the error *Account restore failed*.
### Recover a deleted Automation account
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
Through [Arc-enabled servers](../azure-arc/servers/overview.md), it provides a c
Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Common scenarios include:
* **Schedule tasks** - stop VMs or services at night and turn on during the day, weekly or monthly recurring maintenance workflows.
-* **Write runbooks** - Author PowerShell, PowerShell Workflow, graphical, Python 2 and 3, and DSC runbooks in common languages.
* **Build and deploy resources** - Deploy virtual machines across a hybrid environment using runbooks and Azure Resource Manager templates. Integrate into development tools, such as Jenkins and Azure DevOps.
-* **Configure VMs** - Assess and configure Windows and Linux machines with configurations for the infrastructure and application.
-* **Share knowledge** - Transfer knowledge into the system on how your organization delivers and maintains workloads.
-* **Retrieve inventory** - Get a complete inventory of deployed resources for targeting, reporting, and compliance.
-* **Find changes** - Identify and isolate machine changes that can cause misconfiguration and improve operational compliance. Remediate or escalate them to management systems.
* **Periodic maintenance** - execute tasks that need to run at set intervals, like purging stale or old data, or reindexing a SQL database.
* **Respond to alerts** - Orchestrate a response when cost-based, system-based, service-based, and/or resource utilization alerts are generated.
* **Hybrid automation** - Manage or automate on-premises servers and services like SQL Server, Active Directory, SharePoint Server, etc.
* **Azure resource lifecycle management** - for IaaS and PaaS services.
+ - Resource provisioning and deprovisioning.
+ - Add correct tags, locks, NSGs, UDRs per business rules.
+ - Resource group creation, deletion & update.
+ - Start container group.
+ - Register DNS record.
+ - Encrypt Virtual machines.
+ - Configure disk (disk snapshot, delete old snapshots).
+ - Subscription management.
+ - Start-stop resources to save cost.
+* **Monitoring & integration** with first-party (through Azure Monitor) or third-party external systems.
+ - Ensure resource creation/deletion operations are captured in SQL.
+ - Send resource usage data to web API.
+ - Send monitoring data to ServiceNow, Event Hub, New Relic and so on.
+ - Collect and store information about Azure resources.
+ - Perform SQL monitoring checks & reporting.
+ - Check website availability.
* **Dev/test automation scenarios** - Start and stop resources, scale resources, etc.
* **Governance related automation** - Automatically apply or update tags, locks, etc.
* **Azure Site Recovery** - orchestrate pre/post scripts defined in a Site Recovery DR workflow.
* **Azure Virtual Desktop** - orchestrate scaling of VMs or start/stop VMs based on utilization.
+* **Configure VMs** - Assess and configure Windows and Linux machines with configurations for the infrastructure and application.
+* **Retrieve inventory** - Get a complete inventory of deployed resources for targeting, reporting, and compliance.
+* **Find changes** - Identify and isolate machine changes that can cause misconfiguration and improve operational compliance. Remediate or escalate them to management systems.
+ Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them:
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
For more information, see [Use of customer-managed keys](automation-secure-asset
Microsoft intends to remove the Automation account rights from the Log Analytics Contributor role. Currently, the built-in [Log Analytics Contributor](./automation-role-based-access-control.md#log-analytics-contributor) role can escalate privileges to the subscription [Contributor](./../role-based-access-control/built-in-roles.md#contributor) role. Since Automation account Run As accounts are initially configured with Contributor rights on the subscription, it can be used by an attacker to create new runbooks and execute code as a Contributor on the subscription.
-As a result of this security risk, we recommend you don't use the Log Analytics Contributor role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. For implementation steps, see [Custom Azure Automation Contributor role](./automation-role-based-access-control.md#custom-azure-automation-contributor-role).
+As a result of this security risk, we recommend you don't use the Log Analytics Contributor role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account.
### Support for Automation and State Configuration available in West US 3
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
If you are logged into Azure CLI using a service principal, to enable this featu
| `--resource-group, -g` | Resource group of the custom location |
| `--namespace` | Namespace in the cluster bound to the custom location being created |
| `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) |
-| `--cluster-extension-ids` | Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-seperated list of the cluster extension IDs |
+| `--cluster-extension-ids` | Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
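As a sketch with hypothetical subscription and resource names, the required parameters above combine like this; the space-separated list format of `--cluster-extension-ids` is the part to note. The command is built and echoed here rather than executed:

```shell
# Hypothetical IDs for illustration only.
SUB="00000000-0000-0000-0000-000000000000"
HOST_ID="/subscriptions/$SUB/resourceGroups/my-rg/providers/Microsoft.Kubernetes/connectedClusters/my-cluster"
# Extension instance IDs live under the connected cluster resource.
EXT_MONITOR="$HOST_ID/providers/Microsoft.KubernetesConfiguration/extensions/azuremonitor-containers"
EXT_KEYVAULT="$HOST_ID/providers/Microsoft.KubernetesConfiguration/extensions/akvsecretsprovider"

# --cluster-extension-ids takes the IDs as a space-separated list.
CMD="az customlocation create -n my-location -g my-rg --namespace my-namespace --host-resource-id $HOST_ID --cluster-extension-ids $EXT_MONITOR $EXT_KEYVAULT"
echo "$CMD"
```

Remove the `echo` indirection to run the command for real against your own cluster.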
**Optional parameters**
az customlocation update -n <customLocationName> -g <resourceGroupName> --namesp
| Parameter name | Description |
|--|--|
-| `--cluster-extension-ids` | Associate new cluster extensions to this custom location by providing Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-seperated list of the cluster extension IDs |
+| `--cluster-extension-ids` | Associate new cluster extensions to this custom location by providing Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
| `--tags` | Add new tags in addition to existing tags. Space-separated list of tags: key[=value] [key[=value] ...]. |
## Patch a custom location
az customlocation patch -n <customLocationName> -g <resourceGroupName> --namespa
| Parameter name | Description |
|--|--|
-| `--cluster-extension-ids` | Associate new cluster extensions to this custom location by providing Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-seperated list of the cluster extension IDs |
+| `--cluster-extension-ids` | Associate new cluster extensions to this custom location by providing Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
| `--tags` | Add new tags in addition to existing tags. Space-separated list of tags: key[=value] [key[=value] ...]. |
## Delete a custom location
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
| Extension | Description |
| --------- | ----------- |
-| [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. |
+| [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. |
+| [Azure Policy](../../governance/policy/concepts/policy-for-kubernetes.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Azure Policy extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper), an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |
| [Azure Key Vault Secrets Provider](tutorial-akv-secrets-provider.md) | The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. |
-| [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) | Gathers information related to security like audit log data from the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. |
+| [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Gathers information related to security like audit log data from the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. |
| [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) | Deploys Open Service Mesh on the cluster and enables capabilities like mTLS security, fine-grained access control, traffic shifting, monitoring with Azure Monitor or with open-source add-ons of Prometheus and Grafana, tracing with Jaeger, and integration with an external certificate management solution. |
| [Azure Arc-enabled Data Services](../../azure-arc/kubernetes/custom-locations.md#create-custom-location) | Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. |
| [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) | Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. |
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
You can create a service principal in the Azure portal or by using Azure PowerShell.
> [!NOTE]
-> To assign Arc-enabled server roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
+> To create a service principal, your Azure Active Directory tenant needs to allow users to register applications. If it does not, your account must be a member of the **Application Administrator** or **Cloud Application Administrator** administrative role. See [Delegate app registration permissions in Azure Active Directory](../../active-directory/roles/delegate-app-roles.md) for more information about tenant-level requirements. To assign Arc-enabled server roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
### Azure portal
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to configure Azure Functions with a virtual network
description: Article that shows you how to perform certain virtual networking tasks for Azure Functions.
-Previously updated : 3/13/2021
+Last updated : 03/04/2022
# How to configure Azure Functions with a virtual network
This article shows you how to perform tasks related to configuring your function
When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. When configuring your storage account with private endpoints, public access to your function app will be automatically disabled, and your function app will only be accessible through the virtual network.
> [!NOTE]
-> This feature currently works for all Windows virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. ASEv3 is not supported yet. It is also supported with private DNS for Linux virtual network-supported SKUs. Consumption and custom DNS for Linux plans aren't supported.
+> This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. ASEv3 is not supported yet. The Consumption plan isn't supported.
To set up a function with a storage account restricted to a private network:
To set up a function with a storage account restricted to a private network:
| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. |
| `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. |
| `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. |
- | `WEBSITE_VNET_ROUTE_ALL` | 1 | Forces all outbound traffic through the virtual network. Required when the storage account is using private endpoint connections. |
1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
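As an illustration with placeholder values (the account name, key, and share name are hypothetical), the three settings in the table above could look like this when exported as application settings JSON:

```json
[
  {
    "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
    "value": "DefaultEndpointsProtocol=https;AccountName=mysecuredstorage;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net"
  },
  { "name": "WEBSITE_CONTENTSHARE", "value": "myfunctionappshare" },
  { "name": "WEBSITE_CONTENTOVERVNET", "value": "1" }
]
```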
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Requires that [FUNCTIONS\_EXTENSION\_VERSION](functions-app-settings.md#function
## FUNCTIONS\_WORKER\_PROCESS\_COUNT
-Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by FUNCTIONS\_WORKER\_PROCESS\_COUNT is reached. Using multiple language worker processes is not the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all non-.NET languages.
+Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by FUNCTIONS\_WORKER\_PROCESS\_COUNT is reached. Using multiple language worker processes is not the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all language runtimes, except for .NET running in process (`dotnet`).
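The distribution behavior described above can be illustrated with a toy sketch (this is not the Functions host itself, just a hypothetical analogy): invocations spread across a fixed pool of OS processes, the way the host spawns language worker processes up to the configured count.

```python
# Illustrative sketch only: a fixed pool of worker processes handling
# "invocations", analogous to FUNCTIONS_WORKER_PROCESS_COUNT=4.
from multiprocessing import Pool
import os

def handle_invocation(n: int) -> tuple:
    # CPU-bound stand-in for one function invocation; returns the
    # handling process's PID alongside the computed result.
    return os.getpid(), sum(i * i for i in range(n))

if __name__ == "__main__":
    worker_process_count = 4  # plays the role of FUNCTIONS_WORKER_PROCESS_COUNT
    with Pool(processes=worker_process_count) as pool:
        results = pool.map(handle_invocation, [50_000] * 8)
    pids = {pid for pid, _ in results}
    print(f"8 invocations handled by {len(pids)} worker process(es)")
```

With a mix of CPU-bound and I/O-bound work, spreading invocations across processes keeps a slow CPU-bound call from blocking others in the same process.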
|Key|Sample value|
|---|---|
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
Title: Azure Functions networking options
description: An overview of all networking options available in Azure Functions.
Previously updated : 1/21/2021
Last updated : 03/04/2022
Azure Functions supports two kinds of virtual network integration:
Virtual network integration in Azure Functions uses shared infrastructure with App Service web apps. To learn more about the two types of virtual network integration, see:
-* [Regional virtual network Integration](../app-service/overview-vnet-integration.md#regional-virtual-network-integration)
-* [Gateway-required virtual network Integration](../app-service/overview-vnet-integration.md#gateway-required-virtual-network-integration)
+* [Regional virtual network integration](../app-service/overview-vnet-integration.md#regional-virtual-network-integration)
+* [Gateway-required virtual network integration](../app-service/overview-vnet-integration.md#gateway-required-virtual-network-integration)
-To learn how to set up virtual network integration, see [Enable Vnet Integration](#enable-vnet-integration).
+To learn how to set up virtual network integration, see [Enable virtual network integration](#enable-virtual-network-integration).
-## Enable VNet Integration
+## Enable virtual network integration
1. Go to the **Networking** blade in the Function App portal. Under **VNet Integration**, select **Click here to configure**.
To learn how to set up virtual network integration, see [Enable Vnet Integration
:::image type="content" source="./media/functions-networking-options/vnet-int-function-app.png" alt-text="Select VNet Integration":::
-1. The drop-down list contains all of the Azure Resource Manager virtual networks in your subscription in the same region. Select the VNet you want to integrate with.
+1. The drop-down list contains all of the Azure Resource Manager virtual networks in your subscription in the same region. Select the virtual network you want to integrate with.
:::image type="content" source="./media/functions-networking-options/vnet-int-add-vnet-function-app.png" alt-text="Select the VNet":::
- * The Functions Premium Plan only supports regional VNet integration. If the VNet is in the same region, either create a new subnet or select an empty, pre-existing subnet.
- * To select a VNet in another region, you must have a VNet gateway provisioned with point to site enabled. VNet integration across regions is only supported for Dedicated plans.
+ * The Functions Premium Plan only supports regional virtual network integration. If the virtual network is in the same region, either create a new subnet or select an empty, pre-existing subnet.
+ * To select a virtual network in another region, you must have a virtual network gateway provisioned with point-to-site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings will work with regional virtual network integration.
-During the integration, your app is restarted. When integration is finished, you'll see details on the VNet you're integrated with. By default, Route All will be enabled, and all traffic will be routed into your VNet.
+During the integration, your app is restarted. When integration is finished, you'll see details on the virtual network you're integrated with. By default, Route All will be enabled, and all traffic will be routed into your virtual network.
If you wish for only your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) to be routed, follow the steps in the [App Service documentation](../app-service/overview-vnet-integration.md#application-routing).
## Regional virtual network integration
-Using regional VNet Integration enables your app to access:
+Using regional virtual network integration enables your app to access:
-* Resources in a VNet in the same region as your app.
-* Resources in VNets peered to the VNet your app is integrated with.
+* Resources in the same virtual network as your app.
+* Resources in virtual networks peered to the virtual network your app is integrated with.
* Service endpoint-secured services.
* Resources across Azure ExpressRoute connections.
-* Resources in the VNet you're integrated with.
* Resources across peered connections, which include Azure ExpressRoute connections.
* Private endpoints
-When you use VNet Integration with VNets in the same region, you can use the following Azure networking features:
+When you use regional virtual network integration, you can use the following Azure networking features:
-* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use VNet Integration to provide inbound access to your app.
+* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app.
* **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want.
> [!NOTE]
+> When you route all of your outbound traffic into your virtual network, it's subject to the NSGs and UDRs that are applied to your integration subnet. When your function app is virtual network integrated, its outbound traffic to public IP addresses is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
+> When you route all of your outbound traffic into your virtual network, it's subject to the NSGs and UDRs that are applied to your integration subnet. When virtual network integrated, your function app's outbound traffic to public IP addresses is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
>
-> Regional VNet integration isn't able to use port 25.
+> Regional virtual network integration isn't able to use port 25.
-There are some limitations with using VNet Integration with VNets in the same region:
+There are some limitations with using virtual network integration:
-* You can't reach resources across global peering connections.
-* The feature is available from all App Service scale units in Premium V2 and Premium V3. It's also available in Standard but only from newer App Service scale units. If you are on an older scale unit, you can only use the feature from a Premium V2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium V3 App Service plan. Those plans are only supported on our newest scale units. You can scale down if you desire after that.
+* The feature is available from all App Service deployments in Premium V2 and Premium V3. It's also available in Standard but only from newer App Service deployments. If you are on an older deployment, you can only use the feature from a Premium V2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium V3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you desire after that.
* The integration subnet can be used by only one App Service plan.
* The feature can't be used by Isolated plan apps that are in an App Service Environment.
-* The feature requires an unused subnet that's a /28 or larger in an Azure Resource Manager VNet.
-* The app and the VNet must be in the same region.
-* You can't delete a VNet with an integrated app. Remove the integration before you delete the VNet.
-* You can have only one regional VNet Integration per App Service plan. Multiple apps in the same App Service plan can use the same VNet.
-* You can't change the subscription of an app or a plan while there's an app that's using regional VNet Integration.
-* Your app can't resolve addresses in Azure DNS Private Zones without configuration changes.
+* The feature requires an unused subnet that's a /28 or larger in an Azure Resource Manager virtual network.
+* The app and the virtual network must be in the same region.
+* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network.
+* You can have only one regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same integration subnet.
+* You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration.
## Subnets
-VNet Integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used.
+Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used.
When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the impact this has on horizontal scale:
When you scale up or down in size, the required address space is doubled for a s
Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is required for Windows and Linux respectively.
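The sizing rule above can be sketched as back-of-the-envelope arithmetic (assumptions: Azure reserves 5 addresses per subnet, each plan instance consumes one address, and address needs briefly double during scale operations; the product's published table may differ slightly):

```python
# Rough estimate of supported plan instances for a given subnet size.
def max_scale_for_subnet(prefix_len: int) -> int:
    total_addresses = 2 ** (32 - prefix_len)
    available = total_addresses - 5   # 5 addresses reserved by Azure
    return available // 2             # halved to absorb scale-time doubling

for prefix in (28, 26, 24):
    print(f"/{prefix}: about {max_scale_for_subnet(prefix)} instances")
```

This is why a /28 is too tight for a Premium plan that scales out, while a /24 leaves ample headroom on Windows.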
-When you want your apps in another plan to reach a VNet that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing VNet Integration.
+When you want apps in one plan to reach a virtual network that apps in another plan already connect to, select a different subnet than the one used by the pre-existing virtual network integration.
The feature is fully supported for both Windows and Linux apps, including [custom containers](../app-service/configure-custom-container.md). All of the behaviors act the same between Windows apps and Linux apps.
### Service endpoints
-To provide a higher level of security, you can restrict a number of Azure services to a virtual network by using service endpoints. Regional VNet Integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
+To provide a higher level of security, you can restrict a number of Azure services to a virtual network by using service endpoints. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
-1. Configure regional VNet Integration with your function app to connect to a specific subnet.
+1. Configure regional virtual network integration with your function app to connect to a specific subnet.
1. Go to the destination service and configure service endpoints against the integration subnet. To learn more, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
### Network security groups
-You can use network security groups to block inbound and outbound traffic to resources in a VNet. An app that uses regional VNet integration can use a [network security group][VNETnsg] to block outbound traffic to resources in your VNet or the internet. To block traffic to public addresses, you must have VNet integration with Route All enabled. The inbound rules in an NSG don't apply to your app because VNet Integration affects only outbound traffic from your app.
+You can use network security groups to block inbound and outbound traffic to resources in a virtual network. An app that uses regional virtual network integration can use a [network security group][VNETnsg] to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, you must have virtual network integration with Route All enabled. The inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app.
-To control inbound traffic to your app, use the Access Restrictions feature. An NSG that's applied to your integration subnet is in effect regardless of any routes applied to your integration subnet. If your function app is VNet integrated with Route All enabled, and you don't have any routes that affect public address traffic on your integration subnet, all of your outbound traffic is still subject to NSGs assigned to your integration subnet. When Route All isn't enabled, NSGs are only applied to RFC1918 traffic.
+To control inbound traffic to your app, use the Access Restrictions feature. An NSG that's applied to your integration subnet is in effect regardless of any routes applied to your integration subnet. If your function app is virtual network integrated with Route All enabled, and you don't have any routes that affect public address traffic on your integration subnet, all of your outbound traffic is still subject to NSGs assigned to your integration subnet. When Route All isn't enabled, NSGs are only applied to RFC1918 traffic.
### Routes
You can use route tables to route outbound traffic from your app to wherever you
If you want to route all outbound traffic on-premises, you can use a route table to send all outbound traffic to your ExpressRoute gateway. If you do route traffic to a gateway, be sure to set routes in the external network to send any replies back.
-Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. By default, BGP routes affect only your RFC1918 destination traffic. When your function app is VNet integrated with Route All enabled, all outbound traffic can be affected by your BGP routes.
+Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. By default, BGP routes affect only your RFC1918 destination traffic. When your function app is virtual network integrated with Route All enabled, all outbound traffic can be affected by your BGP routes.
### Azure DNS private zones
-After your app integrates with your VNet, it uses the same DNS server that your VNet is configured with. By default, your app won't work with Azure DNS private zones. To work with Azure DNS private zones, you need to add the following app settings:
-
-- `WEBSITE_VNET_ROUTE_ALL` with value `1`
-
-This setting sends all of your outbound calls from your app into your VNet and enables your app to access an Azure DNS private zone. With these settings, your app can use Azure DNS by querying the DNS private zone at the worker level.
+After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with and will work with the Azure DNS private zones linked to the virtual network.
### Private Endpoints
If you want to make calls to [Private Endpoints][privateendpoints], then you must make sure that your DNS lookups resolve to the private endpoint. You can enforce this behavior in one of the following ways:
-* Integrate with Azure DNS private zones. When your VNet doesn't have a custom DNS server, this is done automatically.
+* Integrate with Azure DNS private zones. When your virtual network doesn't have a custom DNS server, this is done automatically.
* Manage the private endpoint in the DNS server used by your app. To do this, you must know the private endpoint address and then point the endpoint you are trying to reach to that address using an A record.
* Configure your own DNS server to forward to [Azure DNS private zones](#azure-dns-private-zones).
If you want to make calls to [Private Endpoints][privateendpoints], then you mus
When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints.
-This feature is supported for all Windows virtual network-supported SKUs in the Dedicated (App Service) plan and for the Premium plans. It is also supported with private DNS for Linux virtual network-supported SKUs. The Consumption plan and custom DNS on Linux plans aren't supported. To learn how to set up a function with a storage account restricted to a private network, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
+This feature is supported for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for the Premium plans. The Consumption plan isn't supported. To learn how to set up a function with a storage account restricted to a private network, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
## Use Key Vault references
To learn more, see the [App Service documentation for Hybrid Connections](../app
Outbound IP restrictions are available in a Premium plan, App Service plan, or App Service Environment. You can configure outbound restrictions for the virtual network where your App Service Environment is deployed.
-When you integrate a function app in a Premium plan or an App Service plan with a virtual network, the app can still make outbound calls to the internet by default. By integrating your function app with a VNet with Route All enabled, you force all outbound traffic to be sent into your virtual network, where network security group rules can be used to restrict traffic.
+When you integrate a function app in a Premium plan or an App Service plan with a virtual network, the app can still make outbound calls to the internet by default. By integrating your function app with a virtual network with Route All enabled, you force all outbound traffic to be sent into your virtual network, where network security group rules can be used to restrict traffic.
To learn how to control the outbound IP using a virtual network, see [Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
By default, keys are stored in a Blob storage container in the account provided
|||||
|Different storage account | `AzureWebJobsSecretStorageSas` | `<BLOB_SAS_URL>` | Stores keys in Blob storage of a second storage account, based on the provided SAS URL. Keys are encrypted before being stored using a secret unique to your function app. |
|File system | `AzureWebJobsSecretStorageType` | `files` | Keys are persisted on the file system, encrypted before storage using a secret unique to your function app. |
-|Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file). |
+|Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`, `Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file). |
|Kubernetes Secrets |`AzureWebJobsSecretStorageType`<br/>`AzureWebJobsKubernetesSecretName` (optional) | `kubernetes`<br/>`<SECRETS_RESOURCE>` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The Azure Functions Core Tools generates the values automatically when deploying to Kubernetes.|
+#### Using Key Vault in Functions v4
+
+The following application settings configure Azure Key Vault as the secret repository in Functions v4:
+
+##### System-assigned managed identity
+
+| Setting | Value |
+|---|---|
+| `AzureWebJobsSecretStorageType` | `keyvault` |
+| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
+
+##### User-assigned managed identity
+
+| Setting | Value |
+|---|---|
+| `AzureWebJobsSecretStorageType` | `keyvault` |
+| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
+| `AzureWebJobsSecretStorageKeyVaultClientId` | `<CLIENT_ID>` |
+
+##### App registration
+
+| Setting | Value |
+|---|---|
+| `AzureWebJobsSecretStorageType` | `keyvault` |
+| `AzureWebJobsSecretStorageKeyVaultUri` | `<VAULT_URI>` |
+| `AzureWebJobsSecretStorageKeyVaultTenantId` | `<TENANT_ID>` |
+| `AzureWebJobsSecretStorageKeyVaultClientId` | `<CLIENT_ID>` |
+| `AzureWebJobsSecretStorageKeyVaultClientSecret` | `<CLIENT_SECRET>` |
+
+> [!NOTE]
+> The Vault URI should be the full value displayed in the Key Vault overview tab, including `https://`.
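For the system-assigned managed identity case above, the application settings can be sketched as the following JSON fragment (the vault name `contoso-vault` is a placeholder, not a value from this article):

```json
{
  "AzureWebJobsSecretStorageType": "keyvault",
  "AzureWebJobsSecretStorageKeyVaultUri": "https://contoso-vault.vault.azure.net/"
}
```

The user-assigned identity and app registration cases add the `AzureWebJobsSecretStorageKeyVaultClientId`, `AzureWebJobsSecretStorageKeyVaultTenantId`, and `AzureWebJobsSecretStorageKeyVaultClientSecret` settings on top of this fragment, as shown in the tables.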
+
### Authentication/authorization

While function keys can provide some mitigation for unwanted access, the only way to truly secure your function endpoints is by implementing positive authentication of clients accessing your functions. You can then make authorization decisions based on identity.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 03/02/2022 Last updated : 03/07/2022 # Compare Azure Government and global Azure
Table below lists API endpoints in Azure vs. Azure Government for accessing and
## Service availability
-Microsoft's goal for Azure Government is to match service availability in Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, you are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia) for the latest, up-to-date information on service availability.
+Microsoft's goal for Azure Government is to match service availability in Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, you are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true) for the latest, up-to-date information on service availability.
In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in this article based on the main service categories outlined in the [online directory of Azure services](https://azure.microsoft.com/services/). Other considerations for service deployment and usage in Azure Government are also provided.

## AI + machine learning
-This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Bot Service](/azure/bot-service/)
The following Translator **features are not currently available** in Azure Gover
## Analytics
-This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure HDInsight](../hdinsight/index.yml)
To learn how to embed analytical content within your business process applicatio
## Databases
-This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Database for MySQL](../mysql/index.yml)
The following Azure SQL Managed Instance **features are not currently available*
## Developer tools
-This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Enterprise Dev/Test subscription offer](https://azure.microsoft.com/offers/ms-azr-0148p/)
This section outlines variations and considerations when using Developer tools i
## Identity
-This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
The following features have known limitations in Azure Government:
## Management and governance
-This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Automation](../automation/index.yml)
You need to open some **outgoing ports** in your server's firewall to allow the
## Media
-This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Media Services](../media-services/index.yml)
For Azure Media Services v3 feature variations in Azure Government, see [Azure M
## Migration
-This section outlines variations and considerations when using Migration services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,cost-management,azure-migrate,site-recovery&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Migration services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,cost-management,azure-migrate,site-recovery&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Migrate](../migrate/index.yml)
For more information, see [Azure Migrate support matrix](../migrate/migrate-supp
## Networking
-This section outlines variations and considerations when using Networking services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-bastion,frontdoor,virtual-wan,dns,ddos-protection,cdn,azure-firewall,network-watcher,load-balancer,vpn-gateway,expressroute,application-gateway,virtual-network&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Networking services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-bastion,frontdoor,virtual-wan,dns,ddos-protection,cdn,azure-firewall,network-watcher,load-balancer,vpn-gateway,expressroute,application-gateway,virtual-network&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure ExpressRoute](../expressroute/index.yml)
Traffic Manager health checks can originate from certain IP addresses for Azure
## Security
-This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Microsoft Defender for IoT](../defender-for-iot/index.yml)
For feature variations and limitations, see [Cloud feature availability for US G
## Storage
-This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure managed disks](../virtual-machines/managed-disks-overview.md)
With Import/Export jobs for US Gov Arizona or US Gov Texas, the mailing address
## Web
-This section outlines variations and considerations when using Web services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,signalr-service,api-management,notification-hubs,search,cdn,app-service-linux,app-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+This section outlines variations and considerations when using Web services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,signalr-service,api-management,notification-hubs,search,cdn,app-service-linux,app-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [API Management](../api-management/index.yml)
Learn more about Azure Government:
- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/)
- [Azure Government overview](./documentation-government-welcome.md)
- [Azure support for export controls](./documentation-government-overview-itar.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
- [Azure Government security](./documentation-government-plan-security.md)
- [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 02/25/2022 Last updated : 03/07/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
Microsoft Azure cloud environments meet demanding US government compliance requi
- [DoD IL4](/azure/compliance/offerings/offering-dod-il4) PA issued by DISA - [DoD IL5](/azure/compliance/offerings/offering-dod-il5) PA issued by DISA
-For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
> [!NOTE] >
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-developer-guide.md
recommendations: false Previously updated : 01/26/2022 Last updated : 03/07/2022 # Azure Government developer guide
Service endpoints in Azure Government are different than in Azure. For a mapping
### Feature variations
-For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#service-availability).
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#service-availability).
### Quickstarts
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 02/25/2022 Last updated : 03/07/2022 # Isolation guidelines for Impact Level 5 workloads
Azure Government supports applications that use Impact Level 5 (IL5) data in all
## Background
-In January 2017, DISA awarded the [IL5 Provisional Authorization](/azure/compliance/offerings/offering-dod-il5) (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions (US DoD Central and US DoD East) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions: US Gov Arizona, US Gov Texas, and US Gov Virginia. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+In January 2017, DISA awarded the [IL5 Provisional Authorization](/azure/compliance/offerings/offering-dod-il5) (PA) to [Azure Government](https://azure.microsoft.com/global-infrastructure/government/get-started/), making it the first IL5 PA awarded to a hyperscale cloud provider. The PA covered two Azure Government regions (US DoD Central and US DoD East) that are [dedicated to the DoD](https://azure.microsoft.com/global-infrastructure/government/dod/). Based on DoD mission owner feedback and evolving security capabilities, Microsoft has partnered with DISA to expand the IL5 PA boundary in December 2018 to cover the remaining Azure Government regions: US Gov Arizona, US Gov Texas, and US Gov Virginia. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
Azure Government is available to US federal, state, local, and tribal governments and their partners. The IL5 expansion to Azure Government honors the isolation requirements mandated by the DoD. Azure Government continues to provide more PaaS services suitable for DoD IL5 workloads than any other cloud services environment.
Be sure to review the entry for each service you're using and ensure that all is
## AI + machine learning
-For AI and machine learning services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=project-bonsai,genomics,search,bot-service,databricks,machine-learning-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For AI and machine learning services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=project-bonsai,genomics,search,bot-service,databricks,machine-learning-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Cognitive Search](../search/index.yml)
Cognitive Services QnA Maker is part of [Cognitive Services for Language](../cog
## Analytics
-For Analytics services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,monitor,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Analytics services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,monitor,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Databricks](/azure/databricks/)
For Analytics services availability in Azure Government, see [Products available
## Compute
-For Compute services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware,cloud-services,batch,app-service,service-fabric,functions,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Compute services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,azure-vmware,cloud-services,batch,app-service,service-fabric,functions,virtual-machine-scale-sets,virtual-machines&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Batch](../batch/index.yml)
You can encrypt disks that support virtual machine scale sets by using Azure Dis
## Containers
-For Containers services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,service-fabric,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Containers services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,service-fabric,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Kubernetes Service](../aks/index.yml)
For Containers services availability in Azure Government, see [Products availabl
## Databases
-For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Cosmos DB](../cosmos-db/index.yml)
Azure Healthcare APIs supports Impact Level 5 workloads in Azure Government with
## Integration
-For Integration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid,api-management,service-bus,logic-apps&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Integration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid,api-management,service-bus,logic-apps&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Service Bus](../service-bus-messaging/index.yml)
For Integration services availability in Azure Government, see [Products availab
## Internet of Things
-For Internet of Things services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=notification-hubs,azure-rtos,azure-maps,iot-central,iot-hub&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Internet of Things services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=notification-hubs,azure-rtos,azure-maps,iot-central,iot-hub&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure IoT Hub](../iot-hub/index.yml)
For Internet of Things services availability in Azure Government, see [Products
## Management and governance
-For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
+For Management and governance services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-automanage,resource-mover,azure-portal,azure-lighthouse,cloud-shell,managed-applications,azure-policy,monitor,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope).
### [Automation](../automation/index.yml)
Log Analytics may also be used to ingest additional customer-provided logs. Thes
## Migration
-For Migration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,azure-migrate&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Migration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,azure-migrate&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Data Box](../databox/index.yml)
For Migration services availability in Azure Government, see [Products available
## Security
-For Security services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,security-center,key-vault&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Security services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,security-center,key-vault&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Information Protection](/azure/information-protection/)
For Security services availability in Azure Government, see [Products available
## Storage
-For Storage services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+For Storage services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
### [Azure Archive Storage](../storage/blobs/access-tiers-overview.md)
Learn more about Azure Government:
- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/) - [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
- [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5) - [DoD in Azure Government](./documentation-government-overview-dod.md) - [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
azure-government Documentation Government Overview Dod https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md
recommendations: false Previously updated : 01/25/2022 Last updated : 03/07/2022 # Department of Defense (DoD) in Azure Government
Azure Government offers the following regions to DoD mission owners and their pa
|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|145| |US DoD Central </br> US DoD East|DoD IL5|60|
-**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East).
+**Azure Government regions** (US Gov Arizona, US Gov Texas, and US Gov Virginia) are intended for US federal (including DoD), state, and local government agencies, and their partners. **Azure Government DoD regions** (US DoD Central and US DoD East) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East). For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
The primary differences between DoD IL5 PAs that are in place for Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) vs. Azure Government DoD regions (US DoD Central and US DoD East) are:
All Azure Government regions are built to support DoD customers, including:
- The unified combatant commands - Other offices, agencies, activities, and commands under the control or supervision of any approved entity named above
+### What services are available in Azure Government?
+For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+ ### What services are part of your IL5 authorization scope? For a complete list of services in scope for DoD IL5 PA in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia), see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). For a complete list of services in scope for DoD IL5 PA in Azure Government DoD regions (US DoD Central and US DoD East), see [Azure Government DoD regions IL5 audit scope](#azure-government-dod-regions-il5-audit-scope) in this article.
azure-government Documentation Government Plan Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-compliance.md
recommendations: false Previously updated : 01/28/2022 Last updated : 03/07/2022 # Azure Government compliance
For links to additional Azure Government compliance assurances, see [Azure compl
- [Electronic Prescriptions for Controlled Substances (EPCS)](/azure/compliance/offerings/offering-epcs-us) - And many more US government, global, and industry standards
-For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
> [!NOTE] >
For a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform se
## Audit documentation
-You can access Azure and Azure Government audit reports and related documentation via the [Service Trust Portal](https://servicetrust.microsoft.com) (STP) in the following sections:
+You can access Azure and Azure Government audit reports and related documentation from the [Service Trust Portal](https://servicetrust.microsoft.com) (STP) in the following sections:
- STP [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3), which has a subsection for FedRAMP Reports. - STP [Data Protection Resources](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3), which is further divided into Compliance Guides, FAQ and White Papers, and Pen Test and Security Assessments subsections.
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-welcome.md
recommendations: false Previously updated : 01/26/2022 Last updated : 03/07/2022 # What is Azure Government?
-US government agencies or their partners interested in cloud services that meet government security and compliance requirements, can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class security and compliance. Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services can accommodate data that is subject to various [US government regulations and requirements](../compliance/index.yml).
+US government agencies or their partners interested in cloud services that meet government security and compliance requirements can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class security and compliance. Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services can accommodate data that is subject to various [US government regulations and requirements](./documentation-government-plan-compliance.md).
To provide you with the highest level of security and compliance, Azure Government uses physically isolated datacenters and networks located in the US only. Compared to Azure global, Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the US and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
The following video provides a good introduction to Azure Government:
## Compare Azure Government and global Azure
-Azure Government offers [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) cloud service models based on the same underlying technologies as global Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they're Generally Available or available through Preview.
+Azure Government offers [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) cloud service models based on the same underlying technologies as global Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). Services available in Azure Government are listed by category and whether they're Generally Available or available through Preview.
There are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you'll mostly have the same experience as in global Azure. To see feature variations and usage limitations between Azure Government and global Azure, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on individual service.
To start using Azure Government, first check out [Guidance for developers](./doc
- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/) - [Azure Government security](./documentation-government-plan-security.md) - [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure compliance](../compliance/index.yml)
- View [YouTube videos](https://www.youtube.com/playlist?list=PLLasX02E8BPA5IgCPjqWms5ne5h4briK7)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
In addition to consolidating this functionality into a single agent, the Azure M
### Current limitations When compared with the existing agents, this new agent doesn't yet have full parity. - **Comparison with Log Analytics agents (MMA/OMS):**
- - Not all Log Analytics solutions are supported today. [View supported features and services](#supported-services-and-features).
- - No support for collecting file based logs or IIS logs.
+ - Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features).
+ - Support for collecting file-based logs or IIS logs is in [private preview](https://aka.ms/amadcr-privatepreviews).
- **Comparison with Azure Diagnostics extensions (WAD/LAD):**
- - No support for Event Hubs and Storage accounts as destinations.
- - No support for collecting file based logs, IIS logs, ETW events, .NET events and crash dumps.
+ - No support yet for Event Hubs and Storage accounts as destinations.
+ - No support yet for collecting file-based logs, IIS logs, ETW events, .NET events, and crash dumps.
### Changes in data collection The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent.
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format). ## Supported services and features
-The following table shows the current support for the Azure Monitor agent with other Azure services.
+The following table shows the current support for the Azure Monitor agent with other Azure services.
| Azure service | Current support | More information | |:|:|:|
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information | |:|:|:|
+| File-based logs and Windows IIS logs | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Connect using private links](azure-monitor-agent-data-collection-endpoint.md) | Public preview | No sign-up needed |
-| [VM insights guest health](../vm/vminsights-health-overview.md) | Public preview | Available only on the new agent |
-| [SQL insights](../insights/sql-insights-overview.md) | Public preview | Available only on the new agent |
The following table shows the current support for the Azure Monitor agent with Azure solutions.
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [tobool](/azure/data-explorer/kusto/query/toboolfunction) - [todatetime](/azure/data-explorer/kusto/query/todatetimefunction) - [todouble/toreal](/azure/data-explorer/kusto/query/todoublefunction)-- [toguid](/azure/data-explorer/kusto/query/toguid)-- [toint](/azure/data-explorer/kusto/query/toint)-- [tolong](/azure/data-explorer/kusto/query/tolong)
+- [toguid](/azure/data-explorer/kusto/query/toguidfunction)
+- [toint](/azure/data-explorer/kusto/query/tointfunction)
+- [tolong](/azure/data-explorer/kusto/query/tolongfunction)
- [tostring](/azure/data-explorer/kusto/query/tostringfunction) - [totimespan](/azure/data-explorer/kusto/query/totimespanfunction)
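As an illustration, a data collection rule transformation can use these conversion functions to coerce string columns from `source` into typed ones. A minimal sketch (the column names `TimeField` and `CountField` are hypothetical, not from this article):

```kusto
source
| extend EventTime = todatetime(TimeField), RetryCount = toint(CountField)
| project-away TimeField, CountField
```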
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|errors|Yes|Errors|Count|Maximum|The number errors that occured on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
+|errors|Yes|Errors|Count|Maximum|The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
|evictedkeys|Yes|Evicted Keys|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId| |evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
If you then submit the following entry, before the record type is created, Azure
The following properties are reserved and shouldn't be used in a custom record type. You'll receive an error if your payload includes any of these property names: - tenant
+- TimeGenerated
+- RawData
## Data limits The data posted to the Azure Monitor Data collection API is subject to certain constraints:
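For context, each POST to the Data Collector API is authenticated with a `SharedKey` header built from an HMAC-SHA256 signature over a canonical string, as the article's signature section documents. A minimal sketch of building that header (the workspace ID and shared key used below are dummies, and the sample record simply illustrates avoiding the reserved property names):

```python
import base64
import hashlib
import hmac


def build_signature(workspace_id: str, shared_key: str,
                    content_length: int, rfc1123_date: str) -> str:
    """Build the Authorization header value for the HTTP Data
    Collector API (api-version 2016-04-01)."""
    # Canonical string: method, length, content type, x-ms-date, resource.
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    # The shared key is base64; decode it before signing.
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"


# A custom record must avoid the reserved property names listed above
# ("tenant", "TimeGenerated", "RawData"); these fields are fine:
record = {"Computer": "web01", "DurationMs": 210}
```

The same signature is then sent with the `x-ms-date` value used to compute it; a mismatch between the two is a common cause of 403 responses.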
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 02/09/2022 Last updated : 03/07/2022 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## February, 2022
+
+### General
+
+**Updated articles**
+
+- [What is monitored by Azure Monitor?](monitor-reference.md)
+### Agents
+
+**New articles**
+
+- [Sample data collection rule - agent](agents/data-collection-rule-sample-agent.md)
+- [Using data collection endpoints with Azure Monitor agent (preview)](agents/azure-monitor-agent-data-collection-endpoint.md)
+
+**Updated articles**
+
+- [Azure Monitor agent overview](/azure/azure-monitor/agents/azure-monitor-agent-overview.md)
+- [Manage the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-manage.md)
+
+### Alerts
+
+**Updated articles**
+
+- [How to trigger complex actions with Azure Monitor alerts](/azure/azure-monitor/alerts/action-groups-logic-app.md)
+
+### Application Insights
+
+**New articles**
+
+- [Migrate from Application Insights instrumentation keys to connection strings](app/migrate-from-instrumentation-keys-to-connection-strings.md)
++
+**Updated articles**
+
+- [Application Monitoring for Azure App Service and Java](/azure/azure-monitor/app/azure-web-apps-java.md)
+- [Application Monitoring for Azure App Service and Node.js](/azure/azure-monitor/app/azure-web-apps-nodejs.md)
+- [Enable Snapshot Debugger for .NET apps in Azure App Service](/azure/azure-monitor/app/snapshot-debugger-appservice.md)
+- [Profile live Azure App Service apps with Application Insights](/azure/azure-monitor/app/profiler.md)
+- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitor/app/change-analysis-visualizations.md)
+
+### Autoscale
+
+**New articles**
+
+- [Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)](autoscale/autoscale-predictive.md)
+
+### Data collection
+
+**New articles**
+
+- [Data collection endpoints in Azure Monitor (preview)](essentials/data-collection-endpoint-overview.md)
+- [Data collection rules in Azure Monitor](essentials/data-collection-rule-overview.md)
+- [Data collection rule transformations](essentials/data-collection-rule-transformations.md)
+- [Structure of a data collection rule in Azure Monitor (preview)](essentials/data-collection-rule-structure.md)
+### Essentials
+
+**Updated articles**
+
+- [Azure Activity log](/azure/azure-monitor/essentials/activity-log.md)
+
+### Logs
+
+**Updated articles**
+
+- [Azure Monitor Logs overview](logs/data-platform-logs.md)
+
+**New articles**
+
+- [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md)
+- [Configure data retention and archive in Azure Monitor Logs (Preview)](logs/data-retention-archive.md)
+- [Log Analytics workspace overview](logs/log-analytics-workspace-overview.md)
+- [Overview of ingestion-time transformations in Azure Monitor Logs](logs/ingestion-time-transformations.md)
+- [Query data from Basic Logs in Azure Monitor (Preview)](logs/basic-logs-query.md)
+- [Restore logs in Azure Monitor (Preview)](logs/restore.md)
+- [Sample data collection rule - custom logs](logs/data-collection-rule-sample-custom-logs.md)
+- [Search jobs in Azure Monitor (Preview)](logs/search-jobs.md)
+- [Send custom logs to Azure Monitor Logs with REST API](logs/custom-logs-overview.md)
+- [Tables that support ingestion-time transformations in Azure Monitor Logs (preview)](logs/tables-feature-support.md)
+- [Tutorial - Send custom logs to Azure Monitor Logs (preview)](logs/tutorial-custom-logs.md)
+- [Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates](logs/tutorial-custom-logs-api.md)
+- [Tutorial - Add ingestion-time transformation to Azure Monitor Logs using Azure portal](logs/tutorial-ingestion-time-transformations.md)
+- [Tutorial - Add ingestion-time transformation to Azure Monitor Logs using resource manager templates](logs/tutorial-ingestion-time-transformations-api.md)
+
## January, 2022
This article lists significant changes to Azure Monitor documentation.
### Agents
+**New articles**
+
+- [Sample data collection rule - agent](agents/data-collection-rule-sample-agent.md)
+
**Updated articles**

- [Install Log Analytics agent on Windows computers](agents/agent-windows.md)
This article lists significant changes to Azure Monitor documentation.
**New articles** - [Analyzing product usage with HEART](app/usage-heart.md)
+- [Migrate from Application Insights instrumentation keys to connection strings](app/migrate-from-instrumentation-keys-to-connection-strings.md)
+ **Updated articles**
This article lists significant changes to Azure Monitor documentation.
- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
- [Click Analytics Auto-collection plugin for Application Insights JavaScript SDK](app/javascript-click-analytics-plugin.md)

### Logs

**New articles**
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
na Previously updated : 01/25/2022 Last updated : 03/07/2022
> PREVIEWS ARE PROVIDED "AS-IS," "WITH ALL FAULTS," AND "AS AVAILABLE," AND ARE EXCLUDED FROM THE SERVICE LEVEL AGREEMENTS AND LIMITED WARRANTY > ref: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
-This article provides a guide on setup and usage of the new features in preview for **AzAcSnap v5.1**. These new features can be used with Azure NetApp Files, Azure BareMetal, and now Azure Managed Disk. This guide should be read along with the documentation for the generally available version of AzAcSnap at [aka.ms/azacsnap](./azacsnap-introduction.md).
+This article provides a guide on the setup and usage of the new features in preview for **AzAcSnap v5.1**. These new features can be used with Azure NetApp Files, Azure BareMetal, and now Azure Managed Disk. Read this guide along with the documentation for the generally available version of AzAcSnap at [aka.ms/azacsnap](./azacsnap-introduction.md).
-The four new preview features provided with AzAcSnap v5.1 are:
-- Oracle Database support
-- Backint coexistence
-- Azure Managed Disk
-- RunBefore and RunAfter capability
+The five new preview features provided with AzAcSnap v5.1 are:
+- Oracle Database support.
+- Backint coexistence.
+- Azure Managed Disk.
+- RunBefore and RunAfter capability.
+- Azure Key Vault support for storing the Service Principal.
+
+Minor addition to `--volume` option:
+- All volumes snapshot.
## Providing feedback
This section explains how to enable communication with storage. Ensure the stora
# [Oracle](#tab/oracle) The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable backup mode. After putting the database in backup
-mode, `azacsnap` will query the Oracle database to get a list of files which have backup-mode as active. This file list is output into an external file which is in
+mode, `azacsnap` will query the Oracle database to get a list of files that have backup-mode active. This file list is output to an external file, which is in
the same location and basename as the log file, but with a ".protected-tables" extension (output filename detailed in the AzAcSnap log file).
-The following examples show the setup of the Oracle database user, the use of `mkstore` to create an Oracle Wallet, and the `sqlplus` configuration files required for
+The following examples show how to set up the Oracle database user, the use of `mkstore` to create an Oracle Wallet, and the `sqlplus` configuration files required for
communication to the Oracle database. The following example commands set up a user (AZACSNAP) in the Oracle database; change the IP address, usernames, and passwords as appropriate:
The following example commands set up a user (AZACSNAP) in the Oracle database,
```
-1. The Oracle Wallet provides a method to manage database credentials across multiple domains. This is accomplished by using a database connection string in
+1. The Oracle Wallet provides a method to manage database credentials across multiple domains. It does so by using a database connection string in
the datasource definition, which is resolved by an entry in the wallet. When used correctly, the Oracle Wallet makes having passwords in the datasource configuration unnecessary.
- This feature can be leveraged to use the Oracle TNS (Transparent Network Substrate) administrative file to hide the details of the database
- connection string and instead use an alias. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead
+ This makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, thus hiding details of the database
+ connection string. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead
of potentially many datasource definitions.
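For illustration, here is a minimal `tnsnames.ora` entry of the kind this alias approach relies on. The host, port, and service name below are placeholders, not values from this guide; the `AZACSNAP` alias matches the connect string used later in this article:

```
# Hypothetical alias entry; replace host, port, and service name.
AZACSNAP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.100)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORATEST)
    )
  )
```

With an entry like this in place, datasources can refer to the `AZACSNAP` alias instead of the full connection string.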
- Set up the Oracle Wallet (change the password) This example uses the mkstore command from the Linux shell to set up the Oracle wallet. Theses commands
+    Set up the Oracle Wallet (change the password). This example uses the `mkstore` command from the Linux shell to set up the Oracle wallet. These commands
are run on the Oracle database server using unique user credentials to avoid any impact on the running database. In this example a new user (azacsnap) is created, and their environment variables configured appropriately.
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Run the following commands on the Oracle Database Server.
- 1. Get the Oracle environment variables to be used in setup. Run the following commands as the `root` user on the Oracle Database Server.
+    1. Get the Oracle environment variables to be used during setup. Run the following commands as the `root` user on the Oracle Database Server.
```bash su - oracle -c 'echo $ORACLE_SID'
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap). > [!NOTE]
- > If deploying to a centralized virtual machine, then it will need to have the Oracle instant client installed and setup so the AzAcSnap user can
+ > If deploying to a centralized virtual machine, then it will need to have the Oracle instant client installed and set up so the AzAcSnap user can
> run `sqlplus` commands. The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
> In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
The following example commands set up a user (AZACSNAP) in the Oracle database,
> The `$TNS_ADMIN` shell variable determines where to locate the Oracle Wallet and `*.ora` files, so it must be set before running `azacsnap` to ensure > correct operation.
- 1. Test the setup with AzAcSnap
+    1. Verify the setup with AzAcSnap
After configuring AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`), it should be possible to connect to the Oracle database.
The following example commands set up a user (AZACSNAP) in the Oracle database,
``` > [!IMPORTANT]
- > The `$TNS_ADMIN` variable must be setup correctly for `azacsnap` to run correctly, either by adding to the user's `.bash_profile` file,
+    > The `$TNS_ADMIN` variable must be set for `azacsnap` to run correctly, either by adding it to the user's `.bash_profile` file,
> or by exporting it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ; ./azacsnap --configfile ORACLE19c.json > -c backup --volume data --prefix hourly-ora19c --retention 12`)
This section explains how to configure the data base.
# [Oracle](#tab/oracle)
-These are required changes to be applied to the Oracle Database to allow for monitoring by the database administrator.
+The following changes must be applied to the Oracle Database to allow for monitoring by the database administrator.
1. Set up Oracle alert logging Use the following Oracle SQL commands while connected to the database as SYSDBA to create a stored procedure under the default Oracle SYSBACKUP database account.
- This will allow AzAcSnap to output messages to standard output using the PUT_LINE procedure in the DBMS_OUTPUT package, and also to the Oracle database `alert.log`
+ These SQL commands allow AzAcSnap to output messages to standard output using the PUT_LINE procedure in the DBMS_OUTPUT package, and also to the Oracle database `alert.log`
file (using the KSDWRT procedure in the DBMS_SYSTEM package). ```bash
The process described in the Azure Backup documentation has been implemented wit
1. re-enable the backint-based backup. By default this option is disabled, but it can be enabled by running `azacsnap -c configure --configuration edit` and answering 'y' (yes) to the question
-"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". This will set the autoDisableEnableBackint value to true in the
-JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing the configuration file directly.
+"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". Editing the configuration as described will set the
+autoDisableEnableBackint value to true in the JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing
+the configuration file directly.
Refer to this partial snippet of the configuration file to see where this value is placed and the correct format:
Refer to this partial snippet of the configuration file to see where this value
> Support for Azure Managed Disk as a storage back-end is a Preview feature. > This section's content supplements [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) website page.
-Microsoft provides a number of storage options for deploying databases such as SAP HANA. Many of these are detailed on the
+Microsoft provides many storage options for deploying databases such as SAP HANA. Many of these options are detailed on the
[Azure Storage types for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) web page. Additionally there's a [Cost conscious solution with Azure premium storage](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#cost-conscious-solution-with-azure-premium-storage).
-AzAcSnap is able to take application consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the setup
+AzAcSnap is able to take application-consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the setup
for this platform is slightly more complicated as in this scenario we need to block I/O to the mountpoint (using `xfs_freeze`) before taking a snapshot of the Managed Disks in the mounted Logical Volume(s).
The storage hierarchy looks like the following example for SAP HANA:
Installing and setting up the Azure VM and Azure Managed Disks in this way follows Microsoft guidance to create LVM stripes of the Managed Disks on the VM.
-With the Azure VM setup as described, AzAcSnap can be run with Azure Managed Disks in a similar way to other supported storage back-ends (for example, Azure NetApp Files, Azure Large Instance (Bare Metal)). Because AzAcSnap communicates with the Azure Resource Manager to take snapshots, it also needs a Service Principal with the correct permissions to take managed disk snapshots.
+With the Azure VM set up as prescribed, AzAcSnap can take snapshots of Azure Managed Disks. The snapshot operations are similar to those for other storage back-ends supported by AzAcSnap (for example, Azure NetApp Files, Azure Large Instance (Bare Metal)). Because AzAcSnap communicates with the Azure Resource Manager to take snapshots, it also needs a Service Principal with the correct permissions to take managed disk snapshots.
This capability allows customers to test/trial AzAcSnap on a smaller system and scale-up to Azure NetApp Files and/or Azure Large Instance (Bare Metal).
A new capability for AzAcSnap to execute external commands before or after its m
`--runbefore` will run a shell command before the main execution of azacsnap and provides some of the azacsnap command-line parameters to the shell environment. By default, `azacsnap` will wait up to 30 seconds for the external shell command to complete before killing the process and returning to azacsnap normal execution.
-This can be overridden by adding a number to wait in seconds after a `%` character (for example, `--runbefore "mycommand.sh%60"` will wait up to 60 seconds for `mycommand.sh`
+This delay can be overridden by adding a number to wait in seconds after a `%` character (for example, `--runbefore "mycommand.sh%60"` will wait up to 60 seconds for `mycommand.sh`
to complete). `--runafter` will run a shell command after the main execution of azacsnap and provides some of the azacsnap command-line parameters to the shell environment.
cat blob-credentials.saskey
PORTAL_GENERATED_SAS="https://<targetstorageaccount>.blob.core.windows.net/<blob-store>?sp=racwl&st=2021-06-10T21:10:38Z&se=2021-06-11T05:10:38Z&spr=https&sv=2020-02-10&sr=c&sig=<key-material>" ```
+## Azure Key Vault
+
+From AzAcSnap v5.1, it's possible to store the Service Principal securely as a Secret in Azure Key Vault. This feature allows Service Principal credentials to be centralized,
+so that an alternate administrator can set up the Secret for AzAcSnap to use.
+
+Follow these steps to set up Azure Key Vault and store the Service Principal in a Secret:
+
+1. Within an Azure Cloud Shell session, make sure you're logged on at the subscription where you want to create the Azure Key Vault:
+
+ ```azurecli-interactive
+ az account show
+ ```
+
+1. If the subscription isn't correct, use the following command to set the Cloud Shell to the correct subscription:
+
+ ```azurecli-interactive
+ az account set -s <subscription name or id>
+ ```
+
+1. Create Azure Key Vault
+
+ ```azurecli-interactive
+ az keyvault create --name "<AzureKeyVaultName>" -g <ResourceGroupName>
+ ```
+
+1. Create the trust relationship and assign the policy for virtual machine to get the Secret
+
+ 1. Show AzAcSnap virtual machine Identity
+
+ If the virtual machine already has an identity created, retrieve it as follows:
+
+ ```azurecli-interactive
+ az vm identity show --name "<VMName>" --resource-group "<ResourceGroup>"
+ ```
+
+ The `"principalId"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`.
+
+ ```output
+ {
+ "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": {
+ "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99"
+ }
+ }
+ }
+ ```
+
+ 1. Set AzAcSnap virtual machine Identity (if necessary)
+
+ If the VM doesn't have an identity, create it as follows:
+
+ ```azurecli-interactive
+ az vm identity assign --name "<VMName>" --resource-group "<ResourceGroup>"
+ ```
+
+ The `"systemAssignedIdentity"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`.
+
+ ```output
+ {
+ "systemAssignedIdentity": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "userAssignedIdentities": {
+        "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": {
+ "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99"
+ }
+ }
+ }
+ ```
+
+ 1. Assign a suitable policy for the virtual machine to be able to retrieve the Secret from the Key Vault.
+
+ ```azurecli-interactive
+ az keyvault set-policy --name "<AzureKeyVaultName>" --object-id "<VMIdentity>" --secret-permissions get
+ ```
+
+1. Create Azure Key Vault Secret
+
+ Create the secret, which will store the Service Principal credential information.
+
+    You can paste the contents of the Service Principal file directly. In the **Bash** Cloud Shell below, type a single apostrophe after `--value`, press the
+    `[Enter]` key, paste the contents of the Service Principal, close the content with another single apostrophe, and press the `[Enter]` key again.
+    This command creates the Secret and stores it in Azure Key Vault.
+
+ > [!TIP]
+    > If you have a separate Service Principal per installation, the `"<NameOfSecret>"` could be the SID, or some other suitable unique identifier.
+
+    The following example is for the **Bash** Cloud Shell:
+
+ ```azurecli-interactive
+ az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value '
+ {
+ "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "clientSecret": "<ClientSecret>",
+ "subscriptionId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ "activeDirectoryGraphResourceId": "https://graph.windows.net/",
+ "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
+ "galleryEndpointUrl": "https://gallery.azure.com/",
+ "managementEndpointUrl": "https://management.core.windows.net/"
+ }'
+ ```
+
+    The following example is for the **PowerShell** Cloud Shell:
+
+ > [!WARNING]
+ > In PowerShell the double quotes have to be escaped with an additional double quote, so one double quote (") becomes two double quotes ("").
+
+ ```azurecli-interactive
+ az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value '
+ {
+ ""clientId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
+ ""clientSecret"": ""<ClientSecret>"",
+ ""subscriptionId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
+ ""tenantId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"",
+ ""activeDirectoryEndpointUrl"": ""https://login.microsoftonline.com"",
+ ""resourceManagerEndpointUrl"": ""https://management.azure.com/"",
+ ""activeDirectoryGraphResourceId"": ""https://graph.windows.net/"",
+ ""sqlManagementEndpointUrl"": ""https://management.core.windows.net:8443/"",
+ ""galleryEndpointUrl"": ""https://gallery.azure.com/"",
+ ""managementEndpointUrl"": ""https://management.core.windows.net/""
+ }'
+ ```
+
+    The output of the command `az keyvault secret set` contains the URI value to use as the `"authFile"` entry in the AzAcSnap JSON configuration file. The URI is
+ the value of the `"id"` below (for example, `"https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999"`).
+
+ ```output
+ {
+ "attributes": {
+ "created": "2022-02-23T20:21:01+00:00",
+ "enabled": true,
+ "expires": null,
+ "notBefore": null,
+ "recoveryLevel": "Recoverable+Purgeable",
+ "updated": "2022-02-23T20:21:01+00:00"
+ },
+ "contentType": null,
+ "id": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999",
+ "kid": null,
+ "managed": null,
+ "name": "AzureAuth",
+ "tags": {
+ "file-encoding": "utf-8"
+ },
+ "value": "\n{\n \"clientId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"clientSecret\": \"<ClientSecret>\",\n \"subscriptionId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"tenantId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\",\n \"resourceManagerEndpointUrl\": \"https://management.azure.com/\",\n \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\",\n \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\",\n \"galleryEndpointUrl\": \"https://gallery.azure.com/\",\n \"managementEndpointUrl\": \"https://management.core.windows.net/\"\n}"
+ }
+ ```
+
+1. Update AzAcSnap JSON configuration file
+
+    Replace the value of the `authFile` entry with the Secret's ID value. You can make this change by editing the file with a tool like `vi`, or by using the
+ `azacsnap -c configure --configuration edit` option.
+
+ 1. Old Value
+
+ ```output
+ "authFile": "azureauth.json"
+ ```
+
+ 1. New Value
+
+ ```output
+ "authFile": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999"
+ ```
+
+## All Volumes Snapshot
+
+A new optional value for `--volume` allows all the volumes to be snapshotted as a group. This option gives all the snapshots the same snapshot
+name, which is useful when doing a `-c restore` to clone or recover a system to a specific date/time.
+
+Running the AzAcSnap command `azacsnap -c backup --volume all --retention 5 --prefix all-volumes` will take snapshot backups, with all the snapshots having
+the same name with a prefix of `all-volumes` and a maximum of five snapshots with that prefix per volume.
+
+Processing is handled in the following order:
+
+1. `data` Volume Snapshot (same as the normal `--volume data` option)
+ 1. put the database into *backup-mode*.
+ 1. take snapshots of the Volume listed in the configuration file's `"dataVolume"` stanza.
+ 1. take the database out of *backup-mode*.
+ 1. perform snapshot management.
+1. `other` Volume Snapshot (same as the normal `--volume other` option)
+ 1. take snapshots of the Volumes listed in the configuration file's `"otherVolume"` stanza.
+ 1. perform snapshot management.
+
## Next steps
- [Get started](azacsnap-get-started.md)
- [Test AzAcSnap](azacsnap-cmd-ref-test.md)
-- [Back up using AzAcSnap](azacsnap-cmd-ref-backup.md)
+- [Back up using AzAcSnap](azacsnap-cmd-ref-backup.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 02/20/2022 Last updated : 03/08/2022
This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+## Mar-2022
+
+### AzAcSnap v5.1 Preview (Build: 20220302.81795)
+
+AzAcSnap v5.1 Preview (Build: 20220302.81795) has been released with the following new features:
+
+- Azure Key Vault support for securely storing the Service Principal.
+- A new `all` parameter value for the `-c backup --volume` option.
+
+Read about these new features and how to use them in the [AzAcSnap Preview](azacsnap-preview.md) documentation.
+Download the [latest release of the Preview installer](https://aka.ms/azacsnap-preview-installer).
+ ## Feb-2022 ### AzAcSnap v5.1 Preview (Build: 20220220.55340)
AzAcSnap v5.1 Preview (Build: 20220220.55340) has been released with the followi
- Resolved failure in matching `--dbsid` command line option with `sid` entry in the JSON configuration file for Oracle databases when using the `-c restore` command.
-Download the [latest release of the Preview installer](https://aka.ms/azacsnap-preview-installer) and read about the new features and how to use the [AzAcSnap Preview](azacsnap-preview.md).
- ### AzAcSnap v5.1 Preview (Build: 20220203.77807) AzAcSnap v5.1 Preview (Build: 20220203.77807) has been released with the following fixes and improvements:
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
services.AddSignalR().AddAzureSignalR(option =>
### Azure Functions SignalR bindings
-> [!WARNING]
-> SignalR trigger binding does not support identity-based connection yet and connection strings are still necessary.
- Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure managed-identity to access your SignalR resources. You might need a group of key-value pairs to configure an identity. The keys of all the key-value pairs must start with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on portal and `:` at local). The prefix can be customized with binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md).
If you want to use user-assigned identity, you need to assign one more `clientId
See the following related articles:

- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
+- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
azure-sql Advance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/advance-notifications.md
ms.devlang:
- Previously updated : 09/14/2021+ Last updated : 03/07/2022 # Advance notifications for planned maintenance events (Preview)
-Advance notifications (Preview) is available for databases configured to use a non-default [Maintenance Window (Preview)](maintenance-window.md). Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
+Advance notifications (Preview) are available for databases configured to use a non-default [maintenance window](maintenance-window.md). Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
Notifications can be configured so you can get texts, emails, Azure push notifications, and voicemails when planned maintenance is due to begin in the next 24 hours. Additional notifications are sent when maintenance begins and when maintenance ends. Advance notifications cannot be configured for the **System default** maintenance window option. Choose a maintenance window other than the **System default** to configure and enable Advance notifications. > [!NOTE]
-> While the ability to choose a maintenance window is available for Azure SQL managed instances, advance notifications are not currently available for Azure SQL managed instances.
+> While [maintenance windows](maintenance-window.md) are generally available, advance notifications for maintenance windows are in public preview for Azure SQL Database and Azure SQL Managed Instance.
## Create an advance notification
Complete the following steps to enable a notification.
:::image type="content" source="media/advance-notifications/notifications.png" alt-text="configure notifications":::

1. Complete the *Add or edit notification* form that opens and select **OK**:

2. Actions and Tags are optional. Here you can configure additional actions to be triggered or use tags to categorize and organize your Azure resources.
The following table shows additional notifications that may be sent while mainte
|**Blocked**|There was a problem during maintenance for database *xyz*. We'll notify you when we resume.| |**Resumed**|The problem has been resolved and maintenance will continue at the next maintenance window.|
+## Permissions
+
+While advance notifications can be sent to any email address, Azure subscription RBAC (role-based access control) policy determines who can access the links in the email. Querying the resource graph is covered by [Azure RBAC](../../role-based-access-control/overview.md) access management. To enable read access, each recipient should have resource-group-level read access. For more information, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
+
+## Retrieve the list of impacted resources
+
+[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service designed to extend Azure Resource Management. The Azure Resource Graph Explorer provides efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
+
+You can use the Azure Resource Graph Explorer to query for maintenance events. For an introduction on how to run these queries, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
+
+When the advance notification for planned maintenance is received, you'll get a link that opens Azure Resource Graph and runs the query for the exact event, similar to the following example. Note that the `notificationId` value is unique per maintenance event.
+
+```kusto
+resources
+| project resource = tolower(id)
+| join kind=inner (
+ maintenanceresources
+ | where type == "microsoft.maintenance/updates"
+ | extend p = parse_json(properties)
+ | mvexpand d = p.value
+ | where d has 'notificationId' and d.notificationId == 'LNPN-R9Z'
+ | project resource = tolower(name), status = d.status
+) on resource
+| project resource, status
+```
+
+For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
+ ## Next steps
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/data-discovery-and-classification-overview.md
These are the required actions to modify the data classification of a database:
Learn more about role-based permissions in [Azure RBAC](../../role-based-access-control/overview.md). > [!NOTE]
-> The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more information on the `Microsoft.Synapse/workspaces/sqlPools` provider operations, see [Microsoft.Synapse](/azure/role-based-access-control/resource-provider-operations.md#microsoftsynapse).
+> The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more information on the `Microsoft.Synapse/workspaces/sqlPools` provider operations, see [Microsoft.Synapse](/azure/role-based-access-control/resource-provider-operations#microsoftsynapse).
## Manage classifications
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/02/2022 Last updated : 03/07/2022 # What's new in Azure SQL Database? [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The following table lists the features of Azure SQL Database that are currently in preview:
| [Elastic queries](elastic-query-overview.md) | The elastic queries feature allows for cross-database queries in Azure SQL Database. | | [Elastic transactions](elastic-transactions-overview.md) | Elastic transactions allow you to execute transactions distributed among cloud databases in Azure SQL Database. | | [Ledger](ledger-overview.md) | The Azure SQL Database ledger feature allows you to cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been tampered with. |
-| [Maintenance window](maintenance-window.md)| The maintenance window feature allows you to configure maintenance schedule for your Azure SQL Database. |
+| [Maintenance window advance notifications](../database/advance-notifications.md)| Advance notifications are available for databases configured to use a non-default [maintenance window](maintenance-window.md). Advance notifications for maintenance windows are in public preview for Azure SQL Database. |
| [Query editor in the Azure portal](connect-query-portal.md) | The query editor in the portal allows you to run queries against your Azure SQL Database directly from the [Azure portal](https://portal.azure.com).| | [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. | | [SQL Analytics](../../azure-monitor/insights/azure-sql.md)|Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting.|
The following table lists the features of Azure SQL Database that have transitioned to general availability:
| Feature | GA Month | Details | | | | |
+| [Maintenance window](../database/maintenance-window.md)| March 2022 | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database. [Maintenance window advance notifications](../database/advance-notifications.md), however, are in preview.|
| [Storage redundancy for Hyperscale databases](automated-backups-overview.md#configure-backup-storage-redundancy) | March 2022 | When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. | | [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. | | [Azure AD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).|
Learn about significant changes to the Azure SQL Database documentation.
| Changes | Details | | | |
+| **GA for maintenance window** | The [maintenance window](maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Database and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) are in public preview for databases configured to use a non-default [maintenance window](maintenance-window.md).|
| **Hyperscale zone redundant configuration preview** | It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see [Hyperscale zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview). | | **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more. |||
Learn about significant changes to the Azure SQL Database documentation.
### 2021 - | Changes | Details | | | | | **Azure AD-only authentication** | Restricting authentication to your Azure SQL Database only to Azure Active Directory users is now generally available. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). |
Learn about significant changes to the Azure SQL Database documentation.
||| - ## Contribute to content To contribute to the Azure SQL documentation, see the [Docs contributor guide](/contribute/).
azure-sql Elastic Transactions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-transactions-overview.md
Title: Distributed transactions across cloud databases (preview)
+ Title: Distributed transactions across cloud databases
description: Overview of Elastic Database Transactions with Azure SQL Database and Azure SQL Managed Instance.
Last updated 11/02/2021
-# Distributed transactions across cloud databases (preview)
+# Distributed transactions across cloud databases
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-> [!IMPORTANT]
-> Distributed transactions for Azure SQL Managed Instance are now generally available. Elastic Database Transactions for Azure SQL Database are in preview.
-Elastic database transactions for Azure SQL Database (Preview) and Azure SQL Managed Instance allow you to run transactions that span several databases. Elastic database transactions are available for .NET applications using ADO.NET and integrate with the familiar programming experience using the [System.Transaction](/dotnet/api/system.transactions) classes. To get the library, see [.NET Framework 4.6.1 (Web Installer)](https://www.microsoft.com/download/details.aspx?id=49981).
+Elastic database transactions for Azure SQL Database and Azure SQL Managed Instance allow you to run transactions that span several databases. Elastic database transactions are available for .NET applications using ADO.NET and integrate with the familiar programming experience using the [System.Transaction](/dotnet/api/system.transactions) classes. To get the library, see [.NET Framework 4.6.1 (Web Installer)](https://www.microsoft.com/download/details.aspx?id=49981).
Additionally, for managed instance, distributed transactions are available in [Transact-SQL](/sql/t-sql/language-elements/begin-distributed-transaction-transact-sql). On premises, such a scenario usually requires running Microsoft Distributed Transaction Coordinator (MSDTC). Since MSDTC isn't available for Platform-as-a-Service applications in Azure, the ability to coordinate distributed transactions has now been directly integrated into SQL Database or SQL Managed Instance. Applications can connect to any database to launch distributed transactions, and one of the databases or servers will transparently coordinate the distributed transaction, as shown in the following figure.
using (TransactionScope s = new TransactionScope())
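The `TransactionScope` line above is the heart of the pattern. A minimal end-to-end sketch follows; the connection strings, table names, and SQL statements are placeholders for illustration, not from the article:

```csharp
using System.Transactions;
using Microsoft.Data.SqlClient;

// Placeholder connection strings for two databases, possibly on different servers.
string connDb1 = "Server=tcp:server1.database.windows.net;Database=Db1;...";
string connDb2 = "Server=tcp:server2.database.windows.net;Database=Db2;...";

using (TransactionScope s = new TransactionScope())
{
    using (var conn1 = new SqlConnection(connDb1))
    {
        conn1.Open(); // enlists in the ambient transaction
        new SqlCommand("INSERT INTO T1 VALUES (1)", conn1).ExecuteNonQuery();
    }
    using (var conn2 = new SqlConnection(connDb2))
    {
        conn2.Open(); // a second connection promotes the transaction to a distributed one
        new SqlCommand("INSERT INTO T2 VALUES (2)", conn2).ExecuteNonQuery();
    }
    s.Complete(); // both inserts commit together; without Complete(), both roll back
}
```

If `Complete()` is never called (for example, because an exception escapes the `using` block), disposal of the scope rolls back the work on both connections.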
## Transactions for SQL Database
-> [!IMPORTANT]
-> Distributed transactions for Azure SQL Database are in preview.
- Elastic database transactions are supported across different servers in Azure SQL Database. When transactions cross server boundaries, the participating servers first need to be entered into a mutual communication relationship. Once the communication relationship has been established, any database in either of the two servers can participate in elastic transactions with databases from the other server. With transactions spanning more than two servers, a communication relationship needs to be in place for every pair of servers. Use the following PowerShell cmdlets to manage cross-server communication relationships for elastic database transactions:
Use the following PowerShell cmdlets to manage cross-server communication relati
## Transactions for SQL Managed Instance
-> [!IMPORTANT]
-> Distributed transactions for Azure SQL Managed Instance are now generally available.
- Distributed transactions are supported across databases within multiple instances. When transactions cross managed instance boundaries, the participating instances need to be in a mutual security and communication relationship. This is done by creating a [Server Trust Group](../managed-instance/server-trust-group-overview.md) by using the Azure portal, Azure PowerShell, or the Azure CLI. If the instances are not on the same virtual network, you must configure [virtual network peering](../../virtual-network/virtual-network-peering-overview.md), and network security group inbound and outbound rules must allow ports 5024 and 11000-12000 on all participating virtual networks. ![Server Trust Groups on Azure Portal][3]
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/features-comparison.md
The following table lists the major features of SQL Server and provides informat
| [Trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql) | No | Yes, but only limited set of global trace flags. See [DBCC differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#dbcc) | | [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Yes, [Transactional and snapshot replication subscriber only](migrate-to-database-from-sql-server.md) | Yes, in [public preview](/sql/relational-databases/replication/replication-with-sql-database-managed-instance). See the constraints [here](../managed-instance/transact-sql-tsql-differences-sql-server.md#replication). | | [Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) | Yes - General Purpose, Business Critical, and Hyperscale (in preview) service tiers only| [Yes](transparent-data-encryption-tde-overview.md) |
-| Windows authentication | No | No |
+| Windows authentication | No | Yes, see [Windows Authentication for Azure Active Directory principals (Preview)](../managed-instance/winauth-azuread-overview.md). |
| [Windows Server Failover Clustering](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server) | No. Other techniques that provide [high availability](high-availability-sla.md) are included with every database. Disaster recovery is discussed in [Overview of business continuity with Azure SQL Database](business-continuity-high-availability-disaster-recover-hadr-overview.md). | No. Other techniques that provide [high availability](high-availability-sla.md) are included with every database. Disaster recovery is discussed in [Overview of business continuity with Azure SQL Database](business-continuity-high-availability-disaster-recover-hadr-overview.md). | ## Platform capabilities
azure-sql Maintenance Window Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window-configure.md
Title: Configure maintenance window (Preview)
+ Title: Configure maintenance window
description: Learn how to set the time when planned maintenance should be performed on your Azure SQL databases, elastic pools, and managed instance databases.
Previously updated : 03/23/2021 Last updated : 03/07/2022
-# Configure maintenance window (Preview)
+# Configure maintenance window
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-Configure the [maintenance window (Preview)](maintenance-window.md) for an Azure SQL database, elastic pool, or Azure SQL Managed Instance database during resource creation, or anytime after a resource is created.
+Configure the [maintenance window](maintenance-window.md) for an Azure SQL database, elastic pool, or Azure SQL Managed Instance database during resource creation, or anytime after a resource is created.
The *System default* maintenance window is 5PM to 8AM daily (local time of the Azure region the resource is located) to avoid peak business hours interruptions. If the *System default* maintenance window is not the best time, select one of the other available maintenance windows.
Be sure to delete unneeded resources after you're finished with them to avoid unnecessary charges.
## Next steps -- To learn more about maintenance window, see [Maintenance window (Preview)](maintenance-window.md).
+- To learn more about maintenance window, see [Maintenance window](maintenance-window.md).
- For more information, see [Maintenance window FAQ](maintenance-window-faq.yml). - To learn about optimizing performance, see [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](monitor-tune-overview.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
-+ Previously updated : 02/18/2022 Last updated : 03/07/2022
-# Maintenance window (Preview)
+# Maintenance window
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)] The maintenance window feature allows you to configure a maintenance schedule for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) resources, making impactful maintenance events predictable and less disruptive for your workload.
The maintenance window feature allows you to configure maintenance schedule for
> [!Note] > The maintenance window feature only protects from planned impact from upgrades or scheduled maintenance. It does not protect from all failover causes; exceptions that may cause short connection interruptions outside of a maintenance window include hardware failures, cluster load balancing, and database reconfigurations due to events like a change in database Service Level Objective.
+[Advance notifications (Preview)](advance-notifications.md) are available for databases configured to use a non-default maintenance window. Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance of any planned event.
+ ## Overview Azure periodically performs [planned maintenance](planned-maintenance.md) of SQL Database and SQL managed instance resources. During an Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations within respective availability SLAs for [SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database) and [SQL managed instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance).
Once the maintenance window selection is made and service configuration complete
## Advance notifications
-Maintenance notifications can be configured to alert you on upcoming planned maintenance events for your Azure SQL Database 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
+Maintenance notifications can be configured to alert you of upcoming planned maintenance events for your Azure SQL Database. The alerts arrive 24 hours in advance, at the time of maintenance, and when the maintenance is complete. For more information, see [Advance Notifications](advance-notifications.md).
## Feature availability
Choosing a maintenance window other than the default is currently available in the following regions:
| Australia East | Yes | Yes | Yes | | Australia Southeast | Yes | Yes | | | Brazil South | Yes | Yes | |
-| Brazil Southeast | Yes | | |
+| Brazil Southeast | Yes | Yes | |
| Canada Central | Yes | Yes | Yes | | Canada East | Yes | Yes | | | Central India | Yes | Yes | |
Choosing a maintenance window other than the default is currently available in the following regions:
| UK South | Yes | Yes | Yes | | UK West | Yes | Yes | | | US Gov Arizona | Yes | | |
-| US Gov Texas| Yes | | |
-| US Gov Virginia | Yes | | |
+| US Gov Texas| Yes | Yes | |
+| US Gov Virginia | Yes | Yes | |
| West Central US | Yes | Yes | | | West Europe | Yes | Yes | Yes | | West India | Yes | | |
Operations affecting the virtual cluster, like service upgrades and virtual clus
The serialization of virtual cluster management operations is general behavior that applies to the default maintenance policy as well. With a maintenance window schedule configured, the period between two adjacent windows can be a few days long. Submitted operations can also be on hold for a few days if the maintenance operation spans two windows. That is a very rare case, but creation of new instances or resizing of existing instances (if additional compute nodes are needed) may be blocked during this period.
+## Retrieving a list of maintenance events
+
+[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service designed to extend Azure Resource Management. The Azure Resource Graph Explorer provides efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
+
+You can use the Azure Resource Graph Explorer to query for maintenance events. For an introduction on how to run these queries, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
+
+To check for maintenance events for all SQL databases in your subscription, use the following sample query in Azure Resource Graph Explorer:
+
+```kusto
+servicehealthresources
+| where type =~ 'Microsoft.ResourceHealth/events'
+| extend impact = properties.Impact
+| extend impactedService = parse_json(impact[0]).ImpactedService
+| where impactedService =~ 'SQL Database'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
+| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+```
+
+To check for maintenance events for all managed instances in your subscription, use the following sample query in Azure Resource Graph Explorer:
+
+```kusto
+servicehealthresources
+| where type =~ 'Microsoft.ResourceHealth/events'
+| extend impact = properties.Impact
+| extend impactedService = parse_json(impact[0]).ImpactedService
+| where impactedService =~ 'SQL Managed Instance'
+| extend eventType = properties.EventType, status = properties.Status, description = properties.Title, trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority, impactStartTime = properties.ImpactStartTime, impactMitigationTime = properties.ImpactMitigationTime
+| where properties.Status == 'Active' and tolong(impactStartTime) > 1 and eventType == 'PlannedMaintenance'
+```
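The two queries above project a wide set of columns. If you only need a quick status view, the projection can be trimmed; the following sketch assumes the same `servicehealthresources` schema and property names used in the queries above, and is not part of the source article:

```kusto
servicehealthresources
| where type =~ 'Microsoft.ResourceHealth/events'
| where properties.EventType == 'PlannedMaintenance' and properties.Status == 'Active'
| project trackingId = properties.TrackingId,
          title = properties.Title,
          impactStartTime = properties.ImpactStartTime,
          impactMitigationTime = properties.ImpactMitigationTime
```

Because `project` is applied last, any of the `extend`ed columns from the fuller queries can be swapped into the projection as needed.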
+
+For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit [Azure Resource Graph sample queries for Azure Service Health](../../service-health/resource-graph-samples.md).
+ ## Next steps * [Configure maintenance window](maintenance-window-configure.md)
azure-sql Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/planned-maintenance.md
Previously updated : 3/23/2021 Last updated : 03/07/2022 # Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance
If your database is experiencing log-on failures, check the [Resource Health](..
## Maintenance window feature
-The maintenance Window feature allows for the configuration of predictable maintenance window schedules for eligible Azure SQL databases and SQL managed instances. See [Maintenance window](maintenance-window.md) for more information.
+The [maintenance window feature](maintenance-window.md) allows for the configuration of predictable maintenance window schedules for eligible Azure SQL databases and SQL managed instances. [Maintenance window advance notifications](../database/advance-notifications.md) are available for databases configured to use a non-default [maintenance window](maintenance-window.md). Maintenance windows and advance notifications for maintenance windows are generally available for Azure SQL Database. For Azure SQL Managed Instance, maintenance windows are generally available but advance notifications are in public preview.
+ ## Next steps
azure-sql Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/security-overview.md
IP firewall rules grant access to databases based on the originating IP address
### Authentication
-Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL Managed Instance support two types of authentication:
+Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL Managed Instance support SQL authentication and Azure AD authentication. SQL Managed Instance additionally supports Windows Authentication for Azure AD principals.
- **SQL authentication**:
Authentication is the process of proving the user is who they claim to be. Azure
A server admin called the **Active Directory administrator** must be created to use Azure AD authentication with SQL Database. For more information, see [Connecting to SQL Database By Using Azure Active Directory Authentication](authentication-aad-overview.md). Azure AD authentication supports both managed and federated accounts. The federated accounts support Windows users and groups for a customer domain federated with Azure AD.
- Additional Azure AD authentication options available are [Active Directory Universal Authentication for SQL Server Management Studio](authentication-mfa-ssms-overview.md) connections including [Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) and [Conditional Access](conditional-access-configure.md).
+ Additional Azure AD authentication options available are [Active Directory Universal Authentication for SQL Server Management Studio](authentication-mfa-ssms-overview.md) connections including [multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) and [Conditional Access](conditional-access-configure.md).
+
+- **Windows Authentication for Azure AD Principals (Preview)**:
+
+ [Kerberos authentication for Azure AD Principals](../managed-instance/winauth-azuread-overview.md) (Preview) enables Windows Authentication for Azure SQL Managed Instance. Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience and provides the basis for infrastructure modernization.
+
+ To enable Windows Authentication for Azure Active Directory (Azure AD) principals, you will turn your Azure AD tenant into an independent Kerberos realm and create an incoming trust in the customer domain. Learn [how Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos](../managed-instance/winauth-implementation-aad-kerberos.md).
> [!IMPORTANT] > Managing databases and servers within Azure is controlled by your portal user account's role assignments. For more information on this article, see [Azure role-based access control in Azure portal](../../role-based-access-control/overview.md). Controlling access with firewall rules does *not* apply to **SQL Managed Instance**. Please see the following article on [connecting to a managed instance](../managed-instance/connect-application-instance.md) for more information about the networking configuration needed.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 03/02/2022 Last updated : 03/07/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are currently in preview:
|[Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.| | [Instance pools](instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL Server instances to the cloud. | | [Link feature](link-feature.md)| Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance. |
-| [Maintenance window](../database/maintenance-window.md)| The maintenance window feature allows you to configure maintenance schedule for your Azure SQL Managed Instance. |
+| [Maintenance window advance notifications](../database/advance-notifications.md)| Advance notifications (preview) are available for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). Advance notifications are in preview for Azure SQL Managed Instance. |
| [Memory optimized premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. The memory optimized hardware generation offers higher memory to vCore ratios. | | [Migration with Log Replay Service](log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. | | [Premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. |
The following table lists the features of Azure SQL Managed Instance that have transitioned to general availability:
| Feature | GA Month | Details | | | | |
+|[Maintenance window](../database/maintenance-window.md)| March 2022 | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. [Maintenance window advance notifications](../database/advance-notifications.md), however, are in preview for Azure SQL Managed Instance.|
|[16 TB support in General Purpose](resource-limits.md)| November 2021 | Support for allocation up to 16 TB of space on SQL Managed Instance in the General Purpose service tier. | [Azure Active Directory-only authentication](../database/authentication-azure-ad-only-authentication.md) | November 2021 | It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users. | | [Distributed transactions](../database/elastic-transactions-overview.md) | November 2021 | Distributed database transactions for Azure SQL Managed Instance allow you to run distributed transactions that span several databases across instances. |
Learn about significant changes to the Azure SQL Managed Instance documentation.
| Changes | Details | | | |
+| **GA for maintenance window, preview for advance notifications** | The [maintenance window](../database/maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) (preview) are available for databases configured to use a non-default [maintenance window](../database/maintenance-window.md). |
|**Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). | | **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). | |||
Learn about significant changes to the Azure SQL Managed Instance documentation.
| **Log Replay Service** | It's now possible to migrate databases from SQL Server to Azure SQL Managed Instance using the Log Replay Service. To learn more, see [Migrate with Log Replay Service](log-replay-service-migrate.md). This feature is currently in preview. | | **Long-term backup retention** | Support for Long-term backup retention up to 10 years on Azure SQL Managed Instance. To learn more, see [Long-term backup retention](long-term-backup-retention-configure.md)| | **Machine Learning Services GA** | The Machine Learning Services for Azure SQL Managed Instance are now generally available (GA). To learn more, see [Machine Learning Services for SQL Managed Instance](machine-learning-services-overview.md).|
-| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance, currently in preview. To learn more, see [maintenance window](../database/maintenance-window.md).|
+| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. To learn more, see [maintenance window](../database/maintenance-window.md).|
| **Service Broker message exchange** | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see [Service Broker](/sql/database-engine/configure-windows/sql-server-service-broker). | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). | |||
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
SQL Managed Instance combines the best features that are available both in Azure
| | | |No hardware purchasing and management <br>No management overhead for managing underlying infrastructure <br>Quick provisioning and service scaling <br>Automated patching and version upgrade <br>Integration with other PaaS data services |99.99% uptime SLA <br>Built-in [high availability](../database/high-availability-sla.md) <br>Data protected with [automated backups](../database/automated-backups-overview.md) <br>Customer configurable backup retention period <br>User-initiated [backups](/sql/t-sql/statements/backup-transact-sql?preserve-view=true&view=azuresqldb-mi-current) <br>[Point-in-time database restore](../database/recovery-using-backups.md#point-in-time-restore) capability | |**Security and compliance** | **Management**|
-|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> [Azure AD server principals (logins)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
+|Isolated environment ([VNet integration](connectivity-architecture-overview.md), single tenant service, dedicated compute and storage) <br>[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)<br>[Azure Active Directory (Azure AD) authentication](../database/authentication-aad-overview.md), single sign-on support <br> [Azure AD server principals (logins)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true) <br>[What is Windows Authentication for Azure AD principals (Preview)](winauth-azuread-overview.md) <br>Adheres to compliance standards same as Azure SQL Database <br>[SQL auditing](auditing-configure.md) <br>[Advanced Threat Protection](threat-detection-configure.md) |Azure Resource Manager API for automating service provisioning and scaling <br>Azure portal functionality for manual service provisioning and scaling <br>Data Migration Service
> [!IMPORTANT] > Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the [Microsoft Azure Compliance Offerings](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=44bbae63-bf4d-4e3b-9d3d-c96fb25ec363&tab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb&docTab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb_FAQ_and_White_Papers), where you can find the most current list of SQL Managed Instance compliance certifications, listed under **SQL Database**.
SQL Managed Instance enables you to centrally manage identities of database user
### Authentication
-SQL Managed Instance authentication refers to how users prove their identity when connecting to the database. SQL Managed Instance supports two types of authentication:
+SQL Managed Instance authentication refers to how users prove their identity when connecting to the database. SQL Managed Instance supports three types of authentication:
- **SQL Authentication**:
SQL Managed Instance authentication refers to how users prove their identity whe
This authentication method uses identities managed by Azure Active Directory and is supported for managed and integrated domains. Use Active Directory authentication (integrated security) [whenever possible](/sql/relational-databases/security/choose-an-authentication-mode).
+- **Windows Authentication for Azure AD Principals (Preview)**:
+
+ [Kerberos authentication for Azure AD Principals](../managed-instance/winauth-azuread-overview.md) (Preview) enables Windows Authentication for Azure SQL Managed Instance. Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience and provides the basis for infrastructure modernization.
+ ### Authorization Authorization refers to what a user can do within a database in Azure SQL Managed Instance, and is controlled by your user account's database role memberships and object-level permissions. SQL Managed Instance has the same authorization capabilities as SQL Server 2017.
Some key differences:
- High availability is built in and pre-configured using technology similar to [Always On availability groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server). - There are only automated backups and point-in-time restore. Customers can initiate `copy-only` backups that do not interfere with the automatic backup chain. - Specifying full physical paths is unsupported, so all corresponding scenarios have to be supported differently: RESTORE DB does not support WITH MOVE, CREATE DB doesn't allow physical paths, BULK INSERT works with Azure blobs only, etc.-- SQL Managed Instance supports [Azure AD authentication](../database/authentication-aad-overview.md) as a cloud alternative to Windows authentication.
+- SQL Managed Instance supports [Azure AD authentication](../database/authentication-aad-overview.md) and [Windows Authentication for Azure Active Directory principals (Preview)](winauth-azuread-overview.md).
- SQL Managed Instance automatically manages XTP filegroups and files for databases containing In-Memory OLTP objects. - SQL Managed Instance supports SQL Server Integration Services (SSIS) and can host an SSIS catalog (SSISDB) that stores SSIS packages, but they are executed on a managed Azure-SSIS Integration Runtime (IR) in Azure Data Factory. See [Create Azure-SSIS IR in Data Factory](../../data-factory/create-azure-ssis-integration-runtime.md). To compare the SSIS features, see [Compare SQL Database to SQL Managed Instance](../../data-factory/create-azure-ssis-integration-runtime.md#comparison-of-sql-database-and-sql-managed-instance).
backup Backup Azure Manage Mars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-manage-mars.md
Copy the PIN. The PIN is valid for only five minutes.
This section discusses a scenario where your source machine that was protected with MARS is no longer available because it was deleted, corrupted, infected with malware/ransomware, or decommissioned.
-For these machines, the Azure Backup service ensures that the most recent recovery point doesn't expire (that is, doesn't get pruned) according to the retention rules specified in the backup policy. Therefore, you can safely restore the machine. Consider the following scenarios you can perform on the backed-up data:
+For these machines, the Azure Backup service ensures that the latest recovery point with the longest retention doesn't expire (that is, doesn't get pruned) according to the retention rules specified in the backup policy. Therefore, you can safely restore the machine by using this recovery point. Consider the following scenarios for the backed-up data:
### Scenario 1: The source machine is unavailable, and you no longer need to retain backup data
Learn more about [other report tabs](configure-reports.md) and receiving those [
- For information about supported scenarios and limitations, refer to the [Support Matrix for the MARS Agent](./backup-support-matrix-mars-agent.md). - Learn more about [On demand backup policy retention behavior](backup-windows-with-mars-agent.md#set-up-on-demand-backup-policy-retention-behavior).-- For more frequently asked questions, see the [MARS agent FAQ](backup-azure-file-folder-backup-faq.yml).
+- For more frequently asked questions, see the [MARS agent FAQ](backup-azure-file-folder-backup-faq.yml).
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
Title: Geo-code mapping description: Learn about geo-codes mapped with the respective regions. Previously updated : 09/17/2021 Last updated : 03/07/2022 # Geo-code mapping
This sample XML provides you an insight about the geo-codes mapped with the resp
<GeoCodeRegionNameMap GeoCode="gec" RegionName="Germany Central" /> <GeoCodeRegionNameMap GeoCode="gne" RegionName="Germany Northeast" /> <GeoCodeRegionNameMap GeoCode="krc" RegionName="Korea Central" />
- <GeoCodeRegionNameMap GeoCode="krsr" RegionName="Korea Central" />
<GeoCodeRegionNameMap GeoCode="frc" RegionName="France Central" />
- <GeoCodeRegionNameMap GeoCode="frcr" RegionName="France South" />
<GeoCodeRegionNameMap GeoCode="frs" RegionName="France South" />
- <GeoCodeRegionNameMap GeoCode="frsr" RegionName="France Central" />
- <GeoCodeRegionNameMap GeoCode="scur" RegionName="North Central US" />
- <GeoCodeRegionNameMap GeoCode="ncur" RegionName="South Central US" />
- <GeoCodeRegionNameMap GeoCode="eusr" RegionName="West US" />
- <GeoCodeRegionNameMap GeoCode="wusr" RegionName="East US" />
- <GeoCodeRegionNameMap GeoCode="ner" RegionName="West Europe" />
- <GeoCodeRegionNameMap GeoCode="wer" RegionName="North Europe" />
- <GeoCodeRegionNameMap GeoCode="ear" RegionName="Southeast Asia" />
- <GeoCodeRegionNameMap GeoCode="sear" RegionName="East Asia" />
- <GeoCodeRegionNameMap GeoCode="cusr" RegionName="East US 2" />
- <GeoCodeRegionNameMap GeoCode="eu2r" RegionName="Central US" />
- <GeoCodeRegionNameMap GeoCode="bjbr" RegionName="China East" />
- <GeoCodeRegionNameMap GeoCode="shar" RegionName="China North" />
- <GeoCodeRegionNameMap GeoCode="jper" RegionName="Japan West" />
- <GeoCodeRegionNameMap GeoCode="jpwr" RegionName="Japan East" />
- <GeoCodeRegionNameMap GeoCode="brsr" RegionName="South Central US" />
- <GeoCodeRegionNameMap GeoCode="aer" RegionName="Australia Southeast" />
- <GeoCodeRegionNameMap GeoCode="aser" RegionName="Australia East" />
- <GeoCodeRegionNameMap GeoCode="ugvr" RegionName="USGov Texas" />
- <GeoCodeRegionNameMap GeoCode="ugir" RegionName="USGov Virginia" />
- <GeoCodeRegionNameMap GeoCode="incr" RegionName="South India" />
- <GeoCodeRegionNameMap GeoCode="insr" RegionName="Central India" />
- <GeoCodeRegionNameMap GeoCode="cncr" RegionName="Canada East" />
- <GeoCodeRegionNameMap GeoCode="cner" RegionName="Canada Central" />
- <GeoCodeRegionNameMap GeoCode="wcur" RegionName="West US 2" />
- <GeoCodeRegionNameMap GeoCode="wu2r" RegionName="West Central US" />
- <GeoCodeRegionNameMap GeoCode="ukwr" RegionName="UK South" />
- <GeoCodeRegionNameMap GeoCode="uksr" RegionName="UK West" />
- <GeoCodeRegionNameMap GeoCode="ccyr" RegionName="East US 2 EUAP" />
- <GeoCodeRegionNameMap GeoCode="ecyr" RegionName="Central US EUAP" />
- <GeoCodeRegionNameMap GeoCode="gecr" RegionName="Germany Northeast" />
- <GeoCodeRegionNameMap GeoCode="gner" RegionName="Germany Central" />
- <GeoCodeRegionNameMap GeoCode="krcr" RegionName="Korea South" />
<GeoCodeRegionNameMap GeoCode="krs" RegionName="Korea South" /> <GeoCodeRegionNameMap GeoCode="ugt" RegionName="USGov Texas" /> <GeoCodeRegionNameMap GeoCode="uga" RegionName="USGov Arizona" /> <GeoCodeRegionNameMap GeoCode="udc" RegionName="USDoD Central" /> <GeoCodeRegionNameMap GeoCode="ude" RegionName="USDoD East" />
- <GeoCodeRegionNameMap GeoCode="ugtr" RegionName="USGov Arizona" />
- <GeoCodeRegionNameMap GeoCode="ugar" RegionName="USGov Texas" />
- <GeoCodeRegionNameMap GeoCode="udcr" RegionName="USDoD East" />
- <GeoCodeRegionNameMap GeoCode="uder" RegionName="USDoD Central" />
<GeoCodeRegionNameMap GeoCode="acl" RegionName="Australia Central" /> <GeoCodeRegionNameMap GeoCode="acl2" RegionName="Australia Central 2" />
- <GeoCodeRegionNameMap GeoCode="aclr" RegionName="Australia Central 2" />
- <GeoCodeRegionNameMap GeoCode="ac2r" RegionName="Australia Central" />
<GeoCodeRegionNameMap GeoCode="bjb2" RegionName="China North 2" /> <GeoCodeRegionNameMap GeoCode="sha2" RegionName="China East 2" />
- <GeoCodeRegionNameMap GeoCode="bj2r" RegionName="China East 2" />
- <GeoCodeRegionNameMap GeoCode="sh2r" RegionName="China North 2" />
<GeoCodeRegionNameMap GeoCode="uac" RegionName="UAE Central" />
- <GeoCodeRegionNameMap GeoCode="uacr" RegionName="UAE North" />
<GeoCodeRegionNameMap GeoCode="uan" RegionName="UAE North" />
- <GeoCodeRegionNameMap GeoCode="uanr" RegionName="UAE Central" />
<GeoCodeRegionNameMap GeoCode="san" RegionName="South Africa North" />
- <GeoCodeRegionNameMap GeoCode="sanr" RegionName="South Africa West" />
<GeoCodeRegionNameMap GeoCode="saw" RegionName="South Africa West" />
- <GeoCodeRegionNameMap GeoCode="sawr" RegionName="South Africa North" />
<GeoCodeRegionNameMap GeoCode="rxe" RegionName="USSec East" />
- <GeoCodeRegionNameMap GeoCode="rxer" RegionName="USSec West" />
<GeoCodeRegionNameMap GeoCode="rxw" RegionName="USSec West" />
- <GeoCodeRegionNameMap GeoCode="rxwr" RegionName="USSec East" />
<GeoCodeRegionNameMap GeoCode="exe" RegionName="USNat East" />
- <GeoCodeRegionNameMap GeoCode="exer" RegionName="USNat West" />
<GeoCodeRegionNameMap GeoCode="exw" RegionName="USNat West" />
- <GeoCodeRegionNameMap GeoCode="exwr" RegionName="USNat East" />
<GeoCodeRegionNameMap GeoCode="inw" RegionName="West India" />
- <GeoCodeRegionNameMap GeoCode="inwr" RegionName="South India" />
<GeoCodeRegionNameMap GeoCode="gwc" RegionName="Germany West Central" />
- <GeoCodeRegionNameMap GeoCode="gwcr" RegionName="Germany North" />
<GeoCodeRegionNameMap GeoCode="gn" RegionName="Germany North" />
- <GeoCodeRegionNameMap GeoCode="gnr" RegionName="Germany West Central" />
<GeoCodeRegionNameMap GeoCode="szn" RegionName="Switzerland North" />
- <GeoCodeRegionNameMap GeoCode="sznr" RegionName="Switzerland West" />
<GeoCodeRegionNameMap GeoCode="szw" RegionName="Switzerland West" />
- <GeoCodeRegionNameMap GeoCode="szwr" RegionName="Switzerland North" />
<GeoCodeRegionNameMap GeoCode="nww" RegionName="Norway West" />
- <GeoCodeRegionNameMap GeoCode="nwwr" RegionName="Norway East" />
<GeoCodeRegionNameMap GeoCode="nwe" RegionName="Norway East" />
- <GeoCodeRegionNameMap GeoCode="nwer" RegionName="Norway West" />
<GeoCodeRegionNameMap GeoCode="sdc" RegionName="Sweden Central" />
- <GeoCodeRegionNameMap GeoCode="sdcr" RegionName="Sweden South" />
<GeoCodeRegionNameMap GeoCode="sds" RegionName="Sweden South" />
- <GeoCodeRegionNameMap GeoCode="sdsr" RegionName="Sweden Central" />
<GeoCodeRegionNameMap GeoCode="bse" RegionName="Brazil Southeast" />
- <GeoCodeRegionNameMap GeoCode="bser" RegionName="Brazil South" />
<GeoCodeRegionNameMap GeoCode="wus3" RegionName="West US 3" />
- <GeoCodeRegionNameMap GeoCode="wu3r" RegionName="East US" />
<GeoCodeRegionNameMap GeoCode="jic" RegionName="Jio India Central" />
- <GeoCodeRegionNameMap GeoCode="jicr" RegionName="Jio India West" />
<GeoCodeRegionNameMap GeoCode="jiw" RegionName="Jio India West" />
- <GeoCodeRegionNameMap GeoCode="jiwr" RegionName="Jio India Central" />
</GeoCodeList> ```
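If you work with this mapping file programmatically, here is a minimal sketch (not part of the article; it uses only the Python standard library, and the element and attribute names are taken from the sample above) of loading it into a geo-code lookup table:

```python
import xml.etree.ElementTree as ET

# A trimmed sample in the same shape as the geo-code mapping XML above.
sample = """<GeoCodeList>
  <GeoCodeRegionNameMap GeoCode="krc" RegionName="Korea Central" />
  <GeoCodeRegionNameMap GeoCode="frc" RegionName="France Central" />
  <GeoCodeRegionNameMap GeoCode="uan" RegionName="UAE North" />
</GeoCodeList>"""

root = ET.fromstring(sample)
# Build a geo-code -> region-name dictionary from the map entries.
geo_map = {
    entry.get("GeoCode"): entry.get("RegionName")
    for entry in root.iter("GeoCodeRegionNameMap")
}
print(geo_map["krc"])  # Korea Central
```

The same pattern works against the full file by replacing `ET.fromstring(sample)` with `ET.parse(path).getroot()`.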
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion currently supports the following keyboard layouts inside the VM:
* sv-se-qwerty * tr-tr-qwerty
-To establish the correct key mappings for your target language, you must set either your language on your local computer or your language inside the target VM to English (United States). That is, your local computer language must be set to English (United States) while your target VM language is set to your target language, or vice versa. You can add English (United States) language to your machine in your computer settings.
+To establish the correct key mappings for your target language, you must set either the keyboard layout on your local computer to English (United States) or the keyboard layout inside the target VM to English (United States). That is, the keyboard layout on your local computer must be set to English (United States) while the keyboard layout on your target VM is set to your target language, or vice versa.
+
+To set English (United States) as your keyboard layout on a Windows workstation, navigate to Settings > Time & Language > Language & Region. Under "Preferred languages," select "Add a language" and add English (United States). Your keyboard layouts will then appear on your toolbar. To switch to the English (United States) keyboard layout, select "ENG" on the toolbar or press Windows + Spacebar to open the keyboard layout list.
### <a name="res"></a>What is the maximum screen resolution supported via Bastion?
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
Previously updated : 03/03/2022 Last updated : 03/07/2022 # Customer intent: I want to upload or download files using Bastion.
# Upload or download files using the native client (Preview)
-Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md).
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients.
* File transfers are supported using the native client only. You can't upload or download files using PowerShell or via the Azure portal. * To both [upload and download files](#rdp), you must use the Windows native client and RDP.
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-js-get-started.md
containerList.forEach(function (val, index) {
const task = batchClient.task.add(jobId, taskConfig, function (error, result) { if (error !== null) {
- console.log("Error occured while creating task for container " + containerName + ". Details : " + error.response);
+ console.log("Error occurred while creating task for container " + containerName + ". Details : " + error.response);
} else { console.log("Task for container : " + containerName + " submitted successfully");
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 01/07/2022 Last updated : 03/07/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## February 2022
+
+- Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
+ ## December 2021 * The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the General Available version of the API(v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Based on ongoing customer feedback, we have increased the character limit per document for Text Analytics for health from 5,120 to 30,720. * Azure Cognitive Service for Language release, with support for:
- * [Question Answering (now Generally Available)](question-answering/overview.md)
- * [Sentiment Analysis and opinion mining](sentiment-opinion-mining/overview.md)
- * [Key Phrase Extraction](key-phrase-extraction/overview.md)
- * [Named Entity Recognition (NER), Personally Identifying Information (PII)](named-entity-recognition/overview.md)
- * [Language Detection](language-detection/overview.md)
- * [Text Analytics for health](text-analytics-for-health/overview.md)
- * [Text summarization preview](text-summarization/overview.md)
- * [Custom Named Entity Recognition (Custom NER) preview](custom-named-entity-recognition/overview.md)
- * [Custom Text Classification preview](custom-classification/overview.md)
- * [Conversational Language Understanding preview](conversational-language-understanding/overview.md)
+
+ * [Question Answering (now Generally Available)](question-answering/overview.md)
+ * [Sentiment Analysis and opinion mining](sentiment-opinion-mining/overview.md)
+ * [Key Phrase Extraction](key-phrase-extraction/overview.md)
+ * [Named Entity Recognition (NER), Personally Identifying Information (PII)](named-entity-recognition/overview.md)
+ * [Language Detection](language-detection/overview.md)
+ * [Text Analytics for health](text-analytics-for-health/overview.md)
+ * [Text summarization preview](text-summarization/overview.md)
+ * [Custom Named Entity Recognition (Custom NER) preview](custom-named-entity-recognition/overview.md)
+ * [Custom Text Classification preview](custom-classification/overview.md)
+ * [Conversational Language Understanding preview](conversational-language-understanding/overview.md)
* Preview model version `2021-10-01-preview` for [Sentiment Analysis and Opinion mining](sentiment-opinion-mining/overview.md), which provides:
- * Improved prediction quality.
- * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
- * For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/).
- * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls, using the model version parameter.
+
+ * Improved prediction quality.
+ * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
+ * For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/).
+ * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls, using the model version parameter.
* SDK support for sending requests to custom models:
- * [Custom Named Entity Recognition](custom-named-entity-recognition/how-to/call-api.md?tabs=client#use-the-client-libraries)
- * [Custom text classification](custom-classification/how-to/call-api.md?tabs=api#use-the-client-libraries)
- * [Custom language understanding](conversational-language-understanding/how-to/deploy-query-model.md#use-the-client-libraries-azure-sdk)
-
+
+ * [Custom Named Entity Recognition](custom-named-entity-recognition/how-to/call-api.md?tabs=client#use-the-client-libraries)
+ * [Custom text classification](custom-classification/how-to/call-api.md?tabs=api#use-the-client-libraries)
+ * [Custom language understanding](conversational-language-understanding/how-to/deploy-query-model.md#use-the-client-libraries-azure-sdk)
+ ## Next steps
-* [What is Azure Cognitive Service for Language?](overview.md)
+* [What is Azure Cognitive Service for Language?](overview.md)
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
The data to the Ledger is sent through TLS 1.2 connection and the TLS 1.2 connec
### Ledger storage
-confidential ledgers are created as blocks in blob storage containers belonging to an Azure Storage account. Transaction data can either be stored as encrypted or in plaintext depending on your needs. When you create a Ledger, you will associate a Storage Account using the steps described in [Register a confidential ledger Service Principal](register-ledger-service-principal.md).
+Confidential ledgers are created as blocks in blob storage containers belonging to an Azure Storage account. Transaction data can either be stored encrypted or in plaintext depending on your needs. When you create a Ledger, you will associate a Storage Account using the steps described in [Register a confidential ledger Service Principal](register-ledger-service-principal.md).
The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane), and can be called directly by your application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete.
The Functional APIs allow direct interaction with your instantiated confidential
## Preview Limitations - Once a confidential ledger is created, you cannot change the Ledger type.-- confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes.
+- Azure confidential ledger does not support standard Azure Disaster Recovery at this time. However, Azure confidential ledger offers built-in redundancy within the Azure region, as the confidential ledger runs on multiple independent nodes.
- Azure confidential ledger deletion leads to a "hard delete", so your data will not be recoverable after deletion. - Azure confidential ledger names must be globally unique. Ledgers with the same name, irrespective of their type, are not allowed.
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
To create a service principal that can authenticate with a container registry in
For example steps, see [Pull images from a container registry to an AKS cluster in a different AD tenant](authenticate-aks-cross-tenant.md).
+## Service principal renewal
+
+The service principal is created with one-year validity. You can extend the validity beyond one year, or provide an expiry date of your choice, by using the [`az ad sp credential reset`](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
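As a sketch of that renewal step (the app ID below is a placeholder, and you should confirm the available parameters for your Azure CLI version with `az ad sp credential reset --help`):

```azurecli
# Hypothetical application (client) ID -- replace with your own.
APP_ID="00000000-0000-0000-0000-000000000000"

# Extend the credential's validity to two years.
az ad sp credential reset --id "$APP_ID" --years 2

# Or set an explicit expiry date instead.
az ad sp credential reset --id "$APP_ID" --end-date "2023-12-31"
```

Either form returns a new password, so update any stored credentials after running it.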
+ ## Next steps * See the [authentication overview](container-registry-authentication.md) for other scenarios to authenticate with an Azure container registry.
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
The admin account is currently required for some scenarios to deploy an image fr
> The admin account is designed for a single user to access the registry, mainly for testing purposes. We do not recommend sharing the admin account credentials among multiple users. All users authenticating with the admin account appear as a single user with push and pull access to the registry. Changing or disabling this account disables registry access for all users who use its credentials. Individual identity is recommended for users and service principals for headless scenarios. >
-The admin account is provided with two passwords, both of which can be regenerated. Two passwords allow you to maintain connection to the registry by using one password while you regenerate the other. If the admin account is enabled, you can pass the username and either password to the `docker login` command when prompted for basic authentication to the registry. For example:
+The admin account is provided with two passwords, both of which can be regenerated. New admin account passwords are available immediately; regenerated passwords take 60 seconds to replicate and become available. Two passwords allow you to maintain connection to the registry by using one password while you regenerate the other. If the admin account is enabled, you can pass the username and either password to the `docker login` command when prompted for basic authentication to the registry. For example:
``` docker login myregistry.azurecr.io
container-registry Container Registry Repository Scoped Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md
This feature is available in the **Premium** container registry service tier. Fo
To configure repository-scoped permissions, you create a *token* with an associated *scope map*.
-* A **token** along with a generated password lets the user authenticate with the registry. You can set an expiration date for a token password, or disable a token at any time.
+* A **token** along with a generated password lets the user authenticate with the registry. You can set an expiration date for a token password, or disable a token at any time.
After authenticating with a token, the user or service can perform one or more *actions* scoped to one or more repositories.
After the token is validated and created, token details appear in the **Tokens**
### Add token password
-To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one.
+To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one. New token passwords are available immediately; regenerated token passwords take 60 seconds to replicate and become available.
1. In the portal, navigate to your container registry. 1. Under **Repository permissions**, select **Tokens (Preview)**, and select a token.
az acr token list --registry myregistry --output table
### Regenerate token passwords
-If you didn't generate a token password, or you want to generate new passwords, run the [az acr token credential generate][az-acr-token-credential-generate] command.
+If you didn't generate a token password, or you want to generate new passwords, run the [az acr token credential generate][az-acr-token-credential-generate] command. Regenerated token passwords take 60 seconds to replicate and become available.
The following example generates a new value for password1 for the *MyToken* token, with an expiration period of 30 days. It stores the password in the environment variable `TOKEN_PWD`. This example is formatted for the bash shell.
In the portal, select the token in the **Tokens (Preview)** screen, and select *
[az-acr-token-delete]: /cli/azure/acr/token/#az_acr_token_delete [az-acr-token-create]: /cli/azure/acr/token/#az_acr_token_create [az-acr-token-update]: /cli/azure/acr/token/#az_acr_token_update
-[az-acr-token-credential-generate]: /cli/azure/acr/token/credential/#az_acr_token_credential_generate
+[az-acr-token-credential-generate]: /cli/azure/acr/token/credential/#az_acr_token_credential_generate
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
curl https://cacert.omniroot.com/bc2025.crt > bc2025.crt
keytool -importcert -alias bc2025ca -file bc2025.crt # Install the Cassandra libraries in order to get CQLSH:
-echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
+echo "deb https://downloads.apache.org/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add - sudo apt-get update sudo apt-get install cassandra
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
When creating the account using the Azure portal, set the **Apply Free Tier Disc
### ARM template
-To create a free tier account by using an ARM template, set the property`"enableFreeTier": true`. For the complete template, see deploy an [ARM template with free tier](manage-with-templates.md#free-tier) example.
+To create a free tier account by using an ARM template, set the property `"enableFreeTier": true`. For the complete template, see deploy an [ARM template with free tier](manage-with-templates.md#free-tier) example.
### CLI
az cosmosdb create \
-g "MyResourcegroup" \ --enable-free-tier true \ --default-consistency-level "Session"
-
``` ### PowerShell
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Each external transformation activity that utilizes an external compute engine h
Data Flow activities are executed on their associated Azure integration runtime. The Spark compute utilized by Data Flows are determined by the data flow properties in your Azure IR, and are fully managed by the service. ## Integration Runtime in CI/CD
-Integration runtimes don't change often and are similar across all stages in your CI/CD. Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can then use this shared factory in all of your environments as a linked integration runtime type.
+Integration runtimes don't change often and are similar across all stages in your CI/CD. Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a dedicated factory just to contain the shared integration runtimes. You can then use this shared factory in all of your environments as a linked integration runtime type.
## Next steps
data-factory Connector Quickbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md
Last updated 02/28/2022
This article outlines how to use Data Flow to transform data in Quickbase (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+ ## Supported capabilities This Quickbase connector is supported for the following activities:
data-factory Connector Smartsheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md
Last updated 02/28/2022
This article outlines how to use Data Flow to transform data in Smartsheet (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+ ## Supported capabilities This Smartsheet connector is supported for the following activities:
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
Last updated 02/25/2022
This article outlines how to use Data Flow to transform data in TeamDesk (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+ ## Supported capabilities This TeamDesk connector is supported for the following activities:
data-factory Connector Zendesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md
Last updated 02/28/2022
This article outlines how to use Data Flow to transform data in Zendesk (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+ ## Supported capabilities This Zendesk connector is supported for the following activities:
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
The following table provides a summary of known issues in this release.
| No. | Feature | Issue | Workaround/comments | | | | | |
-|**1.**|Preview features |For this release, the following features are available in preview: <ul><li>Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. </li><li>VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only.</li><li>Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R.</li></ul> |These features will be generally available in later releases. |
+|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
|**2.**|Update |For a two-node cluster, in rare instances the update may fail. | If the update fails and you see a message indicating that updates are available, retry updating your device. If the update fails and no updates are available, and your device continues to be in maintenance mode, contact Microsoft Support to determine next steps. |
-|**3.**|Wi-Fi |Wi-Fi does not work on Azure Stack Edge Pro 2 in this release. | This functionality will be available in a future release. |
+|**3.**|Wi-Fi |Wi-Fi does not work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
|**4.**|VPN |VPN feature shows up in the local web UI but this feature is not supported for this device. | This issue will be addressed in a future release. | ## Known issues from previous releases
The following table provides a summary of known issues carried over from the pre
| No. | Feature | Issue | Workaround/comments | | | | | |
-| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ol><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ol> |
-| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<ol><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ol>These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> - In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> - Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> - Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> - Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".<br>After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs.<br>These actions can result in the updated sections of the blob not getting updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied| |**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
-|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<ul><li> Only block blobs are supported. Page blobs are not supported.</li><li>There is no snapshot or copy API support.</li><li> Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.</li></ul>||
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs are not supported.<br> - There is no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.| |**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You will need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.| |**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).|
The following table provides a summary of known issues carried over from the pre
|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. |
-|**23.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <ol><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li><ol> |
+|**23.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
|**24.**|GPU VMs |Prior to this release, GPU VM lifecycle was not managed in the update flow. Hence, when updating to 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update, are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` before the update, are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, higher the chances that Kubernetes will take over the GPUs. | |**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
databox-online Azure Stack Edge Pro 2 Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-activate.md
After you've activated the device, the next step is to deploy workloads.
- Make sure that you create a Device resource for Azure Network Function Manager (NFM) that is linked to the Azure Stack Edge resource. The device resource aggregates all the network functions deployed on Azure Stack Edge device. For detailed instructions, see [Tutorial: Create a Network Function Manager Device resource (Preview)](../network-function-manager/create-device.md). - You can then deploy Network Function Manager as per the instructions in [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md). - To deploy IoT Edge and Kubernetes workloads:
- - You'll need to first configure compute as described in [Tutorial: Configure compute on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-configure-compute.md). This step creates a Kubernetes cluster that acts as the hosting platform for IoT Edge on your device.
+ - You'll need to first configure compute as described in [Tutorial: Configure compute on Azure Stack Edge Pro 2 device](azure-stack-edge-pro-2-deploy-configure-compute.md). This step creates a Kubernetes cluster that acts as the hosting platform for IoT Edge on your device.
- After a Kubernetes cluster is created on your Azure Stack Edge device, you can deploy application workloads on this cluster via any of the following methods: - Native access via `kubectl`
In this tutorial, you learned about:
To learn how to deploy workloads on your Azure Stack Edge device, see: > [!div class="nextstepaction"]
-> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge](./azure-stack-edge-gpu-deploy-configure-compute.md)
+> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge](./azure-stack-edge-pro-2-deploy-configure-compute.md)
databox-online Azure Stack Edge Pro 2 Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-checklist.md
Previously updated : 02/18/2022 Last updated : 03/04/2022 zone_pivot_groups: azure-stack-edge-device-deployment
Use the following checklist to ensure you have this information after you've p
| Stage | Parameter | Details | |--|-|--|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
+| Device management | - Azure subscription. <br> - Resource providers registered. <br> - Azure Storage account.|- Enabled for Azure Stack Edge, owner or contributor access. <br> - In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads. <br> - Need access credentials. |
| Device installation | One power cable in the package. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least one X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. </li><li>At least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.</li>| Customer needs to procure these cables.<br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| | - At least one 1-GbE RJ-45 network cable for Port 1. <br> - At least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. <br> - At least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.| Customer needs to procure these cables.<br><br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
+| First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
-| Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <br> - Port 1 is used to configure management settings only. One or more data ports can be connected and configured. <br> - At least one data network interface from among Port 2 to Port 4 needs to be connected to the Internet (with connectivity to Azure). <br> - DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | - Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service. <br> - Require one additional IP for each extra service or module that you'll deploy.| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings |Web proxy server IP/FQDN, port |HTTPS URLs are not supported. |
| Firewall and port settings | If using firewall, make sure the [listed URL patterns and ports](azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules) are allowed for device IPs. | |
-| (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isnΓÇÖt available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
+| (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isn't available, public NTP servers can be configured. |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, public windows update server is used.|
+| Device settings | - Device fully qualified domain name (FQDN). <br> - DNS domain. | |
| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. | | Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
Use the following checklist to ensure you have this information after you've p
| Stage | Parameter | Details | |--|-|--|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
-| Device installation | One power cable in the package. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least two 1-GbE RJ-45 network cable for Port 1 on the two device nodes </li><li> You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. </li><li> You would also need at least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.</li>| Customer needs to procure these cables and switches.<br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device management | - Azure subscription <br> - Resource providers registered <br> - Azure Storage account|- Enabled for Azure Stack Edge, owner or contributor access. <br> - In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads. <br> - Need access credentials |
+| Device installation | One power cable in the package per device node. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
+| | - At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes. <br> - You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. <br> - You would also need at least one 10/1-GbE network switch to connect Port 1 and Port 2. You would need a 100/10-GbE switch to connect Port 3 or Port 4 network interface to the Internet for data.| Customer needs to procure these cables and switches. Exact number of cables and switches would depend on the network topology that you deploy. <br><br> For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
+| First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. | |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
-| Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Device comes with 2 x 10/1-GbE network ports, Port 1 and Port 2. Device also has 2 x 100-GbE network ports, Port 3 and Port 4. <br> - Port 1 is used for initial configuration. Port 2, Port 3, and Port 4 are also connected and configured. <br> - At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure). <br> - DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | - Require 3 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service. <br> - Require one additional IP for each extra service or module that you'll deploy.| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | Web proxy server IP/FQDN, port| HTTPS URLs are not supported. |
| Firewall and port settings | If using firewall, make sure the [listed URL patterns and ports](azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules) are allowed for device IPs. | | | (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isn't available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, public windows update server is used.|
+| Device settings | - Device fully qualified domain name (FQDN) <br> - DNS domain| |
| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. | | Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Previously updated : 03/01/2022 Last updated : 03/04/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-1.png)
- On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces. Port 1 is automatically configured as a management-only port, and Port 2 to Port 4 are all data ports. For a new device, the **Network** page is as shown below.
+ On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces. Port 1 is used for the initial configuration of the device. For a new device, the **Network** page is as shown below.
![Screenshot of the Network page in the local web UI of an Azure Stack Edge device whose network isn't configured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-2.png)
Follow these steps to configure the network for your device.
* Port 3 and Port 4 are reserved for Network Function Manager workload deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md). * If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned. * If DHCP isn't enabled, you can assign static IPs if needed.
- * You can configure your network interface as IPv4.
- * <!--ENGG TO VERIFY --> Network Interface Card (NIC) Teaming or link aggregation isn't supported with Azure Stack Edge.
- * <!--ENGG TO VERIFY --> In this release, the 100-GbE interfaces aren't configured for RDMA mode.
* Serial number for any port corresponds to the node serial number. Once the device network is configured, the page updates as shown below.
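When DHCP isn't enabled and you assign static IPs, the assignment needs an IP address, subnet, gateway, and DNS, as the bullets above note. A small sanity-check sketch for such a static IPv4 configuration (the addresses are hypothetical; only IPv4 is supported on these interfaces):

```python
import ipaddress

def validate_static_config(ip: str, prefix_len: int, gateway: str, dns: str):
    """Sanity-check a static IPv4 assignment: the interface IP and the
    default gateway should sit in the same subnet, and every value
    must parse as an IPv4 address."""
    network = ipaddress.IPv4Network(f"{ip}/{prefix_len}", strict=False)
    problems = []
    if ipaddress.IPv4Address(gateway) not in network:
        problems.append("gateway is outside the interface subnet")
    ipaddress.IPv4Address(dns)  # raises ValueError if malformed
    return problems

# Hypothetical assignment on a 10.0.0.0/24 network:
print(validate_static_config("10.0.0.5", 24, "10.0.0.1", "10.0.0.2"))  # []
```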
Follow these steps to configure the network for your device.
![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-1.png)
- On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces. Port 1 is automatically configured as a management-only port, and Port 2 to Port 4 are all data ports. Though Port 6 shows up in the local UI as the Wi-Fi port, the Wi-Fi functionality isn't available in this release.
+ On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces.
For a new device, the **Network** page is as shown below.
Follow these steps to configure the network for your device.
* Make sure that Port 3 and Port 4 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md). * If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned. * If DHCP isn't enabled, you can assign static IPs if needed.
- * You can configure your network interface as IPv4.
- * <!--ENGG TO VERIFY --> Network Interface Card (NIC) Teaming or link aggregation isn't supported with Azure Stack Edge.
- * <!--ENGG TO VERIFY --> In this release, the 100-GbE interfaces aren't configured for RDMA mode.
* Serial number for any port corresponds to the node serial number.
- * Though Port 6 shows up in the local UI as the Wi-Fi port, the Wi-Fi functionality isn't available in this release.
Once the device network is configured, the page updates as shown below.
Follow these steps to reconfigure Port 1:
1. Make sure that your node is cabled as per the selected topology. 1. Select **Apply**.
-1. You'll see a **Confirm network setting** dialog. This dialog reminds you to make sure that your node is cabled as per the network topology you selected. Once you choose the network cluster topology, you can't change this topology without a device reset. Select **Yes** to confirm the network topology.
+1. You'll see a **Confirm network setting** dialog. This dialog reminds you to make sure that your node is cabled as per the network topology you selected. Once you choose the network cluster topology and create a cluster, you can't update the topology without a device reset. Select **Yes** to confirm the network topology.
- ![Local web UI "Confirm network setting" dialog](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/confirm-network-setting-1.png)
+ ![Screenshot of local web UI "Confirm network setting" dialog.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/confirm-network-setting-1.png)
The network topology setting takes a few minutes to apply and you see a notification when the settings are successfully applied.
+ If, for any reason, you need to reset or update the network topology, use the **Update topology** option. If you update the topology, you may need to change the device cabling accordingly.
+
+ ![Screenshot of local web UI "Update network topology" selected.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/update-topology-1.png)
1. Once the network topology is applied, the **Network** page updates. For example, if you selected a network topology that uses external switches and separate virtual switches, you'll see that on the device node, a virtual switch **vSwitch1** is created at Port 1 and another virtual switch, **vSwitch2**, is created on Port 2. Port 3 and Port 4 don't have any virtual switches.
- ![Local web UI "Network" page updated](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-settings-updated-1.png)
+ ![Screenshot of local web UI "Network" page updated.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-settings-updated-1.png)
You'll now configure the network and the network topology of the second node.
You'll now prepare the second node for clustering. You'll first need to configur
1. On the **Prepare a node for clustering** page, in the **Network** tile, select **Needs setup**.
- ![Local web UI "Network" tile on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-2.png)
+ ![Screenshot of local web UI "Network" tile on second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-2.png)
1. Configure the network on the second node in a similar way that you configured the first node.
Follow the steps to reconfigure Port 1 on second node as you did on the first no
1. Make sure that the second node is cabled as per the topology you selected for the first node. In the **Advanced networking** page, choose and **Apply** the same topology that you selected for the first node.
- ![Local web UI "Advanced networking" page with "Use external switches and Port 1 and Port 2 not teamed" option selected on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-topology-3.png)
+ ![Screenshot of local web UI "Advanced networking" page with "Use external switches and Port 1 and Port 2 not teamed" option selected on second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-topology-3.png)
1. Select **Back to get started**.
You'll now get the authentication token that will be needed when adding this nod
1. On the **Prepare a node for clustering** page, in the **Get authentication token** tile, select **Prepare node**.
- ![Local web UI "Get authentication token" tile with "Prepare node" option selected on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-get-authentication-token-1.png)
+ ![Screenshot of local web UI "Get authentication token" tile with "Prepare node" option selected on second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-get-authentication-token-1.png)
1. Select **Get token**. 1. Copy the node serial number and the authentication token. You'll use this information when you add this node to the cluster on the first node.
- ![Local web UI "Get authentication token" on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/get-authentication-token-1.png)
+ ![Screenshot of local web UI "Get authentication token" on second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/get-authentication-token-1.png)
## Configure cluster
Follow these steps to configure the cluster witness.
1. In the local UI of the first node, go to the **Cluster (Preview)** page. Under **Cluster witness type**, select **Modify**.
- ![Local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
+ ![Screenshot of local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
1. In the **Modify cluster witness** blade, enter the following inputs. 1. Choose the **Witness type** as **Cloud.**
Follow these steps to configure the cluster witness.
1. If you chose Access key as the authentication mechanism, enter the Access key of the Storage account, Azure Storage container where the witness lives, and the service endpoint. 1. Select **Apply**.
- ![Local web UI "Cluster" page with cloud witness type selected in "Modify cluster witness" blade on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-cloud-1.png)
+ ![Screenshot of local web UI "Cluster" page with cloud witness type selected in "Modify cluster witness" blade on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-cloud-1.png)
#### Configure local witness 1. In the local UI of the first node, go to the **Cluster** page. Under **Cluster witness type**, select **Modify**.
- ![Local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
+ ![Screenshot of local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
1. In the **Modify cluster witness** blade, enter the following inputs. 1. Choose the **Witness type** as **Local.** 1. Enter the file share path in the *//server/fileshare* format. 1. Select **Apply**.
- ![Local web UI "Cluster" page with local witness type selected in "Modify cluster witness" blade on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-local-1.png)
+ ![Screenshot of local web UI "Cluster" page with local witness type selected in "Modify cluster witness" blade on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-local-1.png)
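The local witness expects the file share path in the *//server/fileshare* format described above. A quick format check in Python — the server and share names here are hypothetical:

```python
import re

# Matches the //server/fileshare format expected for a local witness.
SHARE_PATH = re.compile(r"^//(?P<server>[^/]+)/(?P<share>[^/]+)$")

def parse_witness_share(path: str):
    """Validate a local-witness file share path and split it into
    its server and share components."""
    m = SHARE_PATH.match(path)
    if not m:
        raise ValueError(f"expected //server/fileshare, got {path!r}")
    return m.group("server"), m.group("share")

# Hypothetical file server and share:
print(parse_witness_share("//fileserver01/witness"))  # ('fileserver01', 'witness')
```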
### Add prepared node to cluster
You'll now add the prepared node to the first node and form the cluster. Before
1. In the local UI of the first node, go to the **Cluster** page. Under **Existing nodes**, select **Add node**.
- ![Local web UI "Cluster" page with "Add node" option selected for "Existing" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-1.png)
+ ![Screenshot of local web UI "Cluster" page with "Add node" option selected for "Existing" on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-1.png)
1. In the **Add node** blade, input the following information for the incoming node:
You'll now add the prepared node to the first node and form the cluster. Before
1. The node is now ready to join the cluster. Select **Apply**.
- ![Local web UI "Add node" page with "Apply" option selected for second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-3.png)
+ ![Screenshot of local web UI "Add node" page with "Apply" option selected for second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-3.png)
1. A dialog pops up indicating that the cluster creation could take several minutes. Press **OK** to continue. Once the cluster is created, the page updates to show that both nodes are added.
For clients connecting via NFS protocol to the two-node device, follow these ste
1. If you set the IP settings to static, enter a virtual IP. This should be a free IP from within the NFS network that you specified. If you selected DHCP, a virtual IP is automatically picked from the NFS network that you selected. 1. Select **Apply**.
- ![Local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2m.png)
+ ![Screenshot of local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2m.png)
> [!NOTE] > Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
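Per the steps above, the virtual IP must be a free address from within the NFS network you specified. A small sketch of that check, using a hypothetical `10.10.0.0/24` NFS network and a made-up set of in-use addresses:

```python
import ipaddress

def check_virtual_ip(virtual_ip: str, nfs_network: str, in_use: set):
    """The NFS virtual IP must be a free address inside the NFS
    network configured on the device; return a problem description,
    or None if the address is usable."""
    network = ipaddress.IPv4Network(nfs_network)
    ip = ipaddress.IPv4Address(virtual_ip)
    if ip not in network:
        return "virtual IP is outside the NFS network"
    if virtual_ip in in_use:
        return "virtual IP is already in use"
    return None

# Hypothetical NFS network with two addresses already taken:
print(check_virtual_ip("10.10.0.50", "10.10.0.0/24", {"10.10.0.1", "10.10.0.10"}))  # None
```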
After the cluster is formed and configured, you'll now create new virtual switch
1. In the local UI, go to **Advanced networking** page. 1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. You can select an existing virtual switch or select **Add virtual switch** to create a new switch.
- ![Configure compute page in Advanced networking in local UI 1](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png)
+ ![Screenshot of configuring compute in Advanced networking in local UI 1](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png)
1. In the **Network settings** blade, if using a new switch, provide the following:
After the cluster is formed and configured, you'll now create new virtual switch
1. Select **Apply**.
- ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+ ![Screenshot of configuring compute in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
1. The configuration takes a couple of minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
- ![Configure compute page in Advanced networking in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
+ ![Screenshot of configuring compute in Advanced networking in local UI 3](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
To delete a virtual switch, under the **Virtual switch** section, select **Delet
### Configure virtual network
-You can add or delete virtual networks associated with your virtual switches. To add a virtual switch, follow these steps:
+You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
1. In the local UI on the **Advanced networking** page, under the **Virtual network** section, select **Add virtual network**. 1. In the **Add virtual network** blade, input the following information:
This is an optional configuration. However, if you use a web proxy, you can conf
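The web proxy configuration takes a server IP/FQDN and port, and per the requirements table HTTPS URLs aren't supported. A small validation sketch for a proxy endpoint; the `proxy.contoso.com` host is a hypothetical example:

```python
from urllib.parse import urlparse

def validate_proxy_url(url: str) -> str:
    """Web proxy settings take a server IP/FQDN and port; HTTPS URLs
    aren't supported, so reject an https:// proxy endpoint."""
    parsed = urlparse(url)
    if parsed.scheme == "https":
        raise ValueError("HTTPS proxy URLs aren't supported; use http://")
    if not parsed.hostname or not parsed.port:
        raise ValueError("proxy URL must include a host and a port")
    return f"{parsed.hostname}:{parsed.port}"

# Hypothetical proxy endpoint:
print(validate_proxy_url("http://proxy.contoso.com:3128"))  # proxy.contoso.com:3128
```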
In this tutorial, you learned about: + > [!div class="checklist"] > * Prerequisites > * Configure network > * Configure advanced networking > * Configure web proxy ++
+> [!div class="checklist"]
+> * Prerequisites
+> * Select device setup type
+> * Configure network and network topology on both nodes
+> * Get authentication token for prepared node
+> * Configure cluster witness and add prepared node
+> * Configure virtual IP settings for Azure Consistent Services and NFS
+> * Configure advanced networking
+> * Configure web proxy
+ To learn how to set up your Azure Stack Edge Pro 2 device, see:
databox-online Azure Stack Edge Pro 2 Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-connect.md
Before you configure and set up your device, make sure that:
![Back plane of a cabled device](./media/azure-stack-edge-pro-2-deploy-install/cabled-backplane-1.png)
- The back plane of the device may look slightly different depending on the exact model you've received. For more information, see [Cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
+ The back plane of the device may look slightly different depending on the exact model you've received. For more information, see [Cable your device](azure-stack-edge-pro-2-deploy-install.md#cable-the-device).
3. Open a browser window and access the local web UI of the device at `https://192.168.100.10`.
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 02/28/2022 Last updated : 03/04/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
This device is shipped in a single box. Complete the following steps to unpack y
- One single enclosure Azure Stack Edge Pro 2 device. - One power cord. - One packaged bezel.
+ - A pair of packaged Wi-Fi antennas in the accessory box.
- One packaged mounting accessory which could be: - A 4-post rack slide rail, or - A 2-post rack slide, or
This device is shipped in two boxes. Complete the following steps to unpack your
1. Place the box on a flat, level surface. 2. Inspect the box and the packaging foam for crushes, cuts, water damage, or any other obvious damage. If the box or packaging is severely damaged, don't open it. Contact Microsoft Support to help you assess whether the device is in good working order. 3. Unpack the box. After unpacking the box, make sure that you have the following in each box:
- - One single enclosure Azure Stack Edge Pro 2 device
- - One power cord
- - One packaged bezel
- - A pair of packaged Wi-Fi antennas in the accessory box
+ - One single enclosure Azure Stack Edge Pro 2 device.
+ - One power cord.
+ - One packaged bezel.
+ - A pair of packaged Wi-Fi antennas in the accessory box.
- One packaged mounting accessory which could be: - A 4-post rack slide rail, or - A 2-post rack slide, or - A wall mount (may be packaged separately).
- - A safety, environmental, and regulatory information booklet
+ - A safety, environmental, and regulatory information booklet.
::: zone-end
-If you didn't receive all of the items listed here, [Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md). The next step is to mount your device on a rack or wall.
+If you didn't receive all of the items listed here, [Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md). The next step is to mount your device on a rack or wall.
## Rack mount the device
If you have received 4-post rackmount, use the following procedure to rack moun
If you decide not to mount your device, you can also place it on a desk or a shelf. - ### Prerequisites - Before you begin, make sure to read the [Safety instructions](azure-stack-edge-pro-2-safety.md) for your device.
If you decide not to mount your device, you can also place it on a desk or a she
### Identify the rail kit contents Locate the components for installing the rail kit assembly:-- Inner rails-- Chassis of your device-- 10L M5 screws
+- Inner rails.
+- Chassis of your device.
+- 10L M5 screws.
### Install rails
Locate the components for installing the rail kit assembly:
:::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-insert-chassis-new.png" alt-text="Diagram showing how to insert the chassis."::: +
+If deploying a two-node device cluster, make sure to mount both the devices on the rack or the wall.
++ ### Install the bezel After the device is mounted on a rack, install the bezel on the device. The bezel serves as the protective faceplate for the device.
After the device is mounted on a rack, install the bezel on the device. Bezel se
![Lock the bezel](./media/azure-stack-edge-pro-2-deploy-install/lock-bezel.png) -
-If deploying a two-node device cluster, make sure to mount both the devices on the rack or the wall.
-- ## Cable the device
-Route the cables and then cable your device. The following procedures explain how to cable your Azure Stack Edge Pro 2 device for power and network.
-
+The following procedures explain how to cable your Azure Stack Edge Pro 2 device for power and network.
### Cabling checklist
Before you start cabling your device, you need the following things:
- Your Azure Stack Edge Pro 2 physical device, unpacked, and rack mounted. - One power cable (included in the device package).-- At least one 1-GbE RJ-45 network cable to connect to the Port 1. There are two 1-GbE network interfaces, one used for initial configuration and one for data, on the device. These network interfaces can also act as 10-GbE interfaces.-- One 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. At least one data network interface from among Port 2, Port 3, and Port 4 needs to be connected to the Internet (with connectivity to Azure). Here is an example QSFP28 DAC connector:
+- At least one 1-GbE RJ-45 network cable to connect to Port 1. Port 1 and Port 2 are the two 10/1-GbE network interfaces on your device.
+- One 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. Here is an example of the QSFP28 DAC connector:
![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png) For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products). - Access to one power distribution unit.-- At least one 100-GbE network switch to connect a 10/1-GbE or a 100-GbE network interface to the internet for data.
+- At least one 100-GbE network switch to connect a 10/1-GbE or a 100-GbE network interface to the internet for data. At least one data network interface from among Port 2, Port 3, and Port 4 needs to be connected to the Internet (with connectivity to Azure).
- A pair of Wi-Fi antennas (included in the accessory box). ::: zone-end
Before you start cabling your device, you need the following things:
Before you start cabling your device, you need the following things: - Your two Azure Stack Edge Pro 2 physical devices, unpacked, and rack mounted.-- -- One power cable for each device.-- Access to one power distribution unit for each device.-- At least two 1-GbE RJ-45 network cable per device to connect to Port 1 and Port2. There are two 10/1-GbE network interfaces, one used for initial configuration and one for data, on the device.
+- One power cable for each device node (included in the device package).
+- Access to one power distribution unit for each device node.
+- At least two 1-GbE RJ-45 network cables per device to connect to Port 1 and Port 2. These are the two 10/1-GbE network interfaces on your device.
- A 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured on each device. The total number needed would depend on the network topology you will deploy. Here is an example QSFP28 DAC connector: ![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png) For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products). - At least one 100-GbE network switch to connect a 1-GbE or a 100-GbE network interface to the internet for data for each device.-
+- A pair of Wi-Fi antennas (included in the accessory box).
::: zone-end
The front panel on Azure Stack Edge Pro 2 device:
- Four network interfaces:
- - Two 1-Gbps interfaces, Port 1 and Port 2, that can also serve as 10-Gbps interfaces.
+ - Two 10/1-Gbps interfaces, Port 1 and Port 2.
  - Two 100-Gbps interfaces, Port 3 and Port 4. - A baseboard management controller (BMC).
Follow these steps to cable your device for power:
Follow these steps to cable your device for power:
-1. Identify the various ports on the back plane of each your devices.
+1. Identify the various ports on the back plane of each device.
1. Locate the disk slots and the power button on the front of each device. 1. Connect the power cord to the PSU in each device enclosure. 1. Attach the power cords from the two devices to two different power distribution units (PDU).
Follow these steps to install Wi-Fi antennas on your device:
Follow these steps to cable your device for network:
-1. Connect the 10/1-GbE network interface Port 1 to the computer that's used to configure the physical device. PORT 1 serves as the management interface for the initial configuration of the device.
+1. Connect the 10/1-GbE network interface Port 1 to the computer that's used to configure the physical device. Port 1 is used for the initial configuration of the device.
> [!NOTE] > If connecting the computer directly to your device (without going through a switch), use a crossover cable or a USB Ethernet adapter.
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
Previously updated : 02/28/2022 Last updated : 03/04/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
This tutorial is the first in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro 2. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
-You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
+You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 20 minutes.
In this tutorial, you learn how to:
For Azure Stack Edge Pro 2 deployment, you need to first prepare your environmen
|**[5. Configure device settings for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-set-up-device-update-time.md)** |Assign a device name and DNS domain, configure update server and device time. | |**[6. Configure security settings for Azure Stack Edge Pro 2](azure-stack-edge-pro-r-security.md)** |Configure certificates for your device. Use device-generated certificates or bring your own certificates. | |**[7. Activate Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-activate.md)** |Use the activation key from service to activate the device. The device is ready to set up SMB or NFS shares or connect via REST. |
-|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
+|**[8. Configure compute](azure-stack-edge-pro-2-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
|**[9A. Transfer data with Edge shares](./azure-stack-edge-gpu-deploy-add-shares.md)** |Add shares and connect to shares via SMB or NFS. | |**[9B. Transfer data with Edge storage accounts](./azure-stack-edge-gpu-deploy-add-storage-accounts.md)** |Add storage accounts and connect to blob storage via REST APIs. |
Before you begin, make sure that:
Before you begin, make sure that: -- The network in your datacenter is configured per the networking requirements for your Azure Stack device. For more information, see [Azure Stack Edge Pro 2 System Requirements](azure-stack-edge-gpu-system-requirements.md).
+- The network in your datacenter is configured per the networking requirements for your Azure Stack device. For more information, see [Azure Stack Edge Pro 2 System Requirements](azure-stack-edge-pro-2-system-requirements.md).
- For normal operating conditions of your Azure Stack Edge, you have:
After the Azure Stack Edge resource is up and running, you'll need to get the ac
Once you've specified a key vault name, select **Generate key** to create an activation key.
- ![Screenshot of the Overview pane for a newly created Azure Stack Edge resource. The Generate Activation Key button is highlighted.](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-3.png)
+ ![Screenshot of the Overview pane for a newly created Azure Stack Edge resource. The Generate Activation Key button is highlighted.](media/azure-stack-edge-pro-2-deploy-prep/generate-activation-key-1.png)
Wait a few minutes while the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
databox-online Azure Stack Edge Pro 2 Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md
Previously updated : 03/01/2022 Last updated : 03/04/2022 # Customer intent: As an IT admin, I need to understand how to set up device name, update server and time server via the local web UI of Azure Stack Edge Pro 2 so I can use the device to transfer data to Azure.
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
Previously updated : 03/03/2022 Last updated : 03/04/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro 2 is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro 2 has the following capabilities:
|Offline upload | Disconnected mode supports offline upload scenarios.| |Supported file transfer protocols | Support for standard Server Message Block (SMB), Network File System (NFS), and Representational state transfer (REST) protocols for data ingestion. <br> For more information on supported versions, see [Azure Stack Edge Pro 2 system requirements](azure-stack-edge-placeholder.md).| |Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).|
-|Encryption | BitLocker support to locally encrypt data and secure data transfer to cloud over *https*.|
+|Double encryption | Use self-encrypting drives to provide a layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https*. For more information, see [Configure encryption-at-rest](azure-stack-edge-pro-2-deploy-configure-certificates.md#configure-encryption-at-rest).|
|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).| |Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource).| |Specialized network functions|Use the Marketplace experience from Azure Network Function Manager to rapidly deploy network functions. The functions deployed on Azure Stack Edge include mobile packet core, SD-WAN edge, and VPN services. <br>For more information, see [What is Azure Network Function Manager? (Preview)](../network-function-manager/overview.md).|
Azure Stack Edge Pro 2 has the following capabilities:
The Azure Stack Edge Pro 2 solution consists of Azure Stack Edge resource, Azure Stack Edge Pro 2 physical device, and a local web UI.
-* **Azure Stack Edge Pro 2 physical device** - A 2U compact size device supplied by Microsoft that can be configured to send data to Azure.
+* **Azure Stack Edge Pro 2 physical device** - A compact 2U device supplied by Microsoft that can be configured to send data to Azure.
![Perspective view of Azure Stack Edge Pro 2 physical device](./media/azure-stack-edge-pro-2-overview/azure-stack-edge-pro-2-perspective-view-1.png)
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 03/03/2022 Last updated : 03/06/2022
The following table lists the storage capacity of the device.
| Specification | Value | |-|--|
-| Number of data disks | 2 SATA SSDs |
-| Single data disk capacity | 960 GB |
| Boot disk | 1 NVMe SSD | | Boot disk capacity | 960 GB |
+| Number of data disks | 2 SATA SSDs |
+| Single data disk capacity | 960 GB |
| Total capacity | Model 64G2T: 2 TB | | Total usable capacity | Model 64G2T: 720 GB | | RAID configuration | [Storage Spaces Direct with mirroring](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#mirroring) |
databox-online Azure Stack Edge Pro 2 Two Post Rack Mounting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-two-post-rack-mounting.md
The device must be installed on a standard 19-inch rack. Use the following proce
* Before you begin, read the safety instructions in your Safety, Environmental, and Regulatory Information booklet. This booklet was shipped with the device. * Begin installing the rails in the allotted space that is closest to the bottom of the rack enclosure. * For the rack mounting configuration, you need to supply:
- * Phillips-head screwdriver
+ * A Phillips-head screwdriver
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium | | **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational | | **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
-| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occured. | InitialAccess | Medium |
+| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
| | | | | <sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
The alert message indicates that a user-defined rule triggered the alert.
1. Select **Create rule** (**+**).
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-alerts-rules.png" alt-text="Create custom alert rules":::
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-alerts-rules.png" alt-text="Screenshot of the Create custom alert rules pane.":::
1. Define an alert name. 1. Select a protocol to detect. 1. Define a message to display. Alert messages can contain alphanumeric characters you enter, as well as detected traffic variables. For example, include the detected source and destination addresses in the alert messages. Use { } to add variables to the message. 1. Select the engine that should detect the activity.
-1. **Select the source and destination devices that pairs for which activity should be detected.**
+1. Select the source and destination devices for the activity you want to detect.
#### Create rule conditions
Create conditions based on unique values associated with the category selected.
8. Enter a **Value** as a number. If the variable you selected is a MAC address or IP address, the value must be converted from a dotted-decimal address to decimal format. Use an IP address conversion tool, for example <https://www.ipaddressguide.com/ip>.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-rule-conditions.png" alt-text="Custom rule condition":::
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-rule-conditions.png" alt-text="Screenshot of the Custom rule condition options.":::
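The dotted-decimal-to-decimal conversion required in step 8 can also be done without an external tool. The following is a minimal sketch using Python's standard library (the address value is only an example):

```python
# Convert an IPv4 dotted-decimal address to its decimal (integer) form,
# as required when entering an IP value in a custom rule condition.
import ipaddress

def ip_to_decimal(addr: str) -> int:
    # IPv4Address validates the dotted-decimal form; int() yields the decimal value.
    return int(ipaddress.IPv4Address(addr))

print(ip_to_decimal("192.168.1.10"))  # 3232235786
```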
9. Select plus (**+**) to create a condition set. When the rule condition or condition set is met, the alert is sent. You will be notified if the condition logic is not valid.
-**Condition Based when activity took place**
+**Condition based on when activity took place**
Create conditions based on when the activity was detected. In the Detected section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected: - any time throughout the day
The following actions can be defined for the rule:
The rule is added to the **Customized Alerts Rules** page. ### Managing custom alert rules
Changes made to custom alert rules are tracked in the event timeline. For exampl
1. Navigate to the Event timeline page.
-### See also
+## Next steps
-[Manage the alert event](how-to-manage-the-alert-event.md)
+For more information, see [Manage the alert event](how-to-manage-the-alert-event.md).
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
To unassign and delete a sensor:
1. To delete the unassigned sensor from the site, select the sensor from the list of unassigned sensors and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/delete-icon.png" border="false":::.
-## See also
+## Next steps
-[Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+For more information, see [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Your sensor was onboarded to Microsoft Defender for IoT in a specific management
A locally connected or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor. ### About certificates
For more information about working with certificates, see [Manage certificates](
1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Sensor log in screen":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign in page.":::
1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
For more information about working with certificates, see [Manage certificates](
1. Select **Login/Next**. The **Sensor Network Settings** tab opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="log in to sensor":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="Screenshot of the sensor network settings options when signing into the sensor.":::
1. Use this tab if you want to change the sensor network configuration before activation. The configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined:
For more information about working with certificates, see [Manage certificates](
If you want to work with a proxy, enable the proxy toggle and add the proxy host, port, and username.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate-proxy.png" alt-text="Initial Log in to sensor using a proxy":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate-proxy.png" alt-text="Screenshot of the proxy options for signing in to a sensor.":::
1. Select **Next.** The Activation tab opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-file.png" alt-text="First time log in activation file":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-file.png" alt-text="Screenshot of a first time activation file upload option.":::
1. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
For more information about working with certificates, see [Manage certificates](
It is **not recommended** to use a locally generated certificate in a production environment.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-certificates-1.png" alt-text="Initial sensor login certificates":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-certificates-1.png" alt-text="Screenshot of the SSL/TLS Certificates page when signing in to a sensor.":::
1. Enable the **Import trusted CA certificate (recommended)** toggle. 1. Define a certificate name.
For information about uploading a new certificate, supported certificate paramet
For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="When your license expires youΓÇÖll need to update your license through the activation file.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="Screenshot of a license expiration popup message.":::
**To activate your license:**
For users with versions prior to 10.0, your license may expire, and the followin
1. Paste the string into space provided.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Paste the string into the provided field.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot of the license activation box and button.":::
1. Select **Activate**.
For users with versions prior to 10.0, your license may expire, and the followin
After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file or certificate definition. You only need your sign-in credentials. After you sign in, the Microsoft Defender for IoT sensor console opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot that shows the Defender for IoT initial dashboard." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot of the initial sensor console dashboard Overview page." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png":::
## Initial setup and learning (for administrators)
Before you sign in, verify that you have:
- The sensor IP address. - Sign in credentials that your administrator provided.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Sensor login after initial setup":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of the sensor sign in page after the initial setup.":::
## Console tools: Overview
You can access console tools from the side menu. Tools help you:
- Set up your sensor for maximum performance - Create and manage users
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/main-page-side-bar.png" alt-text="The main menu of the sensor console on the left side of the screen":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/main-page-side-bar.png" alt-text="Screenshot of the sensor console's main menu on the left.":::
### Discover | Tools| Description | | --|--|
-| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. <! For more information, see TBD >|
+| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |
| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). | | Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md#investigate-sensor-detections-in-an-inventory).| | Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that require your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|
You can access console tools from the side menu. Tools help you:
- your sensor isn't detecting traffic - your sensor SSL certificate is expired or will expire soon
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/system-messages.png" alt-text="System messages screen on main sensor console page, viewed by selecting the bell icon":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/system-messages.png" alt-text="Screenshot of the System messages area on the sensor console page, displayed after selecting the bell icon.":::
**To review system messages:** 1. Sign in to the sensor. 1. Select the **System Messages** icon (bell icon).
-## See also
+## Next steps
-[Threat intelligence research and packages ](how-to-work-with-threat-intelligence-packages.md)
+For more information, see:
-[Onboard a sensor](getting-started.md#onboard-a-sensor)
+- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-[Manage sensor activation files](how-to-manage-individual-sensors.md#manage-sensor-activation-files)
+- [Onboard a sensor](getting-started.md#onboard-a-sensor)
-[Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+- [Manage sensor activation files](how-to-manage-individual-sensors.md#manage-sensor-activation-files)
+
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
You may need to review programming activity:
- When a process or machine is not working correctly (to see who carried out the last update and when)
- :::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Programming Change Log":::
+ :::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Screenshot of a Programming Change Log.":::
Other options let you:
Access the Programming Analysis window from the:
Use the event timeline to display a timeline of events in which programming changes were detected. ### Unauthorized programming alerts Alerts are triggered when unauthorized programming devices carry out programming activities. > [!NOTE] > You can also view basic programming information in the Device Properties window and Device Inventory.
This section describes how to view programming files and compare versions. Searc
- File type
- :::image type="content" source="media/how-to-work-with-maps/timeline-view.png" alt-text="programming timeline window":::
+ :::image type="content" source="media/how-to-work-with-maps/timeline-view.png" alt-text="Screenshot of a programming timeline window.":::
|Programming timeline type | Description | |--|--|
This section describes how to choose a file to review.
2. Select a file from the File pane. The file appears in the Current pane.
- :::image type="content" source="media/how-to-work-with-maps/choose-file.png" alt-text="Select the file to work with.":::
+ :::image type="content" source="media/how-to-work-with-maps/choose-file.png" alt-text="Screenshot of selecting the file you want to work with.":::
### Compare files
This section describes how to compare programming files.
3. Select the compare indicator.
- :::image type="content" source="media/how-to-work-with-maps/compare.png" alt-text="Compare indicator":::
+ :::image type="content" source="media/how-to-work-with-maps/compare.png" alt-text="Screenshot of the compare indicator.":::
The window displays all dates the selected file was detected on the programmed device. The file may have been updated on the programmed device by multiple programming devices. The number of differences detected appears in the upper right-hand corner of the window. You may need to scroll down to view differences.
- :::image type="content" source="media/how-to-work-with-maps/scroll.png" alt-text="scroll down to your selection":::
+ :::image type="content" source="media/how-to-work-with-maps/scroll.png" alt-text="Screenshot of scrolling down to your selection.":::
The number is calculated by adjacent lines of changed text. For example, if eight consecutive lines of code were changed (deleted, updated, or added), this is calculated as one difference.
- :::image type="content" source="media/how-to-work-with-maps/program-timeline.png" alt-text="Your programming timeline view." lightbox="media/how-to-work-with-maps/program-timeline.png":::
+ :::image type="content" source="media/how-to-work-with-maps/program-timeline.png" alt-text="Screenshot of the programming timeline view." lightbox="media/how-to-work-with-maps/program-timeline.png":::
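The counting rule described above, where adjacent changed lines collapse into a single difference, can be sketched as follows. This is a hypothetical illustration of the grouping logic, not the sensor's actual implementation:

```python
# Group consecutive changed line numbers into "differences":
# a run of adjacent changed lines counts as a single difference.
def count_differences(changed_lines):
    diffs = 0
    previous = None
    for line in sorted(changed_lines):
        if previous is None or line != previous + 1:
            diffs += 1  # a gap in line numbers starts a new difference
        previous = line
    return diffs

# Eight consecutive changed lines count as one difference
print(count_differences(range(10, 18)))  # 1
```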
4. Select a date. The file detected on the selected date appears in the window.
In addition to reviewing details in the Programming Timeline, you can access pro
| Device type | Description | |--|--| | Device properties | The device properties window provides information on the last programming event detected on the device. |
-| The device inventory | The device inventory indicates if the device is a programming device. <br> :::image type="content" source="media/how-to-work-with-maps/inventory-v2.png" alt-text="The inventory of devices"::: |
+| The device inventory | The device inventory indicates if the device is a programming device. <br> :::image type="content" source="media/how-to-work-with-maps/inventory-v2.png" alt-text="Screenshot of the device inventory page."::: |
+
+## Next steps
+
+For more information, see [Import device information to a sensor](how-to-import-device-information.md).
defender-for-iot How To Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-configure-windows-endpoint-monitoring.md
Before you begin scanning, create a firewall rule that allows outgoing traffic f
1. Select **Save** to save the automatic scan settings. 1. When the scan is finished, select the option to view or export the scan results.
+## Next steps
+
+For more information, see [Work with device notifications](how-to-work-with-device-notifications.md).
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
This section describes how to set up a sensor to use Squid.
1. Select **Save**.
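On the proxy host itself, a Squid configuration that permits the sensor's outbound traffic might look like the following sketch. The port, subnet, and ACL name are assumptions; adjust them for your environment:

```
# squid.conf - minimal sketch with assumed example values
http_port 3128                      # port the sensor's proxy settings point to
acl sensor_subnet src 10.1.0.0/16   # subnet containing your sensors (example)
http_access allow sensor_subnet     # allow sensor traffic through the proxy
http_access deny all                # deny everything else
```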
-## See also
+## Next steps
-[Manage your subscriptions](how-to-manage-subscriptions.md).
+For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
Configure a firewall rule that opens outgoing traffic from the sensor to the sca
1. When the scan is finished, select **View Scan Results**. A .csv file with the scan results is downloaded to your computer.
-## See also
+## Next steps
-[Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-[Investigate sensor detections in the device map](how-to-work-with-the-sensor-device-map.md)
+For more information, see:
+
+- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [Investigate sensor detections in the device map](how-to-work-with-the-sensor-device-map.md)
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This section describes how to define users. Cyberx, support, and administrator u
1. From the left pane for the sensor or the on-premises management console, select **Users**.
- :::image type="content" source="media/how-to-create-and-manage-users/users-pane.png" alt-text="Users pane for creating users":::
+ :::image type="content" source="media/how-to-create-and-manage-users/users-pane.png" alt-text="Screenshot of the Users pane for creating users.":::
1. In the **Users** window, select **Create User**. 1. In the **Create User** pane, define the following parameters:
You can track user activity in the event timeline on each sensor. The timeline d
1. Verify that **User Operations** filter is set to **Show**.
- :::image type="content" source="media/how-to-create-and-manage-users/track-user-activity.png" alt-text="Event timeline showing user that signed in to Defender for IoT":::
+ :::image type="content" source="media/how-to-create-and-manage-users/track-user-activity.png" alt-text="Screenshot of the Event timeline showing a user that signed in to Defender for IoT.":::
1. Use the filters or Ctrl F option to find the information of interest to you.
You can associate Azure Active Directory groups defined here with specific permi
1. From the left pane, select **System Settings**. 1. Select **Integrations** and then select **Active Directory**. 1. Enable the **Active Directory Integration Enabled** toggle.
CyberX role can change the password for all user roles. The Support role can cha
1. On this row, select three dots (...) and then select **Edit**.
- :::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Change password dialog for local sensor users":::
+ :::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Screenshot of the Change password dialog for local sensor users.":::
1. Enter and confirm the new password in **Change Password** section.
You can recover the password for the on-premises management console, or the sens
1. On the sign in screen of either the on-premises management console, or the sensor, select **Password recovery**. The **Password recovery** screen opens.
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Select Password recovery from the sign in screen of either the on-premises management console, or the sensor":::
+ :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign in screen of either the on-premises management console, or the sensor.":::
1. Select either **CyberX**, or **Support** from the drop-down menu, and copy the unique identifier code.
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery-screen.png" alt-text="Select either the CyberX user or the Support user from the drop-down menu.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/password-recovery-screen.png" alt-text="Screenshot of selecting either the Defender for IoT user or the support user.":::
1. Navigate to the Azure portal, and select **Sites and Sensors**.
You can recover the password for the on-premises management console, or the sens
1. Select the **More Actions** drop down menu, and select **Recover on-premises management console password**.
- :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Select your sensor and select the recover on-premises management console password option.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option.":::
1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Enter the unique identifier and then select recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
+ :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of entering enter the unique identifier and then selecting recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
> [!NOTE] > Don't alter the password recovery file. It's a signed file, and will not work if tampered with.
You can recover the password for the on-premises management console, or the sens
## Next steps
-[Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
-[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
-[Track sensor activity](how-to-track-sensor-activity.md)
+- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
+
+- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
+
+- [Track sensor activity](how-to-track-sensor-activity.md)
defender-for-iot How To Create Attack Vector Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-attack-vector-reports.md
This section describes how to create Attack Vector reports.
:::image type="content" source="media/how-to-generate-reports/sample-attack-vectors.png" alt-text="Screen shot of Attack vectors report.":::
-## See also
+## Next steps
-[Attack vector reporting](how-to-create-attack-vector-reports.md)
+For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md).
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
To generate a report:
3. From the right drop-down list, select the report that you want to generate. 4. To create a PDF of the report results, select :::image type="icon" source="media/how-to-generate-reports/pdf-report-icon.png" border="false":::.
+## Next steps
+
+For more information, see:
+
+- [Risk assessment reporting](how-to-create-risk-assessment-reports.md)
+
+- [Attack vector reporting](how-to-create-attack-vector-reports.md)
+
+- [Create trends and statistics dashboards](how-to-create-trends-and-statistics-reports.md)
defender-for-iot How To Create Risk Assessment Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-risk-assessment-reports.md
Create a risk assessment report based on detections made by sensors that are man
1. Select **Import logo**. 1. Choose a logo to add to the header of your Risk assessment reports.
-## See also
+## Next steps
-[Attack vector reporting](how-to-create-attack-vector-reports.md)
+For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md).
defender-for-iot How To Create Trends And Statistics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-trends-and-statistics-reports.md
Number of devices per VLAN | Displays a pie chart that shows the number of disco
Top bandwidth by VLAN | Displays the bandwidth consumption by VLAN. By default, the widget shows five VLANs with the highest bandwidth usage. You can filter the data by the period presented in the widget. Select the down arrow to show more results.
-## See also
+## Next steps
-[Risk assessment reporting](how-to-create-risk-assessment-reports.md)
-[Sensor data mining queries](how-to-create-data-mining-queries.md)
-[Attack vector reporting](how-to-create-attack-vector-reports.md)
+For more information, see:
+
+- [Risk assessment reporting](how-to-create-risk-assessment-reports.md)
+
+- [Sensor data mining queries](how-to-create-data-mining-queries.md)
+
+- [Attack vector reporting](how-to-create-attack-vector-reports.md)
defender-for-iot How To Define Global User Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-define-global-user-access-control.md
When you're creating rules, be aware of the following information:
- If no business unit or region is selected, users will have access to all defined business units and regions.
-## See also
+## Next steps
-[About Defender for IoT console users](how-to-create-and-manage-users.md)
+For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md).
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
Defender for IoT uses SSL/TLS certificates to secure communication between the f
Defender for IoT Admin users can upload a certificate to sensor consoles and their on-premises management console from the SSL/TLS Certificates dialog box. ## About certificate generation methods
You can also convert existing certificate files if you do not want to create new
You can compare your certificate to the sample certificate below. Verify that the same fields exits and that the order of the fields is the same. ## Test certificates you create
If the conversion fails:
- Use the conversion commands described in [Convert existing files to supported files](#convert-existing-files-to-supported-files). - Make sure the file parameters are accurate. See [File type requirements](#file-type-requirements) and [Certificate File Parameter Requirements](#certificate-file-parameter-requirements) for details. -- Consult your certificate lead.
+- Consult your certificate lead.
+
+## Next steps
+
+For more information, see [Identify required appliances](how-to-identify-required-appliances.md).
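Before uploading a converted certificate, a quick structural check can catch obviously broken files early. The following is a minimal sketch, not the Defender for IoT validation itself; it only verifies the PEM envelope, not the certificate field requirements described above:

```python
def looks_like_pem_certificate(text: str) -> bool:
    """Rough pre-flight check: does the text contain a PEM certificate block?

    This only checks the BEGIN/END envelope, not the certificate fields
    that the sensor's own validation requires.
    """
    begin = "-----BEGIN CERTIFICATE-----"
    end = "-----END CERTIFICATE-----"
    return begin in text and end in text and text.index(begin) < text.index(end)
```

If this check fails, the file is likely not PEM at all and needs conversion first; if it passes, the sensor's validation may still reject it on field contents.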
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
The administrator might have defined alert exclusion rules. These rules help adm
This means that the forwarding rules you define might be ignored based on exclusion rules that your administrator has created. Exclusion rules are defined in the on-premises management console.
-## See also
+## Next steps
-[Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+For more information, see [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md).
defender-for-iot How To Import Device Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-import-device-information.md
Import data as follows:
1. In **System settings**, under **Import settings**, select **Device Information** to import. Select **Add** and upload the CSV file that you prepared.
-### Import authorization status:**
+## Import authorization status
1. Download the [Authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and save it as a CSV file. 1. In the authorized_devices sheet, specify the device IP address.
Import data as follows:
When the information is imported, you receive alerts about unauthorized devices for all the devices that don't appear on this list.
-## See also
+## Next steps
-[Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+For more information, see:
-[Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+
+- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
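The authorization file described above is a plain CSV listing one device IP address per row. As a sketch, it could also be generated programmatically; the `IP Address` header used here is an assumption for illustration, so match it to the header in the sample file you downloaded:

```python
import csv
import io

def build_authorized_devices_csv(ip_addresses):
    """Build CSV content listing authorized device IPs.

    The "IP Address" header is an assumption; use the exact header
    from the downloadable authorized_devices example file.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["IP Address"])  # assumed header name
    for ip in ip_addresses:
        writer.writerow([ip])
    return buf.getvalue()
```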
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
You can enhance system security by preventing direct user access to the sensor.
1. Enter `--port 10000`.
-### Next steps
+## Next steps
-[Set up your network](how-to-set-up-your-network.md)
+For more information, see [Set up your network](how-to-set-up-your-network.md).
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
You can export device inventory information to a .csv file.
- Select **Export file** from the Device Inventory page. The report is generated and downloaded.
-## See also
+## Next steps
-[Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+For more information, see:
-[Manage your IoT devices with the device inventory](../device-builders/how-to-manage-device-inventory-on-the-cloud.md#manage-your-iot-devices-with-the-device-inventory)
+- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+
+- [Manage your IoT devices with the device inventory](../device-builders/how-to-manage-device-inventory-on-the-cloud.md#manage-your-iot-devices-with-the-device-inventory)
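Once exported, the device inventory .csv file can be post-processed with any CSV tooling. A minimal sketch, assuming a column named `Type` (the exact column names depend on your export, so inspect the header row first):

```python
import csv
from collections import Counter

def count_devices_by_column(csv_text, column="Type"):
    """Count rows in an exported inventory CSV grouped by one column.

    The column name "Type" is an assumption; adjust it to match the
    header row of your exported file.
    """
    reader = csv.DictReader(csv_text.splitlines())
    return Counter(row[column] for row in reader)
```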
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Defender for IoT alerts lets you enhance the security and operation of your netw
- Protocol and operational anomalies - Suspected malware traffic Alerts triggered by Defender for IoT are displayed on the Alerts page in the Azure portal. Use the Alerts page to:
Various Alerts page options help you easily find and view alerts and alert infor
1. Use the **Search**, **Time Range**, and **Filter** options at the top of the Alerts page.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/filters-on-alerts-page.png" alt-text="Filters bar on alerts Cloud page":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/filters-on-alerts-page.png" alt-text="Screenshot of the filters bar on the Alerts page in the Azure portal.":::
**To group alerts:**
Various Alerts page options help you easily find and view alerts and alert infor
Use the category filter to quickly find information important to you. Using category filters also gives you information regarding the number of alerts for each category. For example, 50 operational alerts, 13 firmware changes or 23 command failures. The following categories are available: - Abnormal Communication Behavior
The number of alerts currently detected appears on the top-left section of the A
1. Select **Group by** and select a group. The number of alerts is displayed for each group.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/group-by-severity.png" alt-text="Alerts page group by filter with severity filter chosen":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/group-by-severity.png" alt-text="Screenshot of the Alerts page, filtered by severity.":::
1. Alternatively, use the **Add filter** option to choose a subject of interest and select **Column.** The column dropdown shows the number of alerts associated with the column name.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-count-breakdown.png" alt-text="Alert filters showing protocols with count for each protocol":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-count-breakdown.png" alt-text="Screenshot of Alert filters showing protocols with count for each protocol.":::
## View alert descriptions and other details
View more information about the alert, such as:
1. Select an alert. 1. The details pane opens with the alert description, source, and destination information and other details.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-detected.png" alt-text="Alert selected from Alerts cloud page":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-detected.png" alt-text="Screenshot of an alert selected from Alerts page in the Azure portal.":::
1. To view more details and review remediation steps, select **View full details**. The Alert Details pane provides more information about source device and related entities. Related links in the MITRE Partnership website are also available.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-full-details.png" alt-text="Selected alert with full details":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-full-details.png" alt-text="Screenshot of a selected alert with full details.":::
If you're integrating with Microsoft Sentinel, the Alert details and entity information are sent to Microsoft Sentinel.
Defender for IoT provides remediation steps you can carry out for the alert. Rem
1. Select an alert from the Alerts page. 1. Select **Take action** in the dialog box that opens.
- :::image type="content" source="media/how-to-view-manage-cloud-alerts/take-action-cloud-alert.png" alt-text="Remediation action for sample cloud alert":::
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/take-action-cloud-alert.png" alt-text="Screenshot of a remediation action for a sample alert in the Azure portal.":::
## Manage alert status and severity
Users working with alerts in Azure and on-premises should understand how alert m
| **Managing alerts on-premises** | Alerts **Learned**, **Acknowledged**, or **Muted** in the on-premises management console or in sensors aren't simultaneously updated on the Defender for IoT Alerts page in the Azure portal. This means that the alert will stay open in the cloud. However, another alert won't be triggered from the on-premises components for this activity. | **Managing alerts in the portal Alerts page** | Changing the status of an alert to **New**, **Active**, or **Closed** on the Alerts page or changing the alert severity on the Alerts page doesn't impact the alert status or severity in the on-premises management console or sensors.
-## See also
+## Next steps
-[Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md#gain-insight-into-global-regional-and-local-threats)
+For more information, see [Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md#gain-insight-into-global-regional-and-local-threats).
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
To access system properties:
3. Select **System Properties** from the **General** section.
-## See also
+## Next steps
-[Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
+For more information, see:
-[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
+
+- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
To restore by using the CLI:
1. Set `Backup.shared_location` to `<backup_folder_name_on_cyberx_server>`.
-## See also
+## Next steps
-[Manage individual sensors](how-to-manage-individual-sensors.md)
+For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
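The restore step above sets a key in a properties-style configuration value (`Backup.shared_location`). As a hedged sketch of that kind of key=value edit (the actual file path and format on the appliance may differ):

```python
def set_property(config_text, key, value):
    """Set key=value in properties-style text, replacing an existing
    line or appending a new one.

    A generic sketch; the real backup configuration on the appliance
    may use a different file format or be set through the CLI instead.
    """
    lines = config_text.splitlines()
    prefix = key + "="
    for i, line in enumerate(lines):
        if line.startswith(prefix):
            lines[i] = prefix + value
            break
    else:
        lines.append(prefix + value)
    return "\n".join(lines)
```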
defender-for-iot How To Manage The Alert Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-alert-event.md
Alerts are managed from the Alerts page on the sensor.
1. Select **Alerts** from the sensor console, side pane. 1. Review the alerts details and decide how to manage the alert.
- :::image type="content" source="media/how-to-manage-the-alert-event/main-alerts-screen.png" alt-text="Main sensor alerts screen":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/main-alerts-screen.png" alt-text="Screenshot of the main sensor alerts screen.":::
See [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor) for information on: - the kind of alert information available
Remediation steps help SOC teams better understand Operational Technology (OT) i
1. In the side pane, select **Take action.** 1. Review remediation steps.
- :::image type="content" source="media/how-to-manage-the-alert-event/remediation-steps.png" alt-text="Sample remediation steps for alert action":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/remediation-steps.png" alt-text="Screenshot of a sample set of remediation steps for alert action.":::
Your administrator may have added instructions or comments to help you complete remediation or alert handling. If created, comments appear in the Alert Details section. After taking remediation steps, you may want to change the alert status to Close the alert.
When you want to approve these changes, you can instruct Defender for IoT to *le
1. Enable the **Alert Learn** toggle.
- :::image type="content" source="media/how-to-manage-the-alert-event/learn-remediation.png" alt-text="Learn option for Policy alert":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/learn-remediation.png" alt-text="Screenshot of the Learn option for Policy alerts.":::
After learning, the traffic, configurations, or activity are considered valid. An alert will no longer be triggered for this activity.
Learned traffic can be unlearned. When the sensor unlearns traffic, alerts are r
**To unlearn an alert**
-1. Navigate alert you learned.
+1. Navigate to the alert you learned.
1. Disable the **Alert learn** toggle.
Under certain circumstances, you might want to instruct your sensor to ignore a
In these situations, learning isn't available. You can mute the alert event when learning can't be carried out and you want to suppress the alert and remove the device when calculating risks and attack vectors. A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that is being muted.
If the traffic is detected again, the alert will be retriggered.
1. Select an alert. The Alert Details section opens. 1. Select the dropdown arrow in the Status field and select **Closed**.
- :::image type="content" source="media/how-to-manage-the-alert-event/close-alert.png" alt-text="Option to close an alert from the Alerts page":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/close-alert.png" alt-text="Screenshot of the option to close an alert from the Alerts page.":::
**To close multiple alerts:**
If the traffic is detected again, the alert will be retriggered.
1. Select **Change Status** from the action items on the top of the page. 1. Select **Closed** and **Apply.**
- :::image type="content" source="media/how-to-manage-the-alert-event/multiple-close.png" alt-text="Selecting multiple alerts to close from the Alerts page":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/multiple-close.png" alt-text="Screenshot of selecting multiple alerts to close from the Alerts page.":::
Change the alert status to **New** if further investigation is required. To view closed alerts on the Alerts page, verify that the **Status** filter is defined to show **Closed** alerts. ## Export alert information
Viewing and managing alerts in the portal provides significant advantages. For e
- Integrate alert details with Microsoft Sentinel - Change the severity of an alert
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample alert shown in the Azure portal.":::
Users working with alerts on the Defender for IoT portal on Azure should understand how alert management between the portal and the sensor operates.
Users working with alerts on the Defender for IoT portal on Azure should underst
| **Managing alerts on your sensor** | If you change the status of an alert, or learn or mute an alert on a sensor, the changes are not updated in the Defender for IoT Alerts page on the portal. This means that this alert will stay open on the portal. However, another alert won't be triggered from the sensor for this activity. | **Managing alerts in the portal Alerts page** | Changing the status of an alert on the Azure portal Alerts page, or changing the alert severity on the portal, doesn't impact the alert status or severity in on-premises sensors.
-## See also
+## Next steps
+
+For more information, see:
- [Detection engines and alerts](concept-key-concepts.md#detection-engines-and-alerts)
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
To define:
`mail.sender=` 1. Enter the SMTP server name and sender, and then select Enter.
-## See also
+## Next steps
-[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+For more information, see:
-[Manage individual sensors](how-to-manage-individual-sensors.md)
+- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+
+- [Manage individual sensors](how-to-manage-individual-sensors.md)
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Perform the high availability update in the following order. Make sure each step
1. Update the sensors.
-## See also
+## Next steps
-[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
+For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Note that:
5. Select **Save**.
-## See also
+## Next steps
-[Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
defender-for-iot How To Track Sensor Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-track-sensor-activity.md
In addition to viewing the events that the sensor has detected, you can manually
-## See also
+## Next steps
-[View alerts](how-to-view-alerts.md)
+For more information, see [View alerts](how-to-view-alerts.md).
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
This section describes how to view and filter alerts details on your sensor.
- Select **Alerts** from the side menu. The page displays the alerts detected by your sensor.
- :::image type="content" source="media/how-to-view-alerts/view-alerts-main-page.png" alt-text="Alerts page on sensor" lightbox="media/how-to-view-alerts/view-alerts-main-page.png":::
+ :::image type="content" source="media/how-to-view-alerts/view-alerts-main-page.png" alt-text="Screenshot of the sensor Alerts page." lightbox="media/how-to-view-alerts/view-alerts-main-page.png":::
The following information is available from the Alerts page:
Use filter, grouping and text search tools to view alerts of interest to you.
1. Select **Add filter**. 1. Define a filter and select **Apply**.
- :::image type="content" source="media/how-to-view-alerts/alerts-filter.png" alt-text="Alert filter options":::
+ :::image type="content" source="media/how-to-view-alerts/alerts-filter.png" alt-text="Screenshot of Alert filter options.":::
**About the Groups type** The **Groups** option refers to the Device groups you created in the Device map and inventory. **To view alerts based on a pre-defined category:**
Gain contextual insight about alert activity by:
- Viewing source and destination devices in map view with other connected devices. Select **Map View** to see the map.
- :::image type="content" source="media/how-to-view-alerts/view-alerts-map.png" alt-text="Map view of source and detected device from alert" lightbox="media/how-to-view-alerts/view-alerts-map.png" :::
+ :::image type="content" source="media/how-to-view-alerts/view-alerts-map.png" alt-text="Screenshot of a map view of the source and detected devices from an alert." lightbox="media/how-to-view-alerts/view-alerts-map.png" :::
- Viewing an Event timeline with recent activity of the device. Select **Event Timeline** and use the filter options to customize the information displayed.
- :::image type="content" source="media/how-to-view-alerts/alert-event-timeline.png" alt-text="Alert timeline for selected alert from Alerts page" lightbox="media/how-to-view-alerts/alert-event-timeline.png" :::
+ :::image type="content" source="media/how-to-view-alerts/alert-event-timeline.png" alt-text="Screenshot of an alert timeline for the selected alert from the Alerts page." lightbox="media/how-to-view-alerts/alert-event-timeline.png" :::
### Remediate the alert incident
Remediation steps will help SOC teams better understand OT issues and resolution
1. Select an alert from the Alerts page. 1. In the side pane, select **Take action.**
- :::image type="content" source="media/how-to-view-alerts/alert-remediation-rename.png" alt-text="Take action section of alert":::
+ :::image type="content" source="media/how-to-view-alerts/alert-remediation-rename.png" alt-text="Screenshot of the alert's Take action section.":::
Your administrator may have added guidance to help you complete the remediation or alert handling. If created, comments will appear in the Alert Details section. After taking remediation steps, you may want to change the alert status to close the alert.
Viewing alerts in the portal provides significant advantages. For example, it le
- View alerts based on the site - Change the severity of an alert
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample alert shown in the Azure portal.":::
### Manage alert events
You can manage an alert incident by:
## Next steps
-[Manage the alert event](how-to-manage-the-alert-event.md)
+For more information, see:
-[Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+- [Manage the alert event](how-to-manage-the-alert-event.md)
+
+- [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
Title: Learn about devices on specific zones description: Use the on-premises management console to get a comprehensive view of information per specific zone Last updated 11/09/2021
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
You can do the following from the **Alerts** page in the management console:
If your deployment was set up to work with cloud-connected sensors, Alert detections shown on all enterprise sensors will also be seen in the Defender for IoT Alerts page, on the Azure portal. Viewing and managing alerts in the portal provides significant advantages. For example, it lets you:
Viewing and managing alerts in the portal provides significant advantages. For e
- Integrate alerts details with Microsoft Sentinel - Change the severity of an alert
- :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Screenshot of a sample alert as shown in the Azure portal.":::
## View alerts in the on-premises management console
The **Alerts** window displays the alerts generated by sensors connected to your
Select **Clear Filters** to view all alerts. ### Work with alert counters Alert counters provide a breakdown of alerts by severity and the acknowledged state. The following severity levels appear in the alert counter:
You can adjust the counter to provide numbers based on acknowledged and unacknow
When the **Show Acknowledged Alerts** option is selected, all the acknowledged alerts appear in the **Alerts** window. ### View alert information
The alert presents the following information:
**Sensor alert ID** Working with UUIDs ensures that each alert displayed in the on-premises management console is searchable and identifiable by a unique number. This is required because alerts generated from multiple sensors might produce the same alert ID.
Several options are available for managing alert events from the on-premises man
- Learn or acknowledge alert events. Select **Learn & Acknowledge** to learn all alert events that can be authorized and to acknowledge all alert events that are currently not acknowledged.
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/learn-and-acknowledge.png" alt-text="Select Learn & Acknowledge to learn all.":::
+ :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/learn-and-acknowledge.png" alt-text="Screenshot of the Learn & Acknowledge button.":::
- Mute and unmute alert events.
Export alert information to a .csv file. You can export information of all alert
1. In the Create Forwarding Rule window, enter a name for the rule
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/management-console-create-forwarding-rule.png" alt-text="Enter a meaningful name in the field of the Create Forwarding Rule window.":::
+ :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/management-console-create-forwarding-rule.png" alt-text="Screenshot of the Create Forwarding Rule window.":::
Define criteria by which to trigger a forwarding rule. Working with forwarding rule criteria helps pinpoint and manage the volume of information sent from the sensor to external systems.
In addition to working with exclusion rules, you can suppress alerts by muting t
1. From the left pane of the on-premises management console, select **Alert Exclusion**. Define a new exclusion rule by selecting the **Add** icon :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/add-icon.png" border="false"::: in the upper-right corner of the window that opens. The **Create Exclusion Rule** dialog box opens.
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/create-alert-exclusion-view.png" alt-text="Create an alert exclusion by filling in the information here.":::
+ :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/create-alert-exclusion-view.png" alt-text="Screenshot of the Create Alert Exclusion pane.":::
1. Enter a rule name in the **Name** field. The name can't contain quotes (`"`).
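The rule name restriction noted above (no quote characters) can be checked client-side before submitting. A one-line sketch covering only the documented restriction; other server-side restrictions may apply:

```python
def is_valid_exclusion_rule_name(name: str) -> bool:
    """Return True if the name is non-empty and contains no double
    quotes, matching the restriction documented above. This sketch
    checks only that documented rule, nothing else.
    """
    return bool(name) and '"' not in name
```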
defender-for-iot How To Work With Device Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-device-notifications.md
You might need to handle several notifications simultaneously. For example:
Respond as follows:
-1. In **Discovery Notifications*, choose **Select All**, and then clear the notifications you don't need. When you choose **Select All**, Defender for IoT displays information about which notifications can be handled or dismissed simultaneously, and which need your input.
+1. In **Discovery Notifications**, choose **Select All**, and then clear the notifications you don't need. When you choose **Select All**, Defender for IoT displays information about which notifications can be handled or dismissed simultaneously, and which need your input.
1. You can accept all recommendations, dismiss all recommendations, or handle notifications one at a time. 1. For notifications that indicate manual changes are required, such as **New IPs** and **No Subnets**, make the manual modifications as needed.
-1.
-## See also
-[View alerts](how-to-view-alerts.md)
+## Next steps
+
+For more information, see [View alerts](how-to-view-alerts.md).
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
The following basic search tools are available:
When you search by IP or MAC address, the map displays the device that you searched for with devices connected to it. ### Group highlight and filters tools
The following predefined groups are available:
| **Subnets** | Devices that belong to a specific subnet. | | **VLAN** | Devices associated with a specific VLAN ID. | | **Cross subnet connections** | Devices that communicate from one subnet to another subnet. |
-| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Add Attack Vector Simulations":::|
+| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Screenshot of the Add Attack Vector Simulations option.":::|
| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, seven days. | | **Not In Active Directory** | All non-PLC devices that are not communicating with the Active Directory. |
For information about creating custom groups, see [Define custom groups](#define
| :::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Fits a group of selected devices to the center of the screen. | | :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT presentation. Collapse view to enable a focused view on OT devices, and group IT devices. | |:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices in the map to a new location and use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> |
-| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" alt-text="Zoom In" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" alt-text="Zoom Out" border="false"::: | Zoom in or out of the map. |
+| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in or out of the map. |
### Map zoom views
This view provides an at-a-glance view of devices represented as follows:
- Black dots indicate devices with no alerts
- :::image type="content" source="media/how-to-work-with-maps/colored-dots-v2.png" alt-text="Bird eye view" lightbox="media/how-to-work-with-maps/colored-dots-v2.png":::
+ :::image type="content" source="media/how-to-work-with-maps/colored-dots-v2.png" alt-text="Screenshot of a bird's-eye view of the map." lightbox="media/how-to-work-with-maps/colored-dots-v2.png":::
### Device type and connection view
This view presents devices represented as icons on the map.
Overall connections are displayed. **To view specific connections:** 1. Select a device in the map. 1. Specific connections between devices are displayed in blue. In addition, you will see connections that cross various Purdue levels.
- :::image type="content" source="media/how-to-work-with-maps/connections-purdue-level.png" alt-text="Detailed view" lightbox="media/how-to-work-with-maps/connections-purdue-level.png" :::
+ :::image type="content" source="media/how-to-work-with-maps/connections-purdue-level.png" alt-text="Screenshot of the detailed map view." lightbox="media/how-to-work-with-maps/connections-purdue-level.png" :::
### View IT subnets
The following labels and indicators may appear on devices on the map:
| Device label | Description | |--|--|
-| :::image type="content" source="media/how-to-work-with-maps/host-v2.png" alt-text="IP host name"::: | IP address host name and IP address, or subnet addresses |
-| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Number of alerts"::: | Number of alerts associated with the device |
+| :::image type="content" source="media/how-to-work-with-maps/host-v2.png" alt-text="Screenshot of the I P host name."::: | IP address host name and IP address, or subnet addresses |
+| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Screenshot of the number of alerts."::: | Number of alerts associated with the device |
| :::image type="icon" source="media/how-to-work-with-maps/type-v2.png" border="false"::: | Device type icon, for example storage, PLC or historian. |
+| :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="Screenshot of devices grouped together."::: | Number of devices grouped in a subnet in an IT network. In this example, 8. |
-| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="device Learning period"::: | A device that was detected after the Learning period and was not authorized as a network device. |
+| :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="Screenshot of devices grouped together."::: | Number of devices grouped in a subnet in an IT network. In this example 8. |
+| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="Screenshot of the device learning period."::: | A device that was detected after the Learning period and was not authorized as a network device. |
| Solid line | Logical connection between devices |
-| :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="New device"::: | New device discovered after Learning is complete. |
+| :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="Screenshot of a new device discovered after learning is complete."::: | New device discovered after Learning is complete. |
### Device details and contextual information
You can access detailed and contextual information about a device from the map,
1. Select **View properties**. 1. Navigate to the information you need.
- :::image type="content" source="media/how-to-work-with-maps/device-details-from-map.png" alt-text="Device details shown for device selected in map":::
+ :::image type="content" source="media/how-to-work-with-maps/device-details-from-map.png" alt-text="Screenshot of the device details shown for the device selected in map.":::
#### Device details
If a PLC contains multiple modules separated into racks and slots, the character
You can use the Backplane option to review multiple controllers/cards and their nested devices as one entity with various definitions. Each slot in the Backplane view represents the underlying devices: the devices that were discovered behind it. A Backplane can contain up to 30 controller cards and up to 30 rack units. The total number of devices included in the multiple levels can be up to 200 devices.
Each slot appears with the number of underlying devices and the icon that shows
| Icon | Module Type | |--|--|
-| :::image type="content" source="media/how-to-work-with-maps/power.png" alt-text="Power Supply"::: | Power Supply |
-| :::image type="content" source="media/how-to-work-with-maps/analog.png" alt-text="Analog I/O"::: | Analog I/O |
-| :::image type="content" source="media/how-to-work-with-maps/comms.png" alt-text="Communication Adapter"::: | Communication Adapter |
-| :::image type="content" source="media/how-to-work-with-maps/digital.png" alt-text="Digital I/O"::: | Digital I/O |
-| :::image type="content" source="media/how-to-work-with-maps/computer-processor.png" alt-text="CPU"::: | CPU |
-| :::image type="content" source="media/how-to-work-with-maps/HMI-icon.png" alt-text="HMI"::: | HMI |
-| :::image type="content" source="media/how-to-work-with-maps/average.png" alt-text="Generic"::: | Generic |
+| :::image type="content" source="media/how-to-work-with-maps/power.png" alt-text="Screenshot of the Power Supply icon."::: | Power Supply |
+| :::image type="content" source="media/how-to-work-with-maps/analog.png" alt-text="Screenshot of the Analog I/O icon."::: | Analog I/O |
+| :::image type="content" source="media/how-to-work-with-maps/comms.png" alt-text="Screenshot of the Communication Adapter icon."::: | Communication Adapter |
+| :::image type="content" source="media/how-to-work-with-maps/digital.png" alt-text="Screenshot of the Digital I/O icon."::: | Digital I/O |
+| :::image type="content" source="media/how-to-work-with-maps/computer-processor.png" alt-text="Screenshot of the CPU icon."::: | CPU |
+| :::image type="content" source="media/how-to-work-with-maps/HMI-icon.png" alt-text="Screenshot of the HMI icon."::: | HMI |
+| :::image type="content" source="media/how-to-work-with-maps/average.png" alt-text="Screenshot of the Generic icon."::: | Generic |
When you select a slot, the slot details appear. To view the underlying devices behind the slot, select **VIEW ON MAP**. The slot is presented in the device map with all the underlying modules and devices connected to it. ## Manage device information from the map
Certain device properties can be updated manually. Information manually entered
1. Select **View properties**. 1. Select **Edit properties.**
- :::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Dialog that allows user to edit the device properties":::
+ :::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Screenshot of the Edit device property pane.":::
1. Update any of the following: - Authorized status
For example, if you merge two devices, each with an IP address, both IP addresse
The event timeline presents the merge event. You cannot undo a device merge. If you mistakenly merged two devices, delete the device and wait for the sensor to rediscover both.
You cannot undo a device merge. If you mistakenly merged two devices, delete the
3. In the set merge device attributes dialog box, choose a device name.
- :::image type="content" source="media/how-to-work-with-maps/name-the-device-v2.png" alt-text="attributes dialog box":::
+ :::image type="content" source="media/how-to-work-with-maps/name-the-device-v2.png" alt-text="Screenshot of the attributes dialog box.":::
4. Select **Save**.
During the Learning period, all the devices discovered in the network are identi
When a device is discovered after the Learning period, it appears as an unauthorized device. In addition to seeing unauthorized devices in the map, you can also see them in the Device Inventory. **New device vs unauthorized**
Unauthorized devices are included in Risk Assessment reports and Attack Vectors
- **Attack Vector Reports:** Devices marked as unauthorized are resolved in the Attack Vector as suspected rogue devices that might be a threat to the network.
- :::image type="content" source="media/how-to-work-with-maps/attack-vector-reports.png" alt-text="View your attack vector reports.":::
+ :::image type="content" source="media/how-to-work-with-maps/attack-vector-reports.png" alt-text="Screenshot of the attack vector reports.":::
- **Risk Assessment Reports:** Devices marked as unauthorized are identified in Risk Assessment reports.
- :::image type="content" source="media/how-to-work-with-maps/unauthorized-risk-assessment-report.png" alt-text="A Risk Assessment report showing an unauthorized device":::
+ :::image type="content" source="media/how-to-work-with-maps/unauthorized-risk-assessment-report.png" alt-text="Screenshot of a Risk Assessment report showing an unauthorized device.":::
**To authorize or unauthorize devices manually:**
Important devices are calculated when generating Risk Assessment reports and Att
Devices you mark as important on your sensor are also marked as important in the Device inventory on the Defender for IoT portal on Azure.
-## See also
+## Next steps
-[Investigate sensor detections in a Device Inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+For more information, see [Investigate sensor detections in a Device Inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md).
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
To review threat intelligence information:
If cloud connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns in the **Sites and Sensors** page.
-## See also
+## Next steps
-[Onboard a sensor](getting-started.md#onboard-a-sensor)
+For more information, see:
-[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+- [Onboard a sensor](getting-started.md#onboard-a-sensor)
+
+- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot Overview Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview-eiot.md
Defender for IoT helps ensure a quick, frictionless deployment of network sensor
If you experience any issues, we encourage you to contact our customer support team.
-## See also
+## Next steps
-To learn more, see [Tutorial: Get started with enterprise IoT](tutorial-getting-started-eiot-sensor.md).
+For more information, see [Tutorial: Get started with enterprise IoT](tutorial-getting-started-eiot-sensor.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Detect anomalous or unauthorized activities with specialized IoT/OT-aware threat
Integrate into Microsoft Sentinel for a bird's-eye view of your entire organization. Implement unified IoT/OT security governance with integration into your existing workflows, including third-party tools like Splunk, IBM QRadar, and ServiceNow.
-## See also
+## Next steps
-[Microsoft Defender for IoT architecture](architecture.md)
+For more information, see [Microsoft Defender for IoT architecture](architecture.md).
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
When you're using the tool:
- Confirm with IT the appliance domain (as it appears in the certificate) with your DNS server and the corresponding IP address.
-## See also
+## Next steps
-[Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md)
+For more information, see [Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
Use edit and delete options as required. Certain rules are embedded and cannot b
When you create multiple rules, alerts are triggered when any rule condition or condition sets are valid.
-## See also
+## Next steps
-[Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules)
+For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-architecture.md
The route specifies the [backend pool](front-door-backend-pool.md) that the requ
## Evaluate rule sets
-If you have defined [rule sets](standard-premium/concept-rule-set.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](standard-premium/concept-rule-set-actions.md#OriginGroupOverride) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
+If you have defined [rule sets](standard-premium/concept-rule-set.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](front-door-rules-engine-actions.md#origin-group-override) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
::: zone-end
frontdoor Front Door Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md
Title: Azure Front Door Rules Engine actions
-description: This article provides a list of the various actions you can do with Azure Front Door Rules Engine.
+ Title: Azure Front Door Rules actions
+description: This article provides a list of the various actions you can use with the Azure Front Door Rules engine and Rule Set.
Previously updated : 09/29/2020 Last updated : 03/07/2022
-# Customer intent: As an IT admin, I want to learn about Front Door and what new features are available.
+zone_pivot_groups: front-door-tiers
-# Azure Front Door Rules Engine actions
+# Azure Front Door Rules actions
-In [AFD Rules Engine](front-door-rules-engine.md), a rule consists of zero or more match conditions and actions. This article provides detailed descriptions of the actions you can use in AFD Rules Engine.
-An action defines the behavior that's applied to the request type that a match condition or set of match conditions identifies. In AFD Rules Engine, a rule can contain up to five actions. Only one of which may be a route configuration override action (forward or redirect).
+An Azure Front Door Standard/Premium [Rule Set](standard-premium/concept-rule-set.md) consists of rules with a combination of match conditions and actions. This article provides a detailed description of the actions you can use in an Azure Front Door Standard/Premium Rule Set. An action defines the behavior that gets applied to a request type that one or more match conditions identify. In an Azure Front Door (Standard/Premium) Rule Set, a rule can contain up to five actions.
-The following actions are available to use in Azure Front Door rules engine.
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The following actions are available to use in Azure Front Door rule set.
+
+## <a name="CacheExpiration"></a> Cache expiration
+
+Use the **cache expiration** action to overwrite the time to live (TTL) value of the endpoint for requests that the rule's match conditions specify.
+
+> [!NOTE]
+> Origins may specify not to cache specific responses using the `Cache-Control` header with a value of `no-cache`, `private`, or `no-store`. In these circumstances, Front Door will never cache the content and this action will have no effect.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Cache behavior | <ul><li>**Bypass cache:** The content should not be cached. In ARM templates, set the `cacheBehavior` property to `BypassCache`.</li><li>**Override:** The TTL value returned from your origin is overwritten with the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `Override`.</li><li>**Set if missing:** If no TTL value gets returned from your origin, the rule sets the TTL to the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `SetIfMissing`.</li></ul> |
+| Cache duration | When _Cache behavior_ is set to `Override` or `Set if missing`, these fields must specify the cache duration to use. The maximum duration is 366 days.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: specify the duration in the format `d.hh:mm:ss`.</li></ul> |
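As an aside, the `d.hh:mm:ss` string can be produced programmatically. The Python sketch below is illustrative only; the helper name `to_arm_duration` is invented here and isn't part of any Azure SDK. It formats a duration the way an ARM template's `cacheDuration` property expects:

```python
from datetime import timedelta

def to_arm_duration(delta: timedelta) -> str:
    """Format a timedelta as the d.hh:mm:ss string ARM templates expect
    for cacheDuration (maximum 366 days)."""
    if delta > timedelta(days=366):
        raise ValueError("cacheDuration may not exceed 366 days")
    total = int(delta.total_seconds())
    days, rem = divmod(total, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{days}.{hours:02}:{minutes:02}:{seconds:02}"

print(to_arm_duration(timedelta(hours=6)))  # 0.06:00:00
```

A six-hour duration formats as `0.06:00:00`, the same value used in the JSON and Bicep examples in this section.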
+
+### Example
+
+In this example, we override the cache expiration to 6 hours, for matched requests that don't specify a cache duration already.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "CacheExpiration",
+ "parameters": {
+ "cacheBehavior": "SetIfMissing",
+ "cacheType": "All",
+ "cacheDuration": "0.06:00:00",
+ "typeName": "DeliveryRuleCacheExpirationActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'CacheExpiration'
+ parameters: {
+ cacheBehavior: 'SetIfMissing'
+    cacheType: 'All'
+ cacheDuration: '0.06:00:00'
+ typeName: 'DeliveryRuleCacheExpirationActionParameters'
+ }
+}
+```
+++
+## <a name="CacheKeyQueryString"></a> Cache key query string
+
+Use the **cache key query string** action to modify the cache key based on query strings. The cache key is the way that Front Door identifies unique requests to cache.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Behavior | <ul><li>**Include:** Query strings specified in the parameters get included when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Include`.</li><li>**Cache every unique URL:** Each unique URL has its own cache key. In ARM templates, use the `queryStringBehavior` of `IncludeAll`.</li><li>**Exclude:** Query strings specified in the parameters get excluded when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Exclude`.</li><li>**Ignore query strings:** Query strings aren't considered when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `ExcludeAll`.</li></ul> |
+| Parameters | The list of query string parameter names, separated by commas. |
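To make the four behaviors concrete, here's a small Python model of how each one would reduce a query string before it contributes to the cache key. This is purely an illustration, not Front Door's actual implementation, and the function name is invented:

```python
from urllib.parse import parse_qsl, urlencode

def filter_query_for_cache_key(query: str, behavior: str, params: list[str]) -> str:
    """Illustrative model of how each queryStringBehavior would reduce a
    query string before it contributes to the cache key."""
    pairs = parse_qsl(query, keep_blank_values=True)
    if behavior == "Include":
        pairs = [(k, v) for k, v in pairs if k in params]
    elif behavior == "Exclude":
        pairs = [(k, v) for k, v in pairs if k not in params]
    elif behavior == "ExcludeAll":
        pairs = []
    # "IncludeAll" (cache every unique URL) leaves every pair in place
    return urlencode(pairs)

print(filter_query_for_cache_key("id=123&customerId=42", "Include", ["customerId"]))
# customerId=42
```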
+
+### Example
+
+In this example, we modify the cache key to include a query string parameter named `customerId`.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "CacheKeyQueryString",
+ "parameters": {
+ "queryStringBehavior": "Include",
+ "queryParameters": "customerId",
+ "typeName": "DeliveryRuleCacheKeyQueryStringBehaviorActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'CacheKeyQueryString'
+ parameters: {
+ queryStringBehavior: 'Include'
+ queryParameters: 'customerId'
+ typeName: 'DeliveryRuleCacheKeyQueryStringBehaviorActionParameters'
+ }
+}
+```
+++
+## <a name="ModifyRequestHeader"></a> Modify request header
+
+Use the **modify request header** action to modify the headers in the request when it is sent to your origin.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | <ul><li>**Append:** The specified header gets added to the request with the specified value. If the header is already present, the value is appended to the existing header value using string concatenation. No delimiters are added. In ARM templates, use the `headerAction` of `Append`.</li><li>**Overwrite:** The specified header gets added to the request with the specified value. If the header is already present, the specified value overwrites the existing value. In ARM templates, use the `headerAction` of `Overwrite`.</li><li>**Delete:** If the header specified in the rule is present, the header gets deleted from the request. In ARM templates, use the `headerAction` of `Delete`.</li></ul> |
+| Header name | The name of the header to modify. |
+| Header value | The value to append or overwrite. |
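The three operators can be modeled in a few lines of Python (again, an illustration only; Front Door's real implementation isn't public, and the function name is invented). Note in particular that **Append** concatenates with no delimiter:

```python
def apply_header_action(headers: dict, action: str, name: str, value: str = "") -> dict:
    """Illustrative model of the Append / Overwrite / Delete header operators."""
    headers = dict(headers)  # work on a copy
    if action == "Append":
        # string concatenation; no delimiter is inserted
        headers[name] = headers.get(name, "") + value
    elif action == "Overwrite":
        headers[name] = value
    elif action == "Delete":
        headers.pop(name, None)
    return headers

print(apply_header_action({"MyRequestHeader": "ValueSetByClient"},
                          "Append", "MyRequestHeader", "AdditionalValue"))
# {'MyRequestHeader': 'ValueSetByClientAdditionalValue'}
```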
+
+### Example
+
+In this example, we append the value `AdditionalValue` to the `MyRequestHeader` request header. If the client set the request header to a value of `ValueSetByClient`, then after this action is applied, the request header would have a value of `ValueSetByClientAdditionalValue`.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "ModifyRequestHeader",
+ "parameters": {
+ "headerAction": "Append",
+ "headerName": "MyRequestHeader",
+ "value": "AdditionalValue",
+ "typeName": "DeliveryRuleHeaderActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'ModifyRequestHeader'
+ parameters: {
+ headerAction: 'Append'
+ headerName: 'MyRequestHeader'
+ value: 'AdditionalValue'
+ typeName: 'DeliveryRuleHeaderActionParameters'
+ }
+}
+```
+++
+## <a name="ModifyResponseHeader"></a> Modify response header
+
+Use the **modify response header** action to modify headers that are present in responses before they are returned to your clients.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Operator | <ul><li>**Append:** The specified header gets added to the response with the specified value. If the header is already present, the value is appended to the existing header value using string concatenation. No delimiters are added. In ARM templates, use the `headerAction` of `Append`.</li><li>**Overwrite:** The specified header gets added to the response with the specified value. If the header is already present, the specified value overwrites the existing value. In ARM templates, use the `headerAction` of `Overwrite`.</li><li>**Delete:** If the header specified in the rule is present, the header gets deleted from the response. In ARM templates, use the `headerAction` of `Delete`.</li></ul> |
+| Header name | The name of the header to modify. |
+| Header value | The value to append or overwrite. |
+
+### Example
+
+In this example, we delete the header with the name `X-Powered-By` from the responses before they are returned to the client.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "ModifyResponseHeader",
+ "parameters": {
+ "headerAction": "Delete",
+ "headerName": "X-Powered-By",
+ "typeName": "DeliveryRuleHeaderActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'ModifyResponseHeader'
+ parameters: {
+ headerAction: 'Delete'
+ headerName: 'X-Powered-By'
+ typeName: 'DeliveryRuleHeaderActionParameters'
+ }
+}
+```
+++
+## <a name="UrlRedirect"></a> URL redirect
+
+Use the **URL redirect** action to redirect clients to a new URL. Clients are sent a redirection response from Front Door.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Redirect type | The response type to return to the requestor. <ul><li>In the Azure portal: Found (302), Moved (301), Temporary Redirect (307), Permanent Redirect (308).</li><li>In ARM templates: `Found`, `Moved`, `TemporaryRedirect`, `PermanentRedirect`</li></ul> |
+| Redirect protocol | <ul><li>In the Azure portal: `Match Request`, `HTTP`, `HTTPS`</li><li>In ARM templates: `MatchRequest`, `Http`, `Https`</li></ul> |
+| Destination host | The host name you want the request to be redirected to. Leave blank to preserve the incoming host. |
+| Destination path | The path to use in the redirect. Include the leading `/`. Leave blank to preserve the incoming path. |
+| Query string | The query string used in the redirect. Don't include the leading `?`. Leave blank to preserve the incoming query string. |
+| Destination fragment | The fragment to use in the redirect. Leave blank to preserve the incoming fragment. |
+
+### Example
+
+In this example, we redirect the request to `https://contoso.com/exampleredirection?clientIp={client_ip}`, while preserving the fragment. An HTTP Temporary Redirect (307) is used. The IP address of the client is used in place of the `{client_ip}` token within the URL by using the `client_ip` [server variable](#server-variables).
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "UrlRedirect",
+ "parameters": {
+ "redirectType": "TemporaryRedirect",
+ "destinationProtocol": "Https",
+ "customHostname": "contoso.com",
+ "customPath": "/exampleredirection",
+ "customQueryString": "clientIp={client_ip}",
+ "typeName": "DeliveryRuleUrlRedirectActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'UrlRedirect'
+ parameters: {
+ redirectType: 'TemporaryRedirect'
+ destinationProtocol: 'Https'
+ customHostname: 'contoso.com'
+ customPath: '/exampleredirection'
+ customQueryString: 'clientIp={client_ip}'
+ typeName: 'DeliveryRuleUrlRedirectActionParameters'
+ }
+}
+```
+++
+## <a name="UrlRewrite"></a> URL rewrite
+
+Use the **URL rewrite** action to rewrite the path of a request that's en route to your origin.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Source pattern | Define the source pattern in the URL path to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (`/`) as the source pattern value. |
+| Destination | Define the destination path to use in the rewrite. The destination path overwrites the source pattern. |
+| Preserve unmatched path | If set to _Yes_, the remaining path after the source pattern is appended to the new destination path. |
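The prefix-match and _Preserve unmatched path_ semantics described above can be sketched as follows (an illustrative Python model, not Front Door's actual code; the function name is invented):

```python
def rewrite_path(path: str, source_pattern: str, destination: str,
                 preserve_unmatched_path: bool) -> str:
    """Illustrative model of the prefix-based URL rewrite."""
    if not path.startswith(source_pattern):
        return path  # no match; the rewrite doesn't apply
    remainder = path[len(source_pattern):]
    return destination + (remainder if preserve_unmatched_path else "")

print(rewrite_path("/sub/page.html", "/", "/redirection", False))  # /redirection
print(rewrite_path("/sub/page.html", "/", "/redirection", True))   # /redirectionsub/page.html
```

Because the remainder is appended literally, combining _Preserve unmatched path_ with a destination that lacks a trailing `/` can join path segments together, as the second call shows.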
+
+### Example
+
+In this example, we rewrite all requests to the path `/redirection`, and don't preserve the remainder of the path.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "UrlRewrite",
+ "parameters": {
+ "sourcePattern": "/",
+ "destination": "/redirection",
+ "preserveUnmatchedPath": false,
+ "typeName": "DeliveryRuleUrlRewriteActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'UrlRewrite'
+ parameters: {
+ sourcePattern: '/'
+ destination: '/redirection'
+ preserveUnmatchedPath: false
+ typeName: 'DeliveryRuleUrlRewriteActionParameters'
+ }
+}
+```
+++
+## Origin group override
+
+Use the **Origin group override** action to change the origin group that the request should be routed to.
+
+### Properties
+
+| Property | Supported values |
+|-||
+| Origin group | The origin group that the request should be routed to. This overrides the configuration specified in the Front Door endpoint route. |
+
+### Example
+
+In this example, we route all matched requests to an origin group named `SecondOriginGroup`, regardless of the configuration in the Front Door endpoint route.
+
+# [Portal](#tab/portal)
++
+# [JSON](#tab/json)
+
+```json
+{
+ "name": "OriginGroupOverride",
+ "parameters": {
+ "originGroup": {
+ "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup"
+ },
+ "typeName": "DeliveryRuleOriginGroupOverrideActionParameters"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ name: 'OriginGroupOverride'
+ parameters: {
+ originGroup: {
+ id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup'
+ }
+ typeName: 'DeliveryRuleOriginGroupOverrideActionParameters'
+ }
+}
+```
+++
+## Server variables
+
+Rule Set server variables provide access to structured information about the request. You can use server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted.
+
+### Supported variables
+
+| Variable name | Description |
+||-|
+| `socket_ip` | The IP address of the direct connection to the Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of `socket_ip` is the IP address of the proxy or load balancer. |
+| `client_ip` | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is picked from the header. |
+| `client_port` | The IP port of the client that made the request. |
+| `hostname` | The host name in the request from the client. |
+| `geo_country` | Indicates the requester's country/region of origin through its country/region code. |
+| `http_method` | The method used to make the URL request, such as `GET` or `POST`. |
+| `http_version` | The request protocol. Usually `HTTP/1.0`, `HTTP/1.1`, or `HTTP/2.0`. |
+| `query_string` | The list of variable/value pairs that follows the "?" in the requested URL.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `query_string` value will be `id=123&title=fabrikam`. |
+| `request_scheme` | The request scheme: `http` or `https`. |
+| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`. |
+| `ssl_protocol` | The protocol of an established TLS connection. |
+| `server_port` | The port of the server that accepted a request. |
+| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `url_path` value will be `/article.aspx`. |
+
+### Server variable format
+
+Server variables can be specified using the following formats:
+
+* `{variable}`: Include the entire server variable. For example, if the client IP address is `111.222.333.444` then the `{client_ip}` token would evaluate to `111.222.333.444`.
+* `{variable:offset}`: Include the server variable after a specific offset, until the end of the variable. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:3}` token would evaluate to `.222.333.444`.
+* `{variable:offset:length}`: Include the server variable after a specific offset, up to the specified length. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:4:3}` token would evaluate to `222`.
+
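The offset/length selection behaves like a zero-based substring of the variable's value. As an illustrative sketch only (the helper name is hypothetical and not part of the Front Door service), the token expansion can be modeled like this:

```python
def expand_token(value, offset=0, length=None):
    """Hypothetical helper: models how {variable:offset[:length]}
    selects a zero-based slice of a server variable's value."""
    end = None if length is None else offset + length
    return value[offset:end]

client_ip = "111.222.333.444"  # example value used in the formats above

print(expand_token(client_ip))        # {client_ip}     -> 111.222.333.444
print(expand_token(client_ip, 3))     # {client_ip:3}   -> .222.333.444
print(expand_token(client_ip, 4, 3))  # {client_ip:4:3} -> 222
```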
+### Supported actions
+
+Server variables are supported on the following actions:
+
+* Cache key query string
+* Modify request header
+* Modify response header
+* URL redirect
+* URL rewrite
+++
+In Azure Front Door (classic), a [Rules Engine](front-door-rules-engine.md) can consist of up to 25 rules containing match conditions and associated actions. This article provides a detailed description of each action you can define in a rule.
+
+An action defines the behavior that gets applied to the request type that matches the condition or set of match conditions. In the Rules engine configuration, a rule can have up to 10 matching conditions and 5 actions. You can only have one *Override Routing Configuration* action in a single rule.
+
+The following actions are available to use in Rules engine configuration.
## Modify request header
-Use this action to modify headers that are present in requests sent to your origin.
+Use these actions to modify headers that are present in requests sent to your backend.
### Required fields
-Action | HTTP header name | Value
--||
-Append | When this option gets selected and the rule matches, the header specified in **Header name** gets added to the request with the specified value. If the header is already present, the value is appended to the existing value. | String
-Overwrite | When this option is selected and the rule matches, the header specified in **Header name** gets added to the request with the specified value. If the header is already present, the specified value overwrites the existing value. | String
-Delete | When this option gets selected with matching rules and the header specified in the rule is present, the header gets deleted from the request. | String
+| Action | HTTP header name | Value |
+| | - | -- |
+| Append | When this option gets selected and the rule matches, the header specified in **Header name** gets added to the request with the specified value. If the header is already present, the value is appended to the existing value. | String |
+| Overwrite | When this option is selected and the rule matches, the header specified in **Header name** gets added to the request with the specified value. If the header is already present, the specified value overwrites the existing value. | String |
+| Delete | When this option gets selected with matching rules and the header specified in the rule is present, the header gets deleted from the request. | String |
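For reference, an **Append** request-header action expressed in the ARM template (delivery-rule) format used by Azure Front Door Standard/Premium looks like the following; the header name and value are illustrative:

```json
{
  "name": "ModifyRequestHeader",
  "parameters": {
    "headerAction": "Append",
    "headerName": "MyRequestHeader",
    "value": "AdditionalValue",
    "typeName": "DeliveryRuleHeaderActionParameters"
  }
}
```

With this action in place, a request header `MyRequestHeader` already set to `ValueSetByClient` would be forwarded to the origin as `ValueSetByClientAdditionalValue`.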
## Modify response header
-Use this action to modify headers that are present in responses returned to your clients.
+Use these actions to modify headers that are present in responses returned to your clients.
### Required fields
-Action | HTTP Header name | Value
+| Action | HTTP Header name | Value |
-||
-Append | When this option gets selected and the rule matches, the header specified in **Header name** gets added to the response by using the specified **Value**. If the header is already present, **Value** is appended to the existing value. | String
-Overwrite | When this option is selected and the rule matches, the header specified in **Header name** is added to the response by using the specified **Value**. If the header is already present, **Value** overwrites the existing value. | String
-Delete | When this option gets selected and the rule matches the header specified in the rule is present, the header gets deleted from the response. | String
+| Append | When this option gets selected and the rule matches, the header specified in **Header name** gets added to the response by using the specified **Value**. If the header is already present, **Value** is appended to the existing value. | String |
+| Overwrite | When this option is selected and the rule matches, the header specified in **Header name** is added to the response by using the specified **Value**. If the header is already present, **Value** overwrites the existing value. | String |
+| Delete | When this option gets selected with matching rules and the header specified in the rule is present, the header gets deleted from the response. | String |
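As a concrete example in the same ARM template (delivery-rule) format used by Azure Front Door Standard/Premium, the following action deletes the `X-Powered-By` header from responses before they're returned to the client:

```json
{
  "name": "ModifyResponseHeader",
  "parameters": {
    "headerAction": "Delete",
    "headerName": "X-Powered-By",
    "typeName": "DeliveryRuleHeaderActionParameters"
  }
}
```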
## Route configuration overrides

### Route Type: Redirect
-Use this action to redirect clients to a new URL.
+Use these actions to redirect clients to a new URL.
#### Required fields
-Field | Description
-|
-Redirect Type | Select the response type to return to the requestor: Found (302), Moved (301), Temporary redirect (307), and Permanent redirect (308).
-Redirect protocol | Match Request, HTTP, HTTPS.
-Destination host | Select the host name you want the request to be redirected to. Leave blank to preserve the incoming host.
-Destination path | Define the path to use in the redirect. Leave blank to preserve the incoming path.
-Query string | Define the query string used in the redirect. Leave blank to preserve the incoming query string.
-Destination fragment | Define the fragment to use in the redirect. Leave blank to preserve the incoming fragment.
-
+| Field | Description |
+| -- | -- |
+| Redirect type | Redirect is a way to send users/clients from one URL to another. A redirect type sets the status code used by clients to understand the purpose of the redirect. <br><br/>You can select the following redirect status codes: Found (302), Moved (301), Temporary redirect (307), and Permanent redirect (308). |
+| Redirect protocol | Retain the protocol as per the incoming request, or define a new protocol for the redirection. For example, select 'HTTPS' for HTTP to HTTPS redirection. |
+| Destination host | Set this to change the hostname in the URL for the redirection or otherwise retain the hostname from the incoming request. |
+| Destination path | Either retain the path as per the incoming request, or update the path in the URL for the redirection. |
+| Query string | Set this to replace any existing query string from the incoming request URL or otherwise retain the original set of query strings. |
+| Destination fragment | The destination fragment is the portion of URL after '#', normally used by browsers to land on a specific section on a page. Set this to add a fragment to the redirect URL. |
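As an illustration in the ARM template (delivery-rule) format used by Azure Front Door Standard/Premium, the following action issues a Temporary redirect (307) to `https://contoso.com/exampleredirection?clientIp={client_ip}`, where the `{client_ip}` server variable is expanded to the client's IP address:

```json
{
  "name": "UrlRedirect",
  "parameters": {
    "redirectType": "TemporaryRedirect",
    "destinationProtocol": "Https",
    "customHostname": "contoso.com",
    "customPath": "/exampleredirection",
    "customQueryString": "clientIp={client_ip}",
    "typeName": "DeliveryRuleUrlRedirectActionParameters"
  }
}
```

Fields left unset, such as the destination fragment here, preserve the corresponding part of the incoming request.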
### Route Type: Forward
-Use this action to forward clients to a new URL. This action also contains sub actions for URL rewrites and Caching.
+Use these actions to forward clients to a new URL. These actions also contain sub-actions for URL rewrites and caching.
-Field | Description
-|
-Backend pool | Select the backend pool to override and serve the requests, this will also show all your pre-configured backend pools currently in your Front Door profile.
-Forwarding protocol | Match Request, HTTP, HTTPS.
-URL rewrite | Use this action to rewrite the path of a request that's en route to your origin. If enabled, see following additional fields required
-Caching | Enabled, Disabled. See the following additional fields required if enabled.
+| Field | Description |
+| -- | -- |
+| Backend pool | Select the backend pool to override and serve the requests. This list also shows all the backend pools currently configured in your Front Door profile. |
+| Forwarding protocol | The protocol to use when forwarding the request to the backend, or match the protocol of the incoming request. |
+| URL rewrite | The path to use when constructing the request to forward to the backend. |
+| Caching | Enable caching for this routing rule. When enabled, Azure Front Door will cache your static content. |
#### URL rewrite

Use this setting to configure an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend.
-Field | Description
-|
-Custom forwarding path | Define the path to forward the requests to.
+| Field | Description |
+| -- | -- |
+| Custom forwarding path | Define the path to which matched requests are forwarded. |
#### Caching

Use these settings to control how files get cached for requests that contain query strings. You can choose whether to cache your content based on all query string parameters or only on selected parameters. You can use additional settings to overwrite the time to live (TTL) value to control how long contents stay in cache. To force caching as an action, set the caching field to "Enabled." When you force caching, the following options appear:
-Cache behavior | Description
-|-
-Ignore query strings | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires.
-Cache every unique URL | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache.
-Ignore specified query strings | Request URL query strings listed in "Query parameters" setting are ignored for caching.
-Include specified query strings | Request URL query strings listed in "Query parameters" setting are used for caching.
+| Cache behavior | Description |
+| -- | |
+| Ignore query strings | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires. |
+| Cache every unique URL | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache. |
+| Ignore specified query strings | Request URL query strings listed in "Query parameters" setting are ignored for caching. |
+| Include specified query strings | Request URL query strings listed in "Query parameters" setting are used for caching. |
-Additional fields | Description
+| Additional fields | Description |
+| -- | -- |
-Dynamic compression | Front Door can dynamically compress content on the edge, resulting in a smaller and faster response.
-Query parameters | A comma-separated list of allowed (or disallowed) parameters to use as a basis for caching.
-Cache duration | Cache expiration duration in Days, Hours, Minutes, Seconds. All values must be Int.
+| Dynamic compression | Front Door can dynamically compress content on the edge, resulting in a smaller and faster response. |
+| Query parameters | A comma-separated list of allowed or disallowed parameters to use as a basis for caching. |
+| Use default cache duration | Set to use the Azure Front Door default caching duration, or define a caching duration that ignores the origin response directive. |
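For reference, a cache-expiration override expressed in the ARM template (delivery-rule) format used by Azure Front Door Standard/Premium sets a 6-hour TTL for matched requests that don't already specify a cache duration:

```json
{
  "name": "CacheExpiration",
  "parameters": {
    "cacheBehavior": "SetIfMissing",
    "cacheType": "All",
    "cacheDuration": "0.06:00:00",
    "typeName": "DeliveryRuleCacheExpirationActionParameters"
  }
}
```

The `cacheDuration` uses the `d.hh:mm:ss` format; `SetIfMissing` applies the TTL only when the origin doesn't return one.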
+## Next steps
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
For rules that can transform strings, the following transforms are valid:
* Learn more about Azure Front Door [Rules Engine](front-door-rules-engine.md)
* Learn how to [configure your first Rules Engine](front-door-tutorial-rules-engine.md).
-* Learn more about [Rules Engine actions](front-door-rules-engine-actions.md)
+* Learn more about [Rules actions](front-door-rules-engine-actions.md)
::: zone-end
For rules that can transform strings, the following transforms are valid:
* Learn more about Azure Front Door Standard/Premium [Rule Set](standard-premium/concept-rule-set.md).
* Learn how to [configure your first Rule Set](standard-premium/how-to-configure-rule-set.md).
-* Learn more about [Rule Set actions](standard-premium/concept-rule-set-actions.md).
+* Learn more about [Rule actions](front-door-rules-engine-actions.md).
::: zone-end
frontdoor Concept Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-caching.md
Cache behavior and duration can be configured in both the Front Door designer ro
## Next steps

* Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md)
-* Learn more about [Rule Set Actions](concept-rule-set-actions.md)
+* Learn more about [Rule Set Actions](../front-door-rules-engine-actions.md)
frontdoor Concept Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-route.md
A Front Door Standard/Premium routing configuration is composed of two major par
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

> [!NOTE]
-> When you use the [Front Door rules engine](concept-rule-set.md), you can configure a rule to [override the origin group](concept-rule-set-actions.md#OriginGroupOverride) for a request. The origin group set by the rules engine overrides the routing process described in this article.
+> When you use the [Front Door rules engine](concept-rule-set.md), you can configure a rule to [override the origin group](../front-door-rules-engine-actions.md#origin-group-override) for a request. The origin group set by the rules engine overrides the routing process described in this article.
### Incoming match (left-hand side)
Given that configuration, the following example matching table would result:
Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the request gets served back to the client.
-Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](concept-rule-set.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](concept-rule-set-actions.md#OriginGroupOverride), forcing traffic to a specific origin group.
+Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](concept-rule-set.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](../front-door-rules-engine-actions.md#origin-group-override), forcing traffic to a specific origin group.
## Next steps
frontdoor Concept Rule Set Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set-actions.md
- Title: Configure Azure Front Door Standard/Premium rule set actions
-description: This article provides a list of the various actions you can do with Azure Front Door rule set.
---- Previously updated : 03/03/2022---
-# Azure Front Door Standard/Premium (Preview) Rule Set actions
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-An Azure Front Door Standard/Premium [Rule Set](concept-rule-set.md) consist of rules with a combination of match conditions and actions. This article provides a detailed description of the actions you can use in Azure Front Door Standard/Premium Rule Set. The action defines the behavior that gets applied to a request type that a match condition(s) identifies. In an Azure Front Door (Standard/Premium) Rule Set, a rule can contain up to five actions.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-The following actions are available to use in Azure Front Door rule set.
-
-## <a name="CacheExpiration"></a> Cache expiration
-
-Use the **cache expiration** action to overwrite the time to live (TTL) value of the endpoint for requests that the rules match conditions specify.
-
-> [!NOTE]
-> Origins may specify not to cache specific responses using the `Cache-Control` header with a value of `no-cache`, `private`, or `no-store`. In these circumstances, Front Door will never cache the content and this action will have no effect.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Cache behavior | <ul><li>**Bypass cache:** The content should not be cached. In ARM templates, set the `cacheBehavior` property to `BypassCache`.</li><li>**Override:** The TTL value returned from your origin is overwritten with the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `Override`.</li><li>**Set if missing:** If no TTL value gets returned from your origin, the rule sets the TTL to the value specified in the action. This behavior will only be applied if the response is cacheable. In ARM templates, set the `cacheBehavior` property to `SetIfMissing`.</li></ul> |
-| Cache duration | When _Cache behavior_ is set to `Override` or `Set if missing`, these fields must specify the cache duration to use. The maximum duration is 366 days.<ul><li>In the Azure portal: specify the days, hours, minutes, and seconds.</li><li>In ARM templates: specify the duration in the format `d.hh:mm:ss`. |
-
-### Example
-
-In this example, we override the cache expiration to 6 hours, for matched requests that don't specify a cache duration already.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "CacheExpiration",
- "parameters": {
- "cacheBehavior": "SetIfMissing",
- "cacheType": "All",
- "cacheDuration": "0.06:00:00",
- "typeName": "DeliveryRuleCacheExpirationActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'CacheExpiration'
- parameters: {
- cacheBehavior: 'SetIfMissing'
- cacheType: All
- cacheDuration: '0.06:00:00'
- typeName: 'DeliveryRuleCacheExpirationActionParameters'
- }
-}
-```
---
-## <a name="CacheKeyQueryString"></a> Cache key query string
-
-Use the **cache key query string** action to modify the cache key based on query strings. The cache key is the way that Front Door identifies unique requests to cache.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Behavior | <ul><li>**Include:** Query strings specified in the parameters get included when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Include`.</li><li>**Cache every unique URL:** Each unique URL has its own cache key. In ARM templates, use the `queryStringBehavior` of `IncludeAll`.</li><li>**Exclude:** Query strings specified in the parameters get excluded when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `Exclude`.</li><li>**Ignore query strings:** Query strings aren't considered when the cache key gets generated. In ARM templates, set the `queryStringBehavior` property to `ExcludeAll`.</li></ul> |
-| Parameters | The list of query string parameter names, separated by commas. |
-
-### Example
-
-In this example, we modify the cache key to include a query string parameter named `customerId`.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "CacheKeyQueryString",
- "parameters": {
- "queryStringBehavior": "Include",
- "queryParameters": "customerId",
- "typeName": "DeliveryRuleCacheKeyQueryStringBehaviorActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'CacheKeyQueryString'
- parameters: {
- queryStringBehavior: 'Include'
- queryParameters: 'customerId'
- typeName: 'DeliveryRuleCacheKeyQueryStringBehaviorActionParameters'
- }
-}
-```
---
-## <a name="ModifyRequestHeader"></a> Modify request header
-
-Use the **modify request header** action to modify the headers in the request when it is sent to your origin.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Operator | <ul><li>**Append:** The specified header gets added to the request with the specified value. If the header is already present, the value is appended to the existing header value using string concatenation. No delimiters are added. In ARM templates, use the `headerAction` of `Append`.</li><li>**Overwrite:** The specified header gets added to the request with the specified value. If the header is already present, the specified value overwrites the existing value. In ARM templates, use the `headerAction` of `Overwrite`.</li><li>**Delete:** If the header specified in the rule is present, the header gets deleted from the request. In ARM templates, use the `headerAction` of `Delete`.</li></ul> |
-| Header name | The name of the header to modify. |
-| Header value | The value to append or overwrite. |
-
-### Example
-
-In this example, we append the value `AdditionalValue` to the `MyRequestHeader` request header. If the origin set the response header to a value of `ValueSetByClient`, then after this action is applied, the request header would have a value of `ValueSetByClientAdditionalValue`.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "ModifyRequestHeader",
- "parameters": {
- "headerAction": "Append",
- "headerName": "MyRequestHeader",
- "value": "AdditionalValue",
- "typeName": "DeliveryRuleHeaderActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'ModifyRequestHeader'
- parameters: {
- headerAction: 'Append'
- headerName: 'MyRequestHeader'
- value: 'AdditionalValue'
- typeName: 'DeliveryRuleHeaderActionParameters'
- }
-}
-```
---
-## <a name="ModifyResponseHeader"></a> Modify response header
-
-Use the **modify response header** action to modify headers that are present in responses before they are returned to your clients.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Operator | <ul><li>**Append:** The specified header gets added to the response with the specified value. If the header is already present, the value is appended to the existing header value using string concatenation. No delimiters are added. In ARM templates, use the `headerAction` of `Append`.</li><li>**Overwrite:** The specified header gets added to the response with the specified value. If the header is already present, the specified value overwrites the existing value. In ARM templates, use the `headerAction` of `Overwrite`.</li><li>**Delete:** If the header specified in the rule is present, the header gets deleted from the response. In ARM templates, use the `headerAction` of `Delete`.</li></ul> |
-| Header name | The name of the header to modify. |
-| Header value | The value to append or overwrite. |
-
-### Example
-
-In this example, we delete the header with the name `X-Powered-By` from the responses before they are returned to the client.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "ModifyResponseHeader",
- "parameters": {
- "headerAction": "Delete",
- "headerName": "X-Powered-By",
- "typeName": "DeliveryRuleHeaderActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'ModifyResponseHeader'
- parameters: {
- headerAction: 'Delete'
- headerName: 'X-Powered-By'
- typeName: 'DeliveryRuleHeaderActionParameters'
- }
-}
-```
---
-## <a name="UrlRedirect"></a> URL redirect
-
-Use the **URL redirect** action to redirect clients to a new URL. Clients are sent a redirection response from Front Door.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Redirect type | The response type to return to the requestor. <ul><li>In the Azure portal: Found (302), Moved (301), Temporary Redirect (307), Permanent Redirect (308).</li><li>In ARM templates: `Found`, `Moved`, `TemporaryRedirect`, `PermanentRedirect`</li></ul> |
-| Redirect protocol | <ul><li>In the Azure portal: `Match Request`, `HTTP`, `HTTPS`</li><li>In ARM templates: `MatchRequest`, `Http`, `Https`</li></ul> |
-| Destination host | The host name you want the request to be redirected to. Leave blank to preserve the incoming host. |
-| Destination path | The path to use in the redirect. Include the leading `/`. Leave blank to preserve the incoming path. |
-| Query string | The query string used in the redirect. Don't include the leading `?`. Leave blank to preserve the incoming query string. |
-| Destination fragment | The fragment to use in the redirect. Leave blank to preserve the incoming fragment. |
-
-### Example
-
-In this example, we redirect the request to `https://contoso.com/exampleredirection?clientIp={client_ip}`, while preserving the fragment. An HTTP Temporary Redirect (307) is used. The IP address of the client is used in place of the `{client_ip}` token within the URL by using the `client_ip` [server variable](#server-variables).
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "UrlRedirect",
- "parameters": {
- "redirectType": "TemporaryRedirect",
- "destinationProtocol": "Https",
- "customHostname": "contoso.com",
- "customPath": "/exampleredirection",
- "customQueryString": "clientIp={client_ip}",
- "typeName": "DeliveryRuleUrlRedirectActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'UrlRedirect'
- parameters: {
- redirectType: 'TemporaryRedirect'
- destinationProtocol: 'Https'
- customHostname: 'contoso.com'
- customPath: '/exampleredirection'
- customQueryString: 'clientIp={client_ip}'
- typeName: 'DeliveryRuleUrlRedirectActionParameters'
- }
-}
-```
---
-## <a name="UrlRewrite"></a> URL rewrite
-
-Use the **URL rewrite** action to rewrite the path of a request that's en route to your origin.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Source pattern | Define the source pattern in the URL path to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (`/`) as the source pattern value. |
-| Destination | Define the destination path to use in the rewrite. The destination path overwrites the source pattern. |
-| Preserve unmatched path | If set to _Yes_, the remaining path after the source pattern is appended to the new destination path. |
-
-### Example
-
-In this example, we rewrite all requests to the path `/redirection`, and don't preserve the remainder of the path.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "UrlRewrite",
- "parameters": {
- "sourcePattern": "/",
- "destination": "/redirection",
- "preserveUnmatchedPath": false,
- "typeName": "DeliveryRuleUrlRewriteActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'UrlRewrite'
- parameters: {
- sourcePattern: '/'
- destination: '/redirection'
- preserveUnmatchedPath: false
- typeName: 'DeliveryRuleUrlRewriteActionParameters'
- }
-}
-```
---
-## <a name="OriginGroupOverride"></a> Origin group override
-
-Use the **Origin group override** action to change the origin group that the request should be routed to.
-
-### Properties
-
-| Property | Supported values |
-|-||
-| Origin group | The origin group that the request should be routed to. This overrides the configuration specified in the Front Door endpoint route. |
-
-### Example
-
-In this example, we route all matched requests to an origin group named `SecondOriginGroup`, regardless of the configuration in the Front Door endpoint route.
-
-# [Portal](#tab/portal)
--
-# [JSON](#tab/json)
-
-```json
-{
- "name": "OriginGroupOverride",
- "parameters": {
- "originGroup": {
- "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup"
- },
- "typeName": "DeliveryRuleOriginGroupOverrideActionParameters"
- }
-}
-```
-
-# [Bicep](#tab/bicep)
-
-```bicep
-{
- name: 'OriginGroupOverride'
- parameters: {
- originGroup: {
- id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup'
- }
- typeName: 'DeliveryRuleOriginGroupOverrideActionParameters'
- }
-}
-```
---
-## Server variables
-
-Rule Set server variables provide access to structured information about the request. You can use server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page load or when a form is posted.
-
-### Supported variables
-
-| Variable name | Description |
-||-|
-| `socket_ip` | The IP address of the direct connection to Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of `socket_ip` is the IP address of the proxy or load balancer. |
-| `client_ip` | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is picked from the header. |
-| `client_port` | The IP port of the client that made the request. |
-| `hostname` | The host name in the request from the client. |
-| `geo_country` | Indicates the requester's country/region of origin through its country/region code. |
-| `http_method` | The method used to make the URL request, such as `GET` or `POST`. |
-| `http_version` | The request protocol. Usually `HTTP/1.0`, `HTTP/1.1`, or `HTTP/2.0`. |
-| `query_string` | The list of variable/value pairs that follows the "?" in the requested URL.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `query_string` value will be `id=123&title=fabrikam`. |
-| `request_scheme` | The request scheme: `http` or `https`. |
-| `request_uri` | The full original request URI (with arguments).<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `request_uri` value will be `/article.aspx?id=123&title=fabrikam`. |
-| `ssl_protocol` | The protocol of an established TLS connection. |
-| `server_port` | The port of the server that accepted a request. |
-| `url_path` | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments.<br />For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the `uri_path` value will be `/article.aspx`. |
-
-### Server variable format
-
-Server variables can be specified using the following formats:
-
-* `{variable}`: Include the entire server variable. For example, if the client IP address is `111.222.333.444` then the `{client_ip}` token would evaluate to `111.222.333.444`.
-* `{variable:offset}`: Include the server variable after a specific offset, until the end of the variable. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:3}` token would evaluate to `.222.333.444`.
-* `{variable:offset:length}`: Include the server variable after a specific offset, up to the specified length. The offset is zero-based. For example, if the client IP address is `111.222.333.444` then the `{client_ip:4:3}` token would evaluate to `222`.
-
-### Supported actions
-
-Server variables are supported on the following actions:
-
-* Cache key query string
-* Modify request header
-* Modify response header
-* URL redirect
-* URL rewrite
-
-## Next steps
-
-* Learn more about [Azure Front Door Standard/Premium Rule Set](concept-rule-set.md).
-* Learn more about [Rule Set match conditions](concept-rule-set-match-conditions.md).
frontdoor Concept Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set.md
A Rule Set is a customized rule engine that groups a combination of rules into a
* Add, modify, or remove request/response header to hide sensitive information or capture important information through headers.
-* Support server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page load or when a form is posted. Server variable is currently supported on **[Rule Set actions](concept-rule-set-actions.md)** only.
+* Support server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted. Server variables are currently supported on **[Rule Set actions](../front-door-rules-engine-actions.md)** only.
## Architecture
For more quota limit, refer to [Azure subscription and service limits, quotas an
* *Match condition*: There are many match conditions that can be utilized to parse your incoming requests. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. *Regular expression is supported in conditions*. A full list of match conditions can be found in [Rule Set match conditions](concept-rule-set-match-conditions.md).
-* *Action*: Actions dictate how AFD handles the incoming requests based on the matching conditions. You can modify caching behaviors, modify request headers/response headers, do URL rewrite and URL redirection. *Server variables are supported on Action*. A rule can contain up to 10 match conditions. A full list of actions can be found [Rule Set actions](concept-rule-set-actions.md).
+* *Action*: Actions dictate how AFD handles the incoming requests based on the matching conditions. You can modify caching behaviors, modify request/response headers, and perform URL rewrites and URL redirects. *Server variables are supported on actions*. A rule can contain up to 10 match conditions. A full list of actions can be found in [Rule Set actions](../front-door-rules-engine-actions.md).
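Because a rule's match conditions are combined with an AND operator, its actions run only when every condition holds. A minimal Python sketch of that evaluation model (the conditions and request fields here are hypothetical examples, not Front Door API objects):

```python
def rule_matches(request: dict, conditions: list) -> bool:
    # Match conditions are ANDed: every condition must hold
    # for the rule's actions to run.
    return all(condition(request) for condition in conditions)

# Two example conditions: a URL-path prefix match and a TLS check.
conditions = [
    lambda r: r["url_path"].startswith("/article"),
    lambda r: r["ssl_protocol"] is not None,
]

print(rule_matches({"url_path": "/article.aspx", "ssl_protocol": "TLSv1.2"}, conditions))  # True
print(rule_matches({"url_path": "/home", "ssl_protocol": "TLSv1.2"}, conditions))          # False
```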
## ARM template support
-Rule Sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](concept-rule-set-match-conditions.md) and [actions](concept-rule-set-actions.md).
+Rule Sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](concept-rule-set-match-conditions.md) and [actions](../front-door-rules-engine-actions.md).
## Next steps
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
pages have a green 'Try It' button on each operation that allows you to try it r
Use ARMClient or a similar tool to handle authentication to Azure for the REST API examples.
+> [!NOTE]
+> Currently, the "reason for non-compliance" can't be retrieved from the command line. We're working on mapping the reason code to the "reason for non-compliance"; at this point, there's no ETA.
+ ### Summarize results With the REST API, summarization can be performed by container, definition, or assignment. Here is
hdinsight Cluster Reboot Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-reboot-vm.md
You can use the **Try it** feature in the API doc to send requests to HDInsight.
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/listHosts?api-version=2018-06-01-preview ```
-1. Restart hosts. After you get the names of the nodes that you want to reboot, restart the nodes by using the REST API to reboot the nodes. The node name follows the pattern of *NodeType(wn/hn/zk)* + *x* + *first six characters of cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/2021-06-01/virtual-machines/restart-hosts).
+1. Restart hosts. After you get the names of the nodes that you want to reboot, restart the nodes by using the REST API to reboot the nodes. The node name follows the pattern of *NodeType(wn/hn/zk/gw/id)* + *x* + *first six characters of cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/2021-06-01/virtual-machines/restart-hosts).
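The node-name pattern described above can be sketched in Python, reading *x* as the node index. This is an illustrative helper under that assumption, not part of any HDInsight SDK:

```python
def restart_host_name(node_type: str, index: int, cluster_name: str) -> str:
    """Build a host name following the documented pattern:
    node type + index + first six characters of the cluster name."""
    if node_type not in {"wn", "hn", "zk", "gw", "id"}:
        raise ValueError(f"unknown node type: {node_type}")
    return f"{node_type}{index}{cluster_name[:6]}"

print(restart_host_name("wn", 0, "contosocluster"))  # wn0contos
```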
``` POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/restartHosts?api-version=2018-06-01-preview
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
Create an application rule collection that allows the cluster to send and receiv
| Rule_2 | * | https:443 | login.windows.net | Allows Windows login activity | | Rule_3 | * | https:443 | login.microsoftonline.com | Allows Windows login activity | | Rule_4 | * | https:443 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. Make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you are using Private endpoint to access storage accounts, this step is not needed and storage traffic is not forwarded to the firewall.|
- | Rule_5 | * | https:443 | azure.archive.ubuntu.com | Allows access to updates for security packages. |
+ | Rule_5 | * | https:443 | azure.archive.ubuntu.com | Allows Ubuntu security updates to be installed on the cluster |
:::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-app-rule-collection-details.png" alt-text="Title: Enter application rule collection details":::
industrial-iot Reference Command Line Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-command-line-arguments.md
There are a couple of environment variables, which can be used to control the ap
--tb, --addtrustedcertbase64=VALUE adds the certificate to the applications trusted cert store passed in as base64 string (multiple
- comma-seperated strings supported)
+ comma-separated strings supported)
--tf, --addtrustedcertfile=VALUE adds the certificate file(s) to the applications trusted cert store passed in as base64 string (
- multiple comma-seperated filenames supported)
+ multiple comma-separated filenames supported)
--ib, --addissuercertbase64=VALUE adds the specified issuer certificate to the applications trusted issuer cert store passed in
- as base64 string (multiple comma-seperated strings supported)
+ as base64 string (multiple comma-separated strings supported)
--if, --addissuercertfile=VALUE adds the specified issuer certificate file(s) to the applications trusted issuer cert store (
- multiple comma-seperated filenames supported)
+ multiple comma-separated filenames supported)
--rb, --updatecrlbase64=VALUE update the CRL passed in as base64 string to the corresponding cert store (trusted or trusted
There are a couple of environment variables, which can be used to control the ap
issuer) --rc, --removecert=VALUE remove cert(s) with the given thumbprint(s) (
- multiple comma-seperated thumbprints supported)
+ multiple comma-separated thumbprints supported)
--dt, --devicecertstoretype=VALUE the iothub device cert store type. (allowed values: Directory, X509Store)
Further resources can be found in the GitHub repositories:
> [OPC Publisher GitHub repository](https://github.com/Azure/Industrial-IoT) > [!div class="nextstepaction"]
-> [IIoT Platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
+> [IIoT Platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
iot-edge About Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/about-iot-edge.md
Title: What is Azure IoT Edge | Microsoft Docs description: Overview of the Azure IoT Edge service-+ # this is the PM responsible
Last updated 10/28/2019-+
iot-edge Deploy Confidential Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-confidential-applications.md
Title: Confidential applications as Azure IoT Edge modules description: Use the Open Enclave SDK and API to write confidential applications and deploy them as IoT Edge modules for confidential computing-+ Last updated 01/27/2021-+ # Confidential computing at the edge
iot-edge Deploy Modbus Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/deploy-modbus-gateway.md
Title: Translate modbus protocols with gateways - Azure IoT Edge | Microsoft Docs description: Allow devices that use Modbus TCP to communicate with Azure IoT Hub by creating an IoT Edge gateway device-+ Last updated 11/19/2019-+ # Connect Modbus TCP devices through an IoT Edge device gateway
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
Title: Azure IoT Edge development environment | Microsoft Docs description: Learn about the supported systems and first-party development tools that will help you create IoT Edge modules-+ -+ Last updated 01/04/2019
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/gpu-acceleration.md
Title: GPU acceleration for Azure IoT Edge for Linux on Windows | Microsoft Docs description: Learn about how to configure your Azure IoT Edge for Linux on Windows virtual machines to use host device GPUs.-+ -+ Last updated 06/22/2021
iot-edge How To Access Built In Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-built-in-metrics.md
Title: Access built-in metrics - Azure IoT Edge description: Remote access to built-in metrics from the IoT Edge runtime components-+ -+ Last updated 06/25/2021
iot-edge How To Access Host Storage From Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-host-storage-from-module.md
Title: Use IoT Edge device local storage from a module - Azure IoT Edge | Microsoft Docs description: Use environment variables and create options to enable module access to IoT Edge device local storage.-+ -+ Last updated 08/14/2020
iot-edge How To Authenticate Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-authenticate-downstream-device.md
Title: Authenticate downstream devices - Azure IoT Edge | Microsoft Docs description: How to authenticate downstream devices or leaf devices to IoT Hub, and route their connection through Azure IoT Edge gateway devices. -+ -+ Last updated 10/15/2020
iot-edge How To Configure Api Proxy Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-api-proxy-module.md
Title: Configure API proxy module - Azure IoT Edge | Microsoft Docs description: Learn how to customize the API proxy module for IoT Edge gateway hierarchies.-+ -+ Last updated 11/10/2020
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
Title: Configure devices for network proxies - Azure IoT Edge | Microsoft Docs description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server. --++ Last updated 02/28/2022
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
Title: Connect downstream devices - Azure IoT Edge | Microsoft Docs description: How to configure downstream or leaf devices to connect to Azure IoT Edge gateway devices. -+ -+ Last updated 10/15/2020
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Title: Connect downstream IoT Edge devices - Azure IoT Edge | Microsoft Docs description: How to configure an IoT Edge device to connect to Azure IoT Edge gateway devices. -+ -+ Last updated 02/28/2022
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
Title: Continuous integration and continuous deployment to Azure IoT Edge devices (classic editor) - Azure IoT Edge description: Set up continuous integration and continuous deployment using the classic editor - Azure IoT Edge with Azure DevOps, Azure Pipelines-+ -+ Last updated 08/26/2021
iot-edge How To Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md
Title: Continuous integration and continuous deployment to Azure IoT Edge devices - Azure IoT Edge description: Set up continuous integration and continuous deployment using YAML - Azure IoT Edge with Azure DevOps, Azure Pipelines-+ -+ Last updated 08/20/2019
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
Title: Create an IoT Edge device - Azure IoT Edge | Microsoft Docs description: Learn about the platform and provisioning options for creating an IoT Edge device-+ Last updated 11/11/2021-+ # Create an IoT Edge device
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-test-certificates.md
Title: Create test certificates - Azure IoT Edge | Microsoft Docs description: Create test certificates and learn how to install them on an Azure IoT Edge device to prepare for production deployment. -+ -+ Last updated 01/03/2022
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
Title: Create transparent gateway device - Azure IoT Edge | Microsoft Docs description: Use an Azure IoT Edge device as a transparent gateway that can process information from downstream devices-+ -+ Last updated 03/01/2021
iot-edge How To Create Virtual Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-virtual-switch.md
Title: Create virtual switch for Azure IoT Edge for Linux on Windows | Microsoft Docs description: Installations for creating a virtual switch for Azure IoT Edge for Linux on Windows-+ Last updated 11/30/2021-+ # Azure IoT Edge for Linux on Windows virtual switch creation
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
Title: Deploy modules at scale in Azure portal - Azure IoT Edge description: Use the Azure portal to create automatic deployments for groups of IoT Edge devices keywords: -+ -+ Last updated 10/13/2020
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
Title: Deploy blob storage on module to your device - Azure IoT Edge description: Deploy an Azure Blob Storage module to your IoT Edge device to store data at the edge.--++ Last updated 3/10/2020
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-cli-at-scale.md
Title: Deploy modules at scale using Azure CLI - Azure IoT Edge description: Use the IoT extension for the Azure CLI to create automatic deployments for groups of IoT Edge devices. keywords: -+ -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-cli.md
Title: Deploy modules from the Azure CLI command line - Azure IoT Edge description: Use the Azure CLI with the Azure IoT Extension to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.-+ -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-portal.md
Title: Deploy modules from Azure portal - Azure IoT Edge description: Use your IoT Hub in the Azure portal to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.-+ -+ Last updated 10/13/2020
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
Title: Deploy modules from Visual Studio Code - Azure IoT Edge description: Use Visual Studio Code with the Azure IoT Tools to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.-+ -+ Last updated 10/13/2020
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
Title: Deploy modules at scale using Visual Studio Code - Azure IoT Edge description: Use the IoT extension for Visual Studio Code to create automatic deployments for groups of IoT Edge devices. keywords: -+ -+ Last updated 1/8/2020
iot-edge How To Devops Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-devops-starter.md
Title: CI/CD pipeline with Azure DevOps Starter - Azure IoT Edge | Microsoft Docs description: Azure DevOps Starter makes it easy to get started on Azure. It helps you launch an Azure IoT Edge app of your choice in few quick steps.--++ Last updated 08/25/2020
iot-edge How To Edgeagent Direct Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-edgeagent-direct-method.md
Title: Built-in edgeAgent direct methods - Azure IoT Edge description: Monitor and manage an IoT Edge deployment using built-in direct methods in the IoT Edge agent runtime module-+ -+ Last updated 03/02/2020
iot-edge How To Install Iot Edge Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-kubernetes.md
Title: How to install IoT Edge on Kubernetes | Microsoft Docs description: Learn on how to install IoT Edge on Kubernetes using a local development cluster environment-+ -++ Last updated 12/09/2021
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Title: Manage device certificates - Azure IoT Edge | Microsoft Docs description: Create test certificates, install, and manage them on an Azure IoT Edge device to prepare for production deployment. -+ -+ Last updated 08/24/2021
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
Title: Monitor IoT Edge deployments - Azure IoT Edge description: High-level monitoring including edgeHub and edgeAgent reported properties and automatic deployment metrics. -+ -+ Last updated 04/21/2020
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
Title: Monitor module twins - Azure IoT Edge description: How to interpret device twins and module twins to determine connectivity and health.-+ -+ Last updated 05/29/2020
iot-edge How To Provision Devices At Scale Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
Title: Create and provision IoT Edge devices using symmetric keys on Linux on Windows - Azure IoT Edge | Microsoft Docs description: Use symmetric key attestation to test provisioning Linux on Windows devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 02/09/2022
iot-edge How To Provision Devices At Scale Linux On Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-tpm.md
Title: Create and provision an IoT Edge for Linux on Windows device by using a TPM - Azure IoT Edge | Microsoft Docs description: Use a simulated TPM on a Linux on Windows device to test the Azure device provisioning service for Azure IoT Edge.-+ -+ Last updated 02/09/2022
iot-edge How To Provision Devices At Scale Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-x509.md
Title: Create and provision IoT Edge devices using X.509 certificates on Linux on Windows - Azure IoT Edge | Microsoft Docs description: Use X.509 certificate attestation to test provisioning devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 02/09/2022
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Title: Create and provision IoT Edge devices using symmetric keys on Linux - Azure IoT Edge | Microsoft Docs description: Use symmetric key attestation to test provisioning Linux devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 10/29/2021
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
Title: Create and provision devices with a virtual TPM on Linux - Azure IoT Edge description: Use a simulated TPM on a Linux device to test the Azure IoT Hub device provisioning service for Azure IoT Edge.-+ -+ Last updated 10/28/2021
iot-edge How To Provision Devices At Scale Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-x509.md
Title: Create and provision IoT Edge devices at scale using X.509 certificates on Linux - Azure IoT Edge | Microsoft Docs description: Use X.509 certificates to test provisioning devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 02/28/2022
iot-edge How To Provision Devices At Scale Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-windows-symmetric.md
Title: Create and provision IoT Edge devices using symmetric keys on Windows - Azure IoT Edge | Microsoft Docs description: Use symmetric key attestation to test provisioning Windows devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 10/27/2021
iot-edge How To Provision Devices At Scale Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-windows-tpm.md
Title: Create and provision devices with a virtual TPM on Windows - Azure IoT Edge | Microsoft Docs description: Use a simulated TPM on a Windows device to test the Azure device provisioning service for Azure IoT Edge--++ Last updated 10/28/2021
iot-edge How To Provision Devices At Scale Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-windows-x509.md
Title: Create and provision IoT Edge devices at scale using X.509 certificates on Windows - Azure IoT Edge | Microsoft Docs description: Use X.509 certificates to test provisioning devices at scale for Azure IoT Edge with device provisioning service--++ Last updated 10/28/2021
iot-edge How To Provision Single Device Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-symmetric.md
Title: Create and provision an IoT Edge for Linux on Windows device using symmetric keys - Azure IoT Edge | Microsoft Docs description: Create and provision a single IoT Edge for Linux on Windows device in IoT Hub using manual provisioning with symmetric keys-+ Last updated 02/09/2022-+ # Create and provision an IoT Edge for Linux on Windows device using symmetric keys
iot-edge How To Provision Single Device Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-x509.md
Title: Create and provision an IoT Edge for Linux on Windows device using X.509 certificates - Azure IoT Edge | Microsoft Docs description: Create and provision a single IoT Edge for Linux on Windows device in IoT Hub using manual provisioning with X.509 certificates-+ Last updated 02/09/2022-+ # Create and provision an IoT Edge for Linux on Windows device using X.509 certificates
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
Title: Create and provision an IoT Edge device on Linux using symmetric keys - Azure IoT Edge | Microsoft Docs description: Create and provision a single IoT Edge device in IoT Hub for manual provisioning with symmetric keys-+ Last updated 11/01/2021-+ # Create and provision an IoT Edge device on Linux using symmetric keys
iot-edge How To Provision Single Device Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-x509.md
Title: Create and provision an IoT Edge device on Linux using X.509 certificates - Azure IoT Edge | Microsoft Docs description: Create and provision a single IoT Edge device in IoT Hub for manual provisioning with X.509 certificates-+ Last updated 10/28/2021-+ # Create and provision an IoT Edge device on Linux using X.509 certificates
iot-edge How To Provision Single Device Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-windows-symmetric.md
Title: Create and provision an IoT Edge device on Windows using symmetric keys - Azure IoT Edge | Microsoft Docs description: Create and provision a single Windows IoT Edge device in IoT Hub using manual provisioning with symmetric keys-+ Last updated 10/28/2021-+ monikerRange: "iotedge-2018-06"
iot-edge How To Provision Single Device Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-windows-x509.md
Title: Create and provision an IoT Edge device on Windows using X.509 certificates - Azure IoT Edge | Microsoft Docs description: Create and provision a single Windows IoT Edge device in IoT Hub using manual provisioning with X.509 certificates-+ Last updated 10/29/2021-+ monikerRange: "iotedge-2018-06"
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
Title: Publish and subscribe with Azure IoT Edge | Microsoft Docs
description: Use IoT Edge MQTT broker to publish and subscribe messages keywords: --++ Last updated 11/30/2021
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
Title: Retrieve IoT Edge logs - Azure IoT Edge description: IoT Edge module log retrieval and upload to Azure Blob Storage. -+ -+ Last updated 11/12/2020
iot-edge How To Store Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-store-data-blob.md
Title: Store block blobs on devices - Azure IoT Edge | Microsoft Docs description: Understand tiering and time-to-live features, see supported blob storage operations, and connect to your blob storage account.--++ Last updated 12/13/2019
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Title: Update IoT Edge version on devices - Azure IoT Edge | Microsoft Docs description: How to update IoT Edge devices to run the latest versions of the security daemon and the IoT Edge runtime keywords: -+ -+ Last updated 06/15/2021
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
Title: Write createOptions for modules - Azure IoT Edge | Microsoft Docs description: How to use createOptions in the deployment manifest to configure modules at runtime keywords: -+ -+ Last updated 04/01/2020
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
Title: Develop and debug modules in Visual Studio - Azure IoT Edge description: Use Visual Studio with Azure IoT Tools to develop a C or C# IoT Edge module and push it from your IoT Hub to an IoT device, as configured by a deployment manifest. -+ -+ Last updated 08/24/2021
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Title: Develop and debug modules for Azure IoT Edge | Microsoft Docs
description: Use Visual Studio Code to develop, build, and debug a module for Azure IoT Edge using C#, Python, Node.js, Java, or C keywords: -+ -+ Last updated 08/24/2021
iot-edge Iot Edge As Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-as-gateway.md
Title: Gateways for downstream devices - Azure IoT Edge | Microsoft Docs description: Use Azure IoT Edge to create a transparent, opaque, or proxy gateway device that sends data from multiple downstream devices to the cloud or processes it locally.-+ -+ Last updated 03/23/2021
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Title: What is Azure IoT Edge for Linux on Windows | Microsoft Docs description: Overview of you can run Linux IoT Edge modules on Windows 10 devices-+ # this is the PM responsible
Last updated 02/09/2022-+ # What is Azure IoT Edge for Linux on Windows
iot-edge Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-modules.md
Title: Learn how modules run logic on your devices - Azure IoT Edge | Microsoft Docs description: Azure IoT Edge modules are containerized units of logic that can be deployed and managed remotely so that you can run business logic on IoT Edge devices-+ -+ Last updated 03/21/2019
iot-edge Iot Edge Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-runtime.md
Title: Learn how the runtime manages devices - Azure IoT Edge | Microsoft Docs description: Learn how the IoT Edge runtime manages modules, security, communication, and reporting on your devices-+ -+ Last updated 11/10/2020
iot-edge Iot Edge Security Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-security-manager.md
Title: Azure IoT Edge security manager/module runtime - Azure IoT Edge
description: Manages the IoT Edge device security stance and the integrity of security services. keywords: security, secure element, enclave, TEE, IoT Edge--++ Last updated 09/17/2021
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-composition.md
Title: Deploy module & routes with deployment manifests - Azure IoT Edge description: Learn how a deployment manifest declares which modules to deploy, how to deploy them, and how to create message routes between them. -+ -+ Last updated 10/08/2020
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-development.md
Title: Develop modules for Azure IoT Edge | Microsoft Docs description: Develop custom modules for Azure IoT Edge that can communicate with the runtime and IoT Hub-+ -+ Last updated 09/03/2021
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-edgeagent-edgehub.md
Title: Properties of the agent and hub module twins - Azure IoT Edge description: Review the specific properties and their values for the edgeAgent and edgeHub module twins-+ -+ Last updated 04/16/2021
iot-edge Offline Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/offline-capabilities.md
Title: Operate devices offline - Azure IoT Edge | Microsoft Docs description: Understand how IoT Edge devices and modules can operate without an internet connection for extended periods of time, and how IoT Edge can enable regular IoT devices to operate offline too.--++ Last updated 11/22/2019
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Title: Prepare to deploy your solution in production - Azure IoT Edge description: Learn how to take your Azure IoT Edge solution from development to production, including setting up your devices with the appropriate certificates and making a deployment plan for future code updates. -+ -+ Last updated 03/01/2021
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
Title: Quickstart create an Azure IoT Edge device on Linux | Microsoft Docs description: In this quickstart, learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal.--++ Last updated 01/21/2022
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
Title: Quickstart to create an Azure IoT Edge device on Windows | Microsoft Docs description: In this quickstart, learn how to create an IoT Edge device and then deploy prebuilt code remotely from the Azure portal.-+ -+ Last updated 01/25/2022
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
Title: PowerShell functions for Azure IoT Edge for Linux on Windows | Microsoft Docs description: Reference information for Azure IoT Edge for Linux on Windows PowerShell functions to deploy, provision, and check the status of IoT Edge for Linux on Windows virtual machines.-+ Last updated 10/15/2021
iot-edge Reference Windows Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-windows-scripts.md
Title: Scripts for Azure IoT Edge with Windows containers | Microsoft Docs description: Reference information for IoT Edge PowerShell scripts to install, uninstall, or update on Windows devices-+ -+ Last updated 10/06/2020
iot-edge Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/security.md
Title: Security framework - Azure IoT Edge | Microsoft Docs description: Learn about the security, authentication, and authorization standards that were used to develop Azure IoT Edge and should be considered as you design your solution-+ -+ Last updated 08/30/2019
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Title: Supported operating systems, container engines - Azure IoT Edge description: Learn which operating systems can run the Azure IoT Edge daemon and runtime, and supported container engines for your production devices--++ Last updated 02/08/2022
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
Title: Common errors - Azure IoT Edge | Microsoft Docs description: Use this article to resolve common issues encountered when deploying an IoT Edge solution-+ -+ Last updated 02/28/2022
iot-edge Troubleshoot In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-in-portal.md
Title: Troubleshoot from the Azure portal - Azure IoT Edge | Microsoft Docs description: Use the troubleshooting page in the Azure portal to monitor IoT Edge devices and modules-+ -+ Last updated 05/26/2021
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
Title: Troubleshoot - Azure IoT Edge | Microsoft Docs description: Use this article to learn standard diagnostic skills for Azure IoT Edge, like retrieving component status and logs-+ -+ Last updated 05/04/2021
iot-edge Tutorial C Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module-windows.md
Title: Tutorial - Develop C modules for Windows by using Azure IoT Edge description: This tutorial shows you how to create IoT Edge modules with C code and deploy them to Windows devices that are running IoT Edge. -+ -+ Last updated 05/28/2019
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
Title: Tutorial develop C module for Linux - Azure IoT Edge | Microsoft Docs description: This tutorial shows you how to create an IoT Edge module with C code and deploy it to a Linux device running IoT Edge -+ -+ Last updated 07/30/2020
iot-edge Tutorial Csharp Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module-windows.md
Title: Tutorial - Develop C# modules for Windows by using Azure IoT Edge description: This tutorial shows you how to create IoT Edge modules with C# code and deploy them to Windows devices that are running IoT Edge. -+ -+ Last updated 08/03/2020
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Title: Tutorial - Develop C# module for Linux using Azure IoT Edge description: This tutorial shows you how to create an IoT Edge module with C# code and deploy it to a Linux IoT Edge device. -+ -+ Last updated 07/30/2020
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Title: Tutorial - Deploy Custom Vision classifier to a device using Azure IoT Edge description: In this tutorial, learn how to make a computer vision model run as a container using Custom Vision and IoT Edge. -+ -+ Last updated 07/30/2020
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
Title: 'Tutorial: Deploy Azure Functions as modules - Azure IoT Edge' description: In this tutorial, you develop an Azure Function as an IoT Edge module, then deploy it to an edge device.-+ -+ Last updated 07/29/2020
iot-edge Tutorial Deploy Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-stream-analytics.md
Title: 'Tutorial - Stream Analytics at the edge using Azure IoT Edge' description: 'In this tutorial, you deploy Azure Stream Analytics as a module to an IoT Edge device'--++ Last updated 05/03/2021
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Title: 'Tutorial - Develop module for Linux devices using Azure IoT Edge' description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Linux containers for Linux devices-+ -+ Last updated 07/30/2020
iot-edge Tutorial Develop For Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-windows.md
Title: 'Tutorial - Develop module for Windows devices using Azure IoT Edge' description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Windows containers for Windows devices-+ -+ Last updated 07/30/2020
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
Title: Tutorial - Custom Java module tutorial using Azure IoT Edge description: This tutorial shows you how to create an IoT Edge module with Java code and deploy it to an edge device. -+ -+ Last updated 07/30/2020
iot-edge Tutorial Machine Learning Edge 01 Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-01-intro.md
Title: 'Tutorial: Detailed walkthrough of Machine Learning on Azure IoT Edge' description: A high-level tutorial that walks through the various tasks necessary to create an end-to-end, machine learning at the edge scenario. -+ -+ Last updated 11/11/2019
iot-edge Tutorial Machine Learning Edge 02 Prepare Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-02-prepare-environment.md
Title: 'Tutorial: Set up environment - Machine Learning on Azure IoT Edge' description: 'Tutorial: Prepare your environment for development and deployment of modules for machine learning at the edge.'-+ -+ Last updated 3/12/2020
iot-edge Tutorial Machine Learning Edge 03 Generate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-03-generate-data.md
Title: 'Tutorial: Generate simulated device data - Machine Learning on Azure IoT Edge' description: 'Tutorial - Create virtual devices that generate simulated telemetry that can later be used to train a machine learning model.'-+ -+ Last updated 1/20/2020
iot-edge Tutorial Machine Learning Edge 04 Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-04-train-model.md
Title: 'Tutorial: Train and deploy a model - Machine Learning on Azure IoT Edge' description: In this tutorial, you'll train a machine learning model by using Azure Machine Learning and then package the model as a container image that can be deployed as an Azure IoT Edge module.-+ -+ Last updated 3/24/2020
iot-edge Tutorial Machine Learning Edge 05 Configure Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device.md
Title: 'Tutorial: Configure an Azure IoT Edge device - Machine learning on IoT Edge' description: In this tutorial, you'll configure an Azure virtual machine running Linux as an Azure IoT Edge device that acts as a transparent gateway.-+ -+ Last updated 2/5/2020
iot-edge Tutorial Machine Learning Edge 06 Custom Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-06-custom-modules.md
Title: 'Tutorial: Create and deploy custom modules - Machine Learning on Azure IoT Edge' description: 'This tutorial shows how to create and deploy IoT Edge modules that process data from leaf devices through a machine learning model and then send the insights to IoT Hub.'-+ -+ Last updated 6/30/2020
iot-edge Tutorial Machine Learning Edge 07 Send Data To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-07-send-data-to-hub.md
Title: 'Tutorial: Send device data via transparent gateway - Machine Learning on Azure IoT Edge' description: 'This tutorial shows how you can use your development machine as a simulated IoT Edge device to send data to the IoT Hub by going through a device configured as a transparent gateway.'-+ -+ Last updated 6/30/2020
iot-edge Tutorial Monitor With Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-monitor-with-workbooks.md
Title: Tutorial - Azure Monitor workbooks for IoT Edge description: Learn how to monitor IoT Edge modules and devices using Azure Monitor Workbooks for IoT-+ -+ Last updated 08/13/2021
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
Title: Tutorial - Create a hierarchy of IoT Edge devices - Azure IoT Edge description: This tutorial shows you how to create a hierarchical structure of IoT Edge devices using gateways.-+ -+ Last updated 2/26/2021
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-node-module.md
Title: Tutorial develop Node.js module for Linux - Azure IoT Edge | Microsoft Docs description: This tutorial shows you how to create an IoT Edge module with Node.js code and deploy it to an edge device -+ -+ Last updated 07/30/2020
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
Title: Tutorial - Custom Python module tutorial using Azure IoT Edge description: This tutorial shows you how to create an IoT Edge module with Python code and deploy it to an edge device. -+ -+ Last updated 08/04/2020
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
Title: Tutorial - Store data with SQL module using Azure IoT Edge description: This tutorial shows how to store data locally on your IoT Edge device with a SQL Server module -+ -+ Last updated 08/04/2020
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Title: IoT Edge version navigation and history - Azure IoT Edge description: Discover what's new in IoT Edge with information about new features and capabilities in the latest releases.--++ Last updated 04/07/2021
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
TLS 1.0 and 1.1 are considered legacy and are planned for deprecation. For more
## IoT Hub's server TLS certificate
-During a TLS handshake, IoT Hub presents RSA-keyed server certificates to connecting clients. Its root is the Baltimore Cybertrust Root CA. Recently, we rolled out a change to our TLS server certificate so that it is now issued by new intermediate certificate authorities (ICA). For more information, see [IoT Hub TLS certificate update](https://azure.microsoft.com/updates/iot-hub-tls-certificate-update/).
+During a TLS handshake, IoT Hub presents RSA-keyed server certificates to connecting clients. Its root is the Baltimore Cybertrust Root CA. Because the Baltimore root is at end-of-life, we'll be migrating to a new root called DigiCert Global G2. This change will affect all devices currently connecting to IoT Hub. For migration preparation steps and all other details, see [IoT TLS certificate update](https://aka.ms/iot-ca-updates).
### Elliptic Curve Cryptography (ECC) server TLS certificate (preview)
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
Title: Quickstart – Azure Key Vault Go client library – manage secrets
-description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the Go client library
+ Title: 'Quickstart: Manage secrets by using the Azure Key Vault Go client library'
+description: Learn how to create, retrieve, and delete secrets from an Azure key vault by using the Go client library.
Last updated 12/29/2021
ms.devlang: golang
-# Quickstart: Azure Key Vault secret client library for Go
+# Quickstart: Manage secrets by using the Azure Key Vault Go client library
-In this quickstart, you'll learn to use the Azure SDK for Go to create, retrieve, list, and delete secrets from Azure Key Vault.
+In this quickstart, you'll learn how to use the Azure SDK for Go to create, retrieve, list, and delete secrets from an Azure key vault.
- Azure Key Vault can store [several objects types](../general/about-keys-secrets-certificates.md#object-types). But, this quickstart focuses on secrets. By using Azure Key Vault to store secrets, you avoid storing secrets in your code, which increases the security of your applications.
+You can store a variety of [object types](../general/about-keys-secrets-certificates.md#object-types) in an Azure key vault. When you store secrets in a key vault, you avoid having to store them in your code, which helps improve the security of your applications.
-Get started with the [azsecrets](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets) package and learn how to manage Azure Key Vault secrets using Go.
+Get started with the [azsecrets](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets) package and learn how to manage your secrets in an Azure key vault by using Go.
## Prerequisites -- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- **Go installed**: Version 1.16 or [above](https://golang.org/dl/)-- [Azure CLI](/cli/azure/install-azure-cli)
+- An Azure subscription. If you don't already have a subscription, you can [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Go version 1.16 or later](https://golang.org/dl/), installed.
+- [The Azure CLI](/cli/azure/install-azure-cli), installed.
## Setup
-This quickstart uses the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) package to authenticate to Azure using Azure CLI. To learn more about different methods of authentication, see [Azure authentication with the Azure SDK for Go](/azure/developer/go/azure-sdk-authentication).
+For purposes of this quickstart, you use the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) package to authenticate to Azure by using the Azure CLI. To learn about the various authentication methods, see [Azure authentication with the Azure SDK for Go](/azure/developer/go/azure-sdk-authentication).
-### Sign into Azure
+### Sign in to the Azure portal
-1. Run the `az login` command:
+1. In the Azure CLI, run the following command:
```azurecli-interactive az login ```
- If the CLI can open your default browser, it will do so and load an Azure sign-in page.
+ If the Azure CLI can open your default browser, it does so and loads the Azure portal sign-in page.
- Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the
- authorization code displayed in your terminal.
+ If the page doesn't open automatically, go to [https://aka.ms/devicelogin](https://aka.ms/devicelogin), and then enter the authorization code that's displayed in your terminal.
-1. Sign in with your account credentials in the browser.
+1. Sign in to the Azure portal with your account credentials.
### Create a resource group and key vault instance
-1. Run the following Azure CLI commands:
+Run the following Azure CLI commands:
- ```azurecli
- az group create --name quickstart-rg --location eastus
- az keyvault create --name quickstart-kv --resource-group quickstart-rg
- ```
+```azurecli
+az group create --name quickstart-rg --location eastus
+az keyvault create --name quickstart-kv --resource-group quickstart-rg
+```
### Create a new Go module and install packages
-1. Run the following Go commands:
+Run the following Go commands:
- ```azurecli
- go mod init kvSecrets
- go get -u github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets
- go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
- ```
+```azurecli
+go mod init kvSecrets
+go get -u github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets
+go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+```
## Code examples
-This Code examples section shows how to create a client, set a secret, retrieve a secret, and delete a secret.
+In the following sections, you create a client, set a secret, retrieve a secret, and delete a secret.
### Authenticate and create a client
if err != nil {
} ```
-If you used a different Key Vault name, replace `quickstart-kv` with your vault's name.
+If you used a different key vault name, replace `quickstart-kv` with that name.
### Create a secret
if err != nil {
fmt.Printf("secretValue: %s\n", *getResp.Value) ```
-### Lists secrets
+### List secrets
```go pager := client.ListSecrets(nil)
if err != nil {
} ```
-## Sample Code
+## Sample code
-Create a file named `main.go` and copy the following code into the file:
+Create a file named *main.go*, and then paste the following code into it:
```go package main
func main() {
## Run the code
-Before you run the code, create an environment variable named `KEY_VAULT_NAME`. Set the environment variable's value to the name of the Azure Key Vault created previously.
+1. Before you run the code, create an environment variable named `KEY_VAULT_NAME`. Set the environment variable value to the name of the key vault that you created previously.
-```azurecli
-export KEY_VAULT_NAME=quickstart-kv
-```
+ ```azurecli
+ export KEY_VAULT_NAME=quickstart-kv
+ ```
-Run the following `go run` command to run the Go app:
+1. To start the Go app, run the following command:
-```azurecli
-go run main.go
-```
+ ```azurecli
+ go run main.go
+ ```
-```output
-secretValue: createdWithGO
-Secret ID: https://quickstart-kv.vault.azure.net/secrets/quickstart-secret
-Secret ID: https://quickstart-kv.vault.azure.net/secrets/secretName
-quickstart-secret has been deleted
-```
+ ```output
+ secretValue: createdWithGO
+ Secret ID: https://quickstart-kv.vault.azure.net/secrets/quickstart-secret
+ Secret ID: https://quickstart-kv.vault.azure.net/secrets/secretName
+ quickstart-secret has been deleted
+ ```
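The `KEY_VAULT_NAME` lookup that the app performs before running can be sketched as a minimal, self-contained Go program. This is illustrative only — the `vaultURL` helper and the fallback value are assumptions for this sketch, not part of the quickstart's actual sample code:

```go
package main

import (
	"fmt"
	"os"
)

// vaultURL builds the key vault endpoint from the vault's name.
func vaultURL(name string) string {
	return fmt.Sprintf("https://%s.vault.azure.net/", name)
}

func main() {
	// Read the name set by `export KEY_VAULT_NAME=quickstart-kv`.
	name := os.Getenv("KEY_VAULT_NAME")
	if name == "" {
		name = "quickstart-kv" // illustrative fallback only
	}
	fmt.Println(vaultURL(name))
}
```

The resulting URL is the value the app passes when constructing the `azsecrets` client.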
## Clean up resources
-Run the following command to delete the resource group and all its remaining resources:
+Delete the resource group and all its remaining resources by running the following command:
```azurecli az group delete --resource-group quickstart-rg
az group delete --resource-group quickstart-rg
## Next steps - [Overview of Azure Key Vault](../general/overview.md)-- [Azure Key Vault developer's guide](../general/developers-guide.md)
+- [Azure Key Vault developer's guide](../general/developers-guide.md)
- [Key Vault security overview](../general/security-features.md) - [Authenticate with Key Vault](../general/authentication.md)
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 08/18/2021 Last updated : 03/07/2022 # Edit host and app settings for logic apps in single-tenant Azure Logic Apps
-In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. While running locally, the workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
+In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
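In a local project, these app settings typically live in a *local.settings.json* file. The following fragment is a hypothetical sketch — the `MySqlConnectionString` key is a placeholder for illustration, not a required setting:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MySqlConnectionString": "<local-only connection string>"
  }
}
```

When deployed, equivalent values come from the logic app's configuration in Azure rather than from this file.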
Your logic app also has *host settings*, which specify the runtime configuration settings and values that apply to *all the workflows* in that logic app, for example, default values for throughput, capacity, data size, and so on, *whether they run locally or in Azure*.
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-|
-| `Runtime.Backend.FlowDefaultForeachItemsLimit` | `100000` <br>(100K array items) | For a *stateful workflow*, sets the maximum number of array items to process in a `For each` loop. |
+| `Runtime.Backend.FlowDefaultForeachItemsLimit` | `100000` array items | For a *stateful workflow*, sets the maximum number of array items to process in a `For each` loop. |
| `Runtime.Backend.Stateless.FlowDefaultForeachItemsLimit` | `100` items | For a *stateless workflow*, sets the maximum number of array items to process in a `For each` loop. | | `Runtime.Backend.ForeachDefaultDegreeOfParallelism` | `20` iterations | Sets the default number of concurrent iterations, or degree of parallelism, in a `For each` loop. To run sequentially, set the value to `1`. |
-| `Runtime.Backend.FlowDefaultSplitOnItemsLimit` | `100000` <br>(100K array items) | Sets the maximum number of array items to debatch or split into multiple workflow instances based on the `SplitOn` setting. |
+| `Runtime.Backend.FlowDefaultSplitOnItemsLimit` | `100000` array items | Sets the maximum number of array items to debatch or split into multiple workflow instances based on the `SplitOn` setting. |
|||| <a name="until-loop"></a>
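Host settings such as those in the table above are set in the logic app project's *host.json* file. As a hedged sketch (assuming the `extensions.workflow.settings` section used by single-tenant Azure Logic Apps — verify against your runtime version), forcing `For each` loops to run sequentially might look like:

```json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Backend.ForeachDefaultDegreeOfParallelism": "1"
      }
    }
  }
}
```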
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-|
-| `Runtime.Backend.DefaultAppendArrayItemsLimit` | `100000` <br>(100K array items) | Sets the maximum number of items in a variable with the Array type. |
+| `Runtime.Backend.DefaultAppendArrayItemsLimit` | `100000` array items | Sets the maximum number of items in a variable with the Array type. |
| `Runtime.Backend.VariableOperation.MaximumVariableSize` | Stateful workflow: `104857600` characters | Sets the maximum size in characters for the content that a variable can store when used in a stateful workflow. | | `Runtime.Backend.VariableOperation.MaximumStatelessVariableSize` | Stateless workflow: `1024` characters | Sets the maximum size in characters for the content that a variable can store when used in a stateless workflow. | ||||
-<a name="http-webhook"></a>
+<a name="recurrence-triggers"></a>
+
+### Recurrence-based triggers
+
+| Setting | Default value | Description |
+|||-|
+| `Microsoft.Azure.Workflows.ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `1` KB | Sets the trigger state's maximum allowed size for recurrence-based triggers such as the built-in SFTP trigger. The trigger state persists data across multiple service provider recurrence-based triggers. <br><br>**Important**: Based on your storage size, avoid setting this value too high, which can adversely affect storage and performance. |
+||||
+
+<a name="http-operations"></a>
### HTTP operations
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP webhook triggers and actions. |
-| `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wakeup interval for HTTP webhook trigger and action jobs. |
+| `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake up interval for HTTP webhook trigger and action jobs. |
|||| <a name="built-in-azure-functions"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Runtime.Backend.ApiConnectionOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for managed API connector triggers and actions. | | `Runtime.Backend.ApiWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 day) | Sets the maximum retry interval for managed API connector webhook triggers and actions. | | `Runtime.Backend.ApiConnectionOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for managed API connector triggers and actions. |
-| `Runtime.Backend.ApiWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 day) | Sets the default wakeup interval for managed API connector webhook trigger and action jobs. |
+| `Runtime.Backend.ApiWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 day) | Sets the default wake up interval for managed API connector webhook trigger and action jobs. |
|||| <a name="blob-storage"></a>
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 03/01/2022 Last updated : 03/07/2022 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| Name | Limit | Notes | | - | -- | -- | | Workflows per region per subscription | 1,000 workflows ||
-| Workflow - Maximum name length | 43 characters | Previously 80 characters |
+| Workflow - Maximum name length | - Consumption: 80 characters <br><br>- Standard: 43 characters ||
| Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. | | Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. | | Actions nesting depth | 8 actions | To extend this limit, you can use nested workflows as necessary. |
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
For steps on how to create a compute instance deployed in a virtual network, see
When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
-For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [invound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
+For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
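As a sketch of the route described above, the Azure CLI commands might look like the following. The resource names and the firewall's private IP address (`10.0.1.4`) are placeholders for illustration, not values from this article:

```azurecli
# Create a route table and a default route that sends all egress
# traffic (0.0.0.0/0) to the firewall's private IP address.
az network route-table create \
  --resource-group my-rg \
  --name ci-egress-rt

az network route-table route create \
  --resource-group my-rg \
  --route-table-name ci-egress-rt \
  --name to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the compute instance's subnet.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name compute-subnet \
  --route-table ci-egress-rt
```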
A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
marketplace Isv Csp Reseller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-csp-reseller.md
Cloning a private offer helps you create a new private offer quickly.
When you withdraw a private offer, your CSP partners immediately stop receiving a margin, and all future purchases will be at the list price. > [!IMPORTANT]
-> Private offers can only be withdrawn if no CSP partners has sold it to a customer.
+> Private offers can only be withdrawn if no CSP partner has sold it to a customer.
To withdraw a private offer:
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
- Previously updated : 10/15/2020 Last updated : 3/7/2022 # What is the Microsoft commercial marketplace?
The Microsoft commercial marketplace is a catalog of solutions from our independ
The commercial marketplace is available in more than 100 countries and regions, and we manage tax payment in many of them. If you sell to established Microsoft customers, they have the added benefit of including commercial marketplace purchases in their existing Microsoft purchase agreements to receive a consolidated invoice from Microsoft.
+The following video provides more information about transacting in the commercial marketplace.
+
+<br />
+<iframe src="https://docs.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=ae2b72e2-6591-407f-8740-50cc4860e8ee" width="1080" height="529"></iframe>
+ ## Why sell with Microsoft? Our goal is to help you accelerate your business in partnership with Microsoft, and to connect Microsoft customers with the best solutions that our partner ecosystem offers. To do that, we support you throughout your journey, from onboarding to publishing and growth. Take advantage of the capabilities in the commercial marketplace to grow your business.
Partners who list with the commercial marketplace are eligible for a diverse set
- Get the technical resources you need to get your application ready for launch, from technical support, application design, and architecture design, to Azure credits for development and testing. - Access free Microsoft Go-To-Market Launch Fundamentals to help you launch and promote your solution. You might also be eligible for Microsoft marketing campaigns and opportunities to be featured in the commercial marketplace.-- Reach more customers and expand your sales opportunities with the [Cloud Solution Provider](https://partner.microsoft.com/cloud-solution-provider) (CSP) program, the [co-sell](/partner-center/co-sell-overview?context=/azure/marketplace/context/context) program, and Microsoft Sales teams.
+- Reach more customers and expand your sales opportunities with the [Cloud Solution Provider](/partner-center/csp-overview) (CSP) program, the [co-sell](/partner-center/co-sell-overview) program, and Microsoft Sales teams.
To learn about these benefits in more detail, see [Your commercial marketplace benefits](gtm-your-marketplace-benefits.md).
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-application-offer.md
-+ Last updated 11/11/2021
-# Plan an Azure Application offer
+# Tutorial: Plan an Azure Application offer
-This article explains the different options and requirements for publishing an Azure Application offer to the commercial marketplace.
+This tutorial explains how to publish an Azure Application offer to the commercial marketplace, including different options and requirements available to you.
-## Before you begin
+## Prerequisites
Designing, building, and testing Azure application offers requires technical knowledge of both the Azure platform and the technologies used to build the offer. Your engineering team should have knowledge about the following Microsoft technologies:
There are two kinds of Azure application plans: _solution template_ and _managed
## Next steps - To plan a solution template, see [Plan a solution template for an Azure application offer](plan-azure-app-solution-template.md).-- To plan an Azure managed application, see [Plan an Azure managed application for an Azure application offer](plan-azure-app-managed-app.md).
+- To plan an Azure managed application, see [Plan an Azure managed application for an Azure application offer](plan-azure-app-managed-app.md).
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate. ## Update (February 2022)
+- General Availability: Migrate Windows and Linux Hyper-V virtual machines with large data disks (up to 32 TB in size).
- Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china). - Public preview of at-scale, software inventory, and agentless dependency analysis for Hyper-V virtual machines and bare metal servers or servers running on other clouds like AWS, GCP etc.
mysql Concepts Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-slow-query-logs.md
See the MySQL [slow query log documentation](https://dev.mysql.com/doc/refman/5.
Slow query logs are integrated with Azure Monitor diagnostic settings. Once you've enabled slow query logs on your MySQL flexible server, you can emit them to Azure Monitor logs, Event Hubs, or Azure Storage. To learn more about diagnostic settings, see the [diagnostic logs documentation](../../azure-monitor/essentials/platform-logs-overview.md). To learn more about how to enable diagnostic settings in the Azure portal, see the [slow query log portal article](tutorial-query-performance-insights.md#set-up-diagnostics). >[!Note]
->Premium Storage accounts are not supported if you sending the logs to Azure storage via diagnostics and settings
+>Premium Storage accounts are not supported if you are sending the logs to Azure storage via diagnostics and settings.
The following table describes the output of the slow query log. Depending on the output method, the fields included and the order in which they appear may vary.
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
## Next steps - Learn more about [audit logs](concepts-audit-logs.md) - [Query performance insights](tutorial-query-performance-insights.md)
-<!-
+<!-
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
This article takes you through the different management tasks that are currently
## Before you begin
-In this scenario, you call the Network Watcher Rest API to run IP Flow Verify. ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
+In this scenario, you call the Network Watcher REST API to run IP Flow Verify. ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
Security group view returns configured and effective network security rules that
## Before you begin
-In this scenario, you call the Network Watcher Rest API to get the security group view for a virtual machine. ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
+In this scenario, you call the Network Watcher REST API to get the security group view for a virtual machine. ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher. The scenario also assumes that a Resource Group with a valid virtual machine exists to be used.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
az postgres flexible-server parameter set --resource-group <your resource group>
```
-After extensions are allow-listed, these must be installed in your database before you can use them. To install a particular extension, you should run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
+`shared_preload_libraries` is a server configuration parameter that determines which libraries are loaded when PostgreSQL starts. Any library that uses shared memory must be loaded via this parameter. If your extension needs to be added to the shared preload libraries, you can do so as follows:
+
+Using the [Azure portal](https://portal.azure.com):
+
+ 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 2. On the sidebar, select **Server Parameters**.
+ 3. Search for the `shared_preload_libraries` parameter.
+ 4. Select extensions you wish to add.
+ :::image type="content" source="./media/concepts-extensions/shared-libraries.png" alt-text="Screenshot showing the shared preload libraries parameter setting for extension installation in Azure Database for PostgreSQL.":::
+
+
+Using [Azure CLI](https://docs.microsoft.com/cli/azure/):
+
 You can set `shared_preload_libraries` via the CLI [parameter set command](https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+
+ ```bash
+az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name shared_preload_libraries --value <extension name>,<extension name>
+ ```
++
+After extensions are allow-listed and loaded, they must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
++ Azure Database for PostgreSQL supports a subset of key extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document are not supported on Azure Database for PostgreSQL - Flexible Server. You cannot create or load your own extension in Azure Database for PostgreSQL.
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[isn](https://www.postgresql.org/docs/13/isn.html) | 1.2 | data types for international product numbering standards| > |[lo](https://www.postgresql.org/docs/13/lo.html) | 1.1 | large object maintenance | > |[ltree](https://www.postgresql.org/docs/13/ltree.html) | 1.2 | data type for hierarchical tree-like structures|
+ > |[orafce](https://github.com/orafce/orafce) | 3.1.8 |implements in Postgres some of the functions from the Oracle database that are missing|
> |[pageinspect](https://www.postgresql.org/docs/13/pageinspect.html) | 1.8 | inspect the contents of database pages at a low level| > |[pg_buffercache](https://www.postgresql.org/docs/13/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
-> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.3 | Job scheduler for PostgreSQL|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.4 | Job scheduler for PostgreSQL|
> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[postgis_topology](https://postgis.net/docs/Topology.html) | 3.1.1 | PostGIS topology spatial types and functions| > |[postgres_fdw](https://www.postgresql.org/docs/13/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers| > |[sslinfo](https://www.postgresql.org/docs/13/sslinfo.html) | 1.2 | information about SSL certificates|
+> |[timescaledb](https://github.com/timescale/timescaledb) | 2.5.1 | Open-source relational database for time-series and analytics|
> |[tsm_system_rows](https://www.postgresql.org/docs/13/tsm-system-rows.html) | 1.0 | TABLESAMPLE method which accepts number of rows as a limit| > |[tsm_system_time](https://www.postgresql.org/docs/13/tsm-system-time.html) | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit| > |[unaccent](https://www.postgresql.org/docs/13/unaccent.html) | 1.1 | text search dictionary that removes accents|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[isn](https://www.postgresql.org/docs/12/isn.html) | 1.2 | data types for international product numbering standards| > |[lo](https://www.postgresql.org/docs/12/lo.html) | 1.1 | large object maintenance | > |[ltree](https://www.postgresql.org/docs/12/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.1.8 |implements in Postgres some of the functions from the Oracle database that are missing|
> |[pageinspect](https://www.postgresql.org/docs/12/pageinspect.html) | 1.7 | inspect the contents of database pages at a low level| > |[pg_buffercache](https://www.postgresql.org/docs/12/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
-> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.3 | Job scheduler for PostgreSQL|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.4 | Job scheduler for PostgreSQL|
> |[pg_freespacemap](https://www.postgresql.org/docs/12/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/12/pgprewarm.html) | 1.2 | prewarm relation data|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[postgis_topology](https://postgis.net/docs/Topology.html) | 3.0.0 | PostGIS topology spatial types and functions| > |[postgres_fdw](https://www.postgresql.org/docs/12/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers| > |[sslinfo](https://www.postgresql.org/docs/12/sslinfo.html) | 1.2 | information about SSL certificates|
+> |[timescaledb](https://github.com/timescale/timescaledb) | 2.5.1 | Open-source relational database for time-series and analytics|
> |[tsm_system_rows](https://www.postgresql.org/docs/12/tsm-system-rows.html) | 1.0 | TABLESAMPLE method which accepts number of rows as a limit| > |[tsm_system_time](https://www.postgresql.org/docs/12/tsm-system-time.html) | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit| > |[unaccent](https://www.postgresql.org/docs/12/unaccent.html) | 1.1 | text search dictionary that removes accents|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[isn](https://www.postgresql.org/docs/11/isn.html) | 1.2 | data types for international product numbering standards| > |[lo](https://www.postgresql.org/docs/11/lo.html) | 1.1 | large object maintenance | > |[ltree](https://www.postgresql.org/docs/11/ltree.html) | 1.1 | data type for hierarchical tree-like structures|
+> |[orafce](https://github.com/orafce/orafce) | 3.1.8 |implements in Postgres some of the functions from the Oracle database that are missing|
> |[pageinspect](https://www.postgresql.org/docs/11/pageinspect.html) | 1.7 | inspect the contents of database pages at a low level| > |[pg_buffercache](https://www.postgresql.org/docs/11/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
-> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.3 | Job scheduler for PostgreSQL|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.4 | Job scheduler for PostgreSQL|
> |[pg_freespacemap](https://www.postgresql.org/docs/11/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID | > |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[postgres_fdw](https://www.postgresql.org/docs/11/postgres-fdw.html) | 1.0 | foreign-data wrapper for remote PostgreSQL servers| > |[sslinfo](https://www.postgresql.org/docs/11/sslinfo.html) | 1.2 | information about SSL certificates| > |[tablefunc](https://www.postgresql.org/docs/11/tablefunc.html) | 1.0 | functions that manipulate whole tables, including crosstab|
+> |[timescaledb](https://github.com/timescale/timescaledb) | 1.7.4 | Open-source relational database for time-series and analytics|
> |[tsm_system_rows](https://www.postgresql.org/docs/11/tsm-system-rows.html) | 1.0 | TABLESAMPLE method which accepts number of rows as a limit| > |[tsm_system_time](https://www.postgresql.org/docs/11/tsm-system-time.html) | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit| > |[unaccent](https://www.postgresql.org/docs/11/unaccent.html) | 1.1 | text search dictionary that removes accents|
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/abap-functions-deployment-guide.md
Previously updated : 12/20/2021 Last updated : 03/05/2022 # SAP ABAP function module deployment guide
-When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Azure Purview invokes this function module to extract the metadata from your SAP system during scan.
-When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Azure Purview invokes this function module to extract the metadata from your SAP system during scan.
+When you scan [SAP ECC](register-scan-sapecc-source.md), [SAP S/4HANA](register-scan-saps4hana-source.md), or [SAP BW](register-scan-sap-bw.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Azure Purview invokes this function module to extract the metadata from your SAP system during scans.
This document details the steps required to deploy this module.
This document details the steps required to deploy this module.
## Prerequisites
-Download the SAP ABAP function module source code from Azure Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md), you can find a download link on top as follows.
+Download the SAP ABAP function module source code from Azure Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md), [SAP S/4HANA](register-scan-saps4hana-source.md), or [SAP BW](register-scan-sap-bw.md), you can find a download link at the top of the page, as follows. You can also see the link when you create or edit a scan.
:::image type="content" source="media/abap-functions-deployment-guide/download-abap-code.png" alt-text="Download ABAP function module source code from Azure Purview Studio" border="true":::
Download the SAP ABAP function module source code from Azure Purview Studio. Aft
This step is optional, and an existing package can be used.
-1. Log in to the SAP S/4HANA or SAP ECC server and open **Object Navigator** (SE80 transaction).
+1. Log in to the SAP server and open **Object Navigator** (SE80 transaction).
2. Select option **Package** from the list and enter a name for the new package (for example, Z\_MITI) then press button **Display**.
When all the previous steps are completed, follow the below steps to test the fu
4. Put the name of the area of interest into P\_AREA field if a file with metadata must be downloaded or updated. When the function finishes working, the folder which has been indicated in P\_LOCAL\_PATH parameter must contain several files with metadata inside. The names of files mimic areas which can be specified in P\_AREA field.
-The function will finish its execution and metadata will be downloaded much faster in case of launching it on the machine which has high-speed network connection with SAP S/4HANA or ECC server.
+The function will finish its execution and metadata will be downloaded much faster if you launch it on a machine that has a high-speed network connection to the SAP server.
## Next steps - [Register and scan SAP ECC source](register-scan-sapecc-source.md) - [Register and scan SAP S/4HANA source](register-scan-saps4hana-source.md)
+- [Register and scan SAP Business Warehouse (BW) source](register-scan-sap-bw.md)
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#scan) | No | || [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| No | [Yes*](register-scan-oracle-source.md#lineage) | No| || [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No |
+|| [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No |
|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | || [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | || [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No|
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
Previously updated : 3/02/2022 Last updated : 3/07/2022
*Data use governance* (DUG) is an option in the data source registration in Azure Purview. Its purpose is to make those data sources available in the policy authoring experience of Azure Purview Studio. In other words, access policies can only be written on data sources that have been previously registered and with DUG toggle set to enable. ## Prerequisites-
-To enable the *Data use Governance* (DUG) toggle for a data source, resource group, or subscription, the same user needs to have both certain IAM privileges on the resource and certain Azure Purview privileges.
-
-1) User needs to have **either one of the following** IAM role combinations on the resource:
-- IAM *Owner*-- Both IAM *Contributor* + IAM *User Access Administrator*-
-Follow this [guide to configure Azure RBAC role permissions](../role-based-access-control/check-access.md).
-
-2) In addition, the same user needs to have Azure Purview Data source administrator role at the root collection level. See the guide on [managing Azure Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
-
->[!IMPORTANT]
-> - Currently, policy operations are only supported at **root collection level** and not child collection level.
## Enable Data use governance
Once you have your resource registered, follow the rest of the steps to enable a
1. Set the *Data use governance* toggle to **Enabled**, as shown in the image below.
- :::image type="content" source="./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Set Data use governance toggle to **Enabled** at the bottom of the menu.":::
-
-> [!WARNING]
-> **Known issues**
-> - Moving data sources to a different resource group or subscription is not yet supported. If want to do that, de-register the data source in Azure Purview before moving it and then register it again after that happens.
- ## Disable Data use governance
->[!Note]
->If your resource is currently a part of any active access policy, you will not be able to disable data use governance. First [un-publish the policy from the resource](how-to-data-owner-policy-authoring-generic.md#update-or-delete-a-policy), then disable data use governance.
- To disable data use governance for a source, resource group, or subscription, a user needs to either be a resource IAM **Owner** or an Azure Purview **Data source admin**. Once you have those permissions follow these steps: 1. Go to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
To disable data use governance for a source, resource group, or subscription, a
1. Set the **Data use governance** toggle to **Disabled**.
->[!NOTE]
-> Disabling **Data use governance** for a subscription source will disable it also for all assets registered in that subscription.
-
-> [!WARNING]
-> **Known issues**
-> - Once a subscription gets disabled for *Data use governance* any underlying assets that are enabled for *Data use governance* will be disabled, which is the right behavior. However, policy statements based on those assets will still be allowed after that
-
-## Data use governance best practices
--- We highly encourage registering data sources for *Data use governance* and managing all associated access policies in a single Azure Purview account.-- Should you have multiple Azure Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data use governance* in a single Azure Purview account. That Azure Purview account can be in any subscription in the tenant. The *Data use governance* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
- - **Case 1** shows a valid configuration where a Storage account is registered in an Azure Purview account in the same subscription.
- - **Case 2** shows a valid configuration where a Storage account is registered in an Azure Purview account in a different subscription.
- - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Azure Purview accounts. In that case, the *Data use governance* toggle will only work in the Azure Purview account that wins and registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
-
- :::image type="content" source="./media/access-policies-common/valid-and-invalid-configurations.png" alt-text="Diagram shows valid and invalid configurations when using multiple Azure Purview accounts to manage policies.":::
## Next steps
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 01/27/2022 Last updated : 03/05/2022 # Create and manage a self-hosted integration runtime
Here are the domains and outbound ports that you need to allow at both **corpora
| Domain names | Outbound ports | Description | | -- | -- | - |
-| `*.servicebus.windows.net` | 443 | Required for interactive authoring, for example, test connection on Azure Purview Studio. Currently wildcard is required as there is no dedicated resource. |
| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Azure Purview service. Currently wildcard is required as there is no dedicated resource. |
+| `*.servicebus.windows.net` | 443 | Required for setting up scan on Azure Purview Studio. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there is no dedicated resource. |
| `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Blob storage account. | | `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Queue storage account. | | `<managed_Event_Hub_resource>.servicebus.windows.net` | 443 | Azure Purview uses this to connect with the associated service bus. It's covered by allowing the above domain. If you use private endpoint, you need to test access to this single domain.|
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
Previously updated : 01/17/2022 Last updated : 02/25/2022
When setting up scan, you can choose to scan an entire Hive metastore database,
* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download and install the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server).
+* Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 is not supported.
> [!Note] > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
Use the following steps to scan Hive Metastore databases to automatically identi
1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver location on your machine where the self-hosted integration runtime is running. This should be a valid path to the folder for JAR files.
- If you're scanning Azure Databricks, refer to the information on Azure Databricks in the next step.
- > [!Note] > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
+ >
+ > If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar). Version 3.0.3 is not supported.
1. **Metastore JDBC Driver Class**: Provide the class name for the connection driver. For example, enter **\com.microsoft.sqlserver.jdbc.SQLServerDriver**.
Use the following steps to scan Hive Metastore databases to automatically identi
:::image type="content" source="media/register-scan-hive-metastore-source/databricks-jdbc-connection.png" alt-text="Screenshot that shows an example connection U R L property." border="true"::: > [!NOTE]
- > When you copy the URL from *hive-site.xml*, remove `amp;` from the string or the scan will fail. Then append the path to your SSL certificate to the URL. This will be the path to the SSL certificate's location on your machine. [Download the SSL certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
+ > When you copy the URL from *hive-site.xml*, remove `amp;` from the string or the scan will fail.
>
- > When you enter local file system paths in the Azure Purview Studio scan configuration, remember to change the Windows path separator character from a backslash (`\`) to a forward slash (`/`). For example, if your MariaDB JAR file is *C:\mariadb-jdbc.jar*, change it to *C:/mariadb-jdbc.jar*. Make the same change to the Metastore JDBC URL `sslCA` parameter. For example, if it's placed at local file system path *D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem*, change it to *D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem*.
+ > [Download the SSL certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to the self-hosted integration runtime machine, then update the path to the SSL certificate's location on your machine in the URL.
+ >
+ > When you enter local file paths in the scan configuration, change the Windows path separator character from a backslash (`\`) to a forward slash (`/`). For example, if you place the SSL certificate at local file path *D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem*, change the `serverSslCert` parameter value to *D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem*.
The **Metastore JDBC URL** value will look like this example:-
- `jdbc:mariadb://consolidated-westus2-prod-metastore-addl-1.mysql.database.azure.com:3306/organization1829255636414785?trustServerCertificate=true&useSSL=true&sslCA=D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem`
+
+ `jdbc:mariadb://consolidated-westus2-prod-metastore-addl-1.mysql.database.azure.com:3306/organizationXXXXXXXXXXXXXXXX?useSSL=true&enabledSslProtocolSuites=TLSv1,TLSv1.1,TLSv1.2&serverSslCert=D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem`
1. **Metastore database name**: Provide the name of the Hive Metastore database.
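The backslash-to-forward-slash conversion described in the note above is mechanical; the following is a minimal sketch with a hypothetical helper name:

```python
def to_scan_config_path(windows_path):
    """Convert a Windows path to the forward-slash form the scan configuration expects."""
    return windows_path.replace("\\", "/")

# Example: the SSL certificate path used in the JDBC URL's certificate parameter.
cert_path = to_scan_config_path(r"D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem")
print(cert_path)  # D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem
```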
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
+
+ Title: Connect to and manage an SAP Business Warehouse
+description: This guide describes how to connect to SAP Business Warehouse in Azure Purview, and use Azure Purview's features to scan and manage your SAP BW source.
+++++ Last updated : 03/05/2022+++
+# Connect to and manage SAP Business Warehouse in Azure Purview (Preview)
+
+This article outlines how to register SAP Business Warehouse (BW), and how to authenticate and interact with SAP BW in Azure Purview. For more information about Azure Purview, read the [introductory article](overview.md).
++
+## Supported capabilities
+
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
+||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| No|
+
+The supported SAP BW versions are 7.3 to 7.5. SAP BW/4HANA is not supported.
+
+When scanning an SAP BW source, Azure Purview supports extracting technical metadata, including:
+
+- Instance
+- InfoArea
+- InfoSet
+- InfoSet query
+- Classic InfoSet
+- InfoObject including unit of measurement, time characteristic, navigation attribute, data packet characteristic, currency, characteristic, field, and key figure
+- Data store object (DSO)
+- Aggregation level
+- Open hub destination
+- Query including the query condition
+- Query view
+- HybridProvider
+- MultiProvider
+- InfoCube
+- Aggregate
+- Dimension
+- Time dimension
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* An active [Azure Purview resource](create-catalog-portal.md).
+
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
+
+* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.15.8079.1.
+
+* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+
+* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+
+* The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on the machine where the self-hosted integration runtime is installed. Make sure that you use the correct JCo distribution for your environment, and that the **sapjco3.jar** and **sapjco3.dll** files are available.
+
+ > [!Note]
+ > The driver should be accessible to all accounts in the machine. Don't put it in a path under a user account.
+
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+
+ * STFC_CONNECTION (check connectivity)
+ * RFC_SYSTEM_INFO (check system information)
+ * OCS_GET_INSTALLED_COMPS (check software versions)
+ * Z_MITI_BW_DOWNLOAD (main metadata import)
+
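The JCo prerequisite above can be sanity-checked before you run a scan. The following is a minimal sketch; the helper name is illustrative and not part of Azure Purview:

```python
from pathlib import Path

def missing_jco_files(jco_dir):
    """Return which of the required SAP JCo files are absent from jco_dir."""
    required = ("sapjco3.jar", "sapjco3.dll")
    folder = Path(jco_dir)
    return [name for name in required if not (folder / name).is_file()]
```

Run it against the JCo library path you plan to supply in the scan setup; an empty list means both files are present.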
+## Register
+
+This section describes how to register SAP BW in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
+
+### Authentication for registration
+
+The only supported authentication for SAP BW source is **Basic authentication**.
+
+### Steps to register
+
+1. Navigate to your Azure Purview account.
+1. Select **Data Map** on the left navigation.
+1. Select **Register**.
+1. In **Register sources**, select **SAP BW** > **Continue**.
+
+On the **Register sources (SAP BW)** screen, do the following:
+
+1. Enter a **Name** under which the data source will be listed in the Catalog.
+
+1. Enter the **Application server** name to connect to the SAP BW source. It can also be an IP address of the SAP application server host.
+
+1. Enter the SAP **System number**. It's an integer between 0 and 99.
+
+1. Select a collection or create a new one (Optional).
+
+1. Select **Register** to finish registering the data source.
+
+ :::image type="content" source="media/register-scan-sap-bw/register-sap-bw.png" alt-text="Screenshot of registering an SAP BW source." border="true":::
+
+## Scan
+
+Follow the steps below to scan SAP BW to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+
+### Create and run scan
+
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+
+1. Navigate to **Sources**
+
+1. Select the registered SAP BW source.
+
+1. Select **+ New scan**
+
+1. Provide the following details:
+
+ 1. **Name**: The name of the scan
+
+ 1. **Connect via integration runtime**: Select the configured self-hosted integration runtime.
+
+ 1. **Credential**: Select the credential to connect to your data source. Make sure to:
+
+ * Select Basic Authentication while creating a credential.
+ * Provide a user ID to connect to SAP server in the User name input field.
+ * Store the user password used to connect to SAP server in the secret key.
+
+ 1. **Client ID**: Enter the SAP Client ID. It's a three-digit number from 000 to 999.
+
+ 1. **JCo library path**: The directory path where the JCo libraries are located.
+
+ 1. **Maximum memory available:** Maximum memory (in GB) available on the Self-hosted Integration Runtime machine to be used by scanning processes. This depends on the size of the SAP BW source to be scanned.
+
+ :::image type="content" source="media/register-scan-sap-bw/scan-sap-bw.png" alt-text="Screenshot of setting up an SAP BW scan." border="true":::
+
+1. Select **Test connection**.
+
+1. Select **Continue**.
+
+1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
+
+1. Review your scan and select **Save and Run**.
++
+## Next steps
+
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
+
+- [Search Data Catalog](how-to-search-catalog.md)
+- [Data insights in Azure Purview](concept-insights.md)
+- [Supported data sources and file types](azure-purview-connector-overview.md)
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
Enable the resource group or the subscription for access policies in Azure Purvi
[!INCLUDE [Access policies generic registration](./includes/access-policies-registration-generic.md)]
+More here on [registering a data source for Data use governance](./how-to-enable-data-use-governance.md)
+ ## Create and publish a data owner policy Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
Previously updated : 2/2/2022 Last updated : 03/07/2022
Enable the data source for access policies in Azure Purview by setting the **Dat
[!INCLUDE [Access policies generic registration](./includes/access-policies-registration-generic.md)]
+More here on [registering a data source for Data use governance](./how-to-enable-data-use-governance.md)
## Create and publish a data owner policy Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
public void DisconnectRuntimeFromRemoteSession()
ARRSessionService.CurrentActiveSession.ConnectionStatusChanged -= OnLocalRuntimeStatusChanged; CurrentCoordinatorState = RemoteRenderingState.RemoteSessionReady; }-
-/// <summary>
-/// The session must have its runtime pump updated.
-/// The Connection.Update() will push messages to the server, receive messages, and update the frame-buffer with the remotely rendered content.
-/// </summary>
-private void LateUpdate()
-{
- ARRSessionService?.CurrentActiveSession?.Connection?.Update();
-}
``` > [!NOTE]
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/deletedWorkspaces/read | Lists workspaces in soft deleted period. | > | Microsoft.OperationalInsights/linkTargets/read | Lists workspaces in soft deleted period. | > | microsoft.operationalinsights/locations/operationStatuses/read | Get Log Analytics Azure Async Operation Status. |
-> | microsoft.operationalinsights/operations/read | Lists all of the available OperationalInsights Rest API operations. |
+> | microsoft.operationalinsights/operations/read | Lists all of the available OperationalInsights REST API operations. |
> | microsoft.operationalinsights/querypacks/write | Create or Update Query Packs. | > | microsoft.operationalinsights/querypacks/read | Get Query Packs. | > | microsoft.operationalinsights/querypacks/delete | Delete Query Packs. |
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Although most customers use just one service, service redundancy might be necess
A second service is not required for high availability. High availability for queries is achieved when you use 2 or more replicas in the same service. Replica updates are sequential, which means at least one is operational when a service update is rolled out. For more information about uptime, see [Service Level Agreements](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
+## Add more services to a subscription
+
+Cognitive Search restricts the [number of resources](search-limits-quotas-capacity.md#subscription-limits) you can initially create in a subscription. If you exhaust your maximum limit, file a new support request to add more search services.
+
+1. Sign in to the Azure portal, and find your search service.
+1. On the left-navigation pane, scroll down and select **New Support Request.**
+1. For **issue type**, choose **Service and subscription limits (quotas)**.
+1. Select the subscription that needs more quota.
+1. Under **Quota type**, select **Search**. Then select **Next**.
+1. In the **Problem details** section, select **Enter details**.
+1. Follow the prompts to select location and tier.
+1. Add the new limit you would like on the subscription. The value must not be empty and must be between 0 and 100.
+    For example, if the maximum number of S2 services is 8 and you would like to have 12 services, request to add 4 more S2 services.
+1. When you're finished, select **Save and continue** to continue creating your support request.
+1. Complete the rest of the additional information requested, and then select **Next**.
+1. On the **review + create** screen, review the details that you'll send to support, and then select **Create**.
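The quota arithmetic in the example above is simply the desired total minus the current limit; a minimal sketch (illustrative helper, not an Azure API):

```python
def additional_quota_needed(current_limit, desired_total):
    """How many more services to request in the support ticket."""
    return max(0, desired_total - current_limit)

print(additional_quota_needed(8, 12))  # 4
```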
+ ## Next steps After provisioning a service, you can continue in the portal to create your first index.
security Log Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md
The following table lists the most important types of logs available in Azure:
| Log category | Log type | Usage | Integration | | | -- | | -- |
-|[Activity logs](../../azure-monitor/essentials/platform-logs-overview.md)|Control-plane events on Azure Resource Manager resources| Provides insight into the operations that were performed on resources in your subscription.| Rest API, [Azure Monitor](../../azure-monitor/essentials/platform-logs-overview.md)|
+|[Activity logs](../../azure-monitor/essentials/platform-logs-overview.md)|Control-plane events on Azure Resource Manager resources| Provides insight into the operations that were performed on resources in your subscription.| REST API, [Azure Monitor](../../azure-monitor/essentials/platform-logs-overview.md)|
|[Azure Resource logs](../../azure-monitor/essentials/platform-logs-overview.md)|Frequent data about the operation of Azure Resource Manager resources in subscription| Provides insight into operations that your resource itself performed.| Azure Monitor| |[Azure Active Directory reporting](../../active-directory/reports-monitoring/overview-reports.md)|Logs and reports | Reports user sign-in activities and system activity information about users and group management.|[Graph API](../../active-directory/develop/microsoft-graph-intro.md)| |[Virtual machines and cloud services](../../azure-monitor/vm/monitor-virtual-machine.md)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using Windows Azure Diagnostics [[WAD](../../azure-monitor/agents/diagnostics-extension-overview.md)] storage) and Linux in Azure Monitor|
security Operational Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-security.md
Auditing your network security is vital for detecting network vulnerabilities an
Network Watcher currently has the following capabilities: -- **<a href="/azure/network-watcher/network-watcher-monitoring-overview">Audit Logs</a>**- Operations performed as part of the configuration of networks are logged. These logs can be viewed in the Azure portal or retrieved using Microsoft tools such as Power BI or third-party tools. Audit logs are available through the portal, PowerShell, CLI, and Rest API. For more information on Audit logs, see Audit operations with Resource Manager. Audit logs are available for operations done on all network resources.
+- **<a href="/azure/network-watcher/network-watcher-monitoring-overview">Audit Logs</a>**- Operations performed as part of the configuration of networks are logged. These logs can be viewed in the Azure portal or retrieved using Microsoft tools such as Power BI or third-party tools. Audit logs are available through the portal, PowerShell, CLI, and REST API. For more information on Audit logs, see Audit operations with Resource Manager. Audit logs are available for operations done on all network resources.
- **<a href="/azure/network-watcher/network-watcher-ip-flow-verify-overview">IP flow verifies </a>** - Checks if a packet is allowed or denied based on flow information 5-tuple packet parameters (Destination IP, Source IP, Destination Port, Source Port, and Protocol). If the packet is denied by a Network Security Group, the rule and Network Security Group that denied the packet is returned.
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
If you're billed at Pay-As-You-Go rate, the following table shows how Microsoft
#### [Free data meters](#tab/free-data-meters)
-The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services.
+The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services. For more information, see [Viewing Data Allocation Benefits](../azure-monitor/logs/manage-cost-storage.md#viewing-data-allocation-benefits).
Cost description | Service name | Meter | |--|--|--|
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Before connecting your Microsoft Sentinel workspace to an external source contro
Microsoft Sentinel currently supports connections only with GitHub and Azure DevOps repositories. -- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace. The **Owner** role is required to create the connection between Microsoft Sentinel and your source control repository.
+- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace. The **Owner** role is required to create the connection between Microsoft Sentinel and your source control repository. If you are using Azure Lighthouse in your environment, you can instead have the combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection.
### Maximum connections and deployments
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
To learn more about Service Bus messaging, see the following articles:
* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
-* [Rest API](/rest/api/servicebus/)
+* [REST API](/rest/api/servicebus/)
[1]: ./media/service-bus-geo-dr/geodr_setup_pairing.png [2]: ./media/service-bus-geo-dr/geo2.png
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
+
+ Title: Integrate Azure App Configuration with Service Connector
+description: Integrate Azure App Configuration into your application with Service Connector
++++ Last updated : 03/02/2022++
+# Integrate Azure App Configuration with Service Connector
+
+This page shows the supported authentication types and client types of Azure App Configuration using Service Connector. You might still be able to connect to App Configuration in other programming languages without using Service Connector. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Spring Cloud
+
+## Supported authentication types and client types
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+## Default environment variable names or application properties
+
+### .NET, Java, Node.JS, Python
+
+#### Secret / connection string
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://{AppConfigurationName}.azconfig.io;Id={ID};Secret={secret}` |
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+|--||-|
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://{AppConfigurationName}.azconfig.io` |
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+|--|-|--|
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://{AppConfigurationName}.azconfig.io` |
+| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `UserAssignedMiClientId` |
+
+#### Service principal
+
+| Default environment variable name | Description | Sample value |
+|-|-|-|
+| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://{AppConfigurationName}.azconfig.io` |
+| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `{yourClientID}` |
+| AZURE_APPCONFIGURATION_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
+| AZURE_APPCONFIGURATION_TENANTID | Your tenant ID | `{yourTenantID}` |
+
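The connection string shown in the tables above follows a simple `key=value;` layout. The following is a minimal parsing sketch; the helper name and fallback value are illustrative, not part of any SDK:

```python
import os

def parse_connection_string(conn_str):
    """Split 'Endpoint=...;Id=...;Secret=...' into a dict.

    Only the first '=' in each segment separates key from value, so
    base64 secrets containing '=' are preserved intact.
    """
    parts = {}
    for segment in conn_str.split(";"):
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Placeholder fallback for demonstration only.
conn = os.environ.get(
    "AZURE_APPCONFIGURATION_CONNECTIONSTRING",
    "Endpoint=https://myconfig.azconfig.io;Id=someid;Secret=abc=",
)
print(parse_connection_string(conn)["Endpoint"])
```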
+## Next steps
+
+Follow the tutorial listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
+
+ Title: Integrate Azure Event Hubs with Service Connector
+description: Integrate Azure Event Hubs into your application with Service Connector
++++ Last updated : 02/21/2022++
+# Integrate Azure Event Hubs with Service Connector
+
+This page shows the supported authentication types and client types of Azure Event Hubs using Service Connector. You might still be able to connect to Event Hubs in other programming languages without using Service Connector. This page also shows default environment variable names and values or Spring Boot configuration you get when you create service connections. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Spring Cloud
+
+## Supported authentication types and client types
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+| | :-: | :--:| :--:| :--:|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+## Default environment variable names or application properties
+
+### .NET, Java, Node.JS, Python
+
+#### Secret / connection string
+
+> [!div class="mx-tdBreakAll"]
+> |Default environment variable name | Description | Sample value |
+> | -- | -- | |
+> | AZURE_EVENTHUB_CONNECTIONSTRING | Event Hubs connection string | `Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey={****}` |
+
+#### System-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| -- | -- | -- |
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+
+#### User-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| -- | -- | -- |
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| AZURE_EVENTHUB_CLIENTID | Your client ID | `{yourClientID}` |
+
+#### Service principal
+
+| Default environment variable name | Description | Sample value |
+| | -- | -- |
+| AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| AZURE_EVENTHUB_CLIENTID | Your client ID | `{yourClientID}` |
+| AZURE_EVENTHUB_CLIENTSECRET | Your client secret | `{yourClientSecret}` |
+| AZURE_EVENTHUB_TENANTID | Your tenant ID | `{yourTenantID}` |
+
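For the secret/connection-string case above, the fully qualified namespace can also be recovered from the connection string itself; a minimal sketch with a hypothetical helper:

```python
import re

def namespace_from_connection_string(conn_str):
    """Extract '{EventHubNamespace}.servicebus.windows.net' from an Event Hubs
    connection string, or None if no sb:// endpoint is present."""
    match = re.search(r"Endpoint=sb://([^/;]+)", conn_str)
    return match.group(1) if match else None
```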
+### Java - Spring Boot
+
+#### Spring Boot secret/connection string
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--| -- | |
+> | spring.cloud.azure.eventhub.connection-string | Event Hubs connection string | `Endpoint=sb://servicelinkertesteventhub.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=****` |
+
+#### Spring Boot system-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| - | -- | -- |
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+
+#### Spring Boot user-assigned managed identity
+
+| Default environment variable name | Description | Sample value |
+| - | -- | -- |
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+
+#### Spring Boot service principal
+
+| Default environment variable name | Description | Sample value |
+| - | -- | -- |
+| spring.cloud.azure.eventhub.namespace | Event Hubs namespace | `{EventHubNamespace}.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `{yourClientID}` |
+| spring.cloud.azure.client-secret | Your client secret | `******` |
+| spring.cloud.azure.tenant-id | Your tenant ID | `{yourTenantID}` |
+
+## Next steps
+
+Follow the tutorial listed below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
# What is Service Connector?
-The Service Connector service helps you connect Azure compute service to other backing services easily. This service configures the network settings and connection information (for example, generating environment variables) between compute service and target backing service in management plane. Developers just use preferred SDK or library that consumes the connection information to do data plane operations against target backing service.
+The Service Connector service helps you connect Azure compute service to other backing services easily. This service configures the network settings and connection information (for example, generating environment variables) between compute service and target backing service in management plane. Developers just use preferred SDK or library that consumes the connection information to do data plane operations against target backing service.
This article provides an overview of the Service Connector service.
Once a service connection is created. Developers can validate and check connecti
**Target Service:**
-* Azure Database for PostgreSQL
-* Azure Database for MySQL
+* Azure App Configuration
+* Azure Cache for Redis (Basic, Standard and Premium and Enterprise tiers)
* Azure Cosmos DB (SQL, MongoDB, Gremlin, Cassandra, Table)
-* Azure Storage (Blob, Queue, File and Table storage)
+* Azure Database for MySQL
+* Azure Database for PostgreSQL
+* Azure Event Hubs
* Azure Key Vault
+* Azure Service Bus
* Azure SignalR Service
-* Azure Cache for Redis (Basic, Standard and Premium and Enterprise tiers)
+* Azure Storage (Blob, Queue, File and Table storage)
* Apache Kafka on Confluent Cloud ## How to use Service Connector?
service-fabric Quickstart Managed Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-managed-cluster-portal.md
+
+ Title: Deploy a Service Fabric managed cluster using the Azure portal
+description: Learn how to create a Service Fabric managed cluster using the Azure portal
+++++ Last updated : 03/02/2022++
+# Quickstart: Deploy a Service Fabric managed cluster using the Azure portal
+
+Test out Service Fabric managed clusters in this quickstart by creating a **three-node Basic SKU cluster**.
+
+Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. A Service Fabric cluster is a network-connected set of virtual machines onto which your microservices are deployed and managed.
+
+Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model. Managed clusters streamline your deployment and cluster management experience. Service Fabric managed clusters are fully-encapsulated resources that save you the effort of manually deploying all the underlying resources that make up a Service Fabric cluster.
+
+In this quickstart, you learn how to:
+
+* Use Azure Key Vault to create a client certificate for your managed cluster
+* Deploy a Service Fabric managed cluster
+* View your managed cluster in Service Fabric Explorer
+
+This article describes how to deploy a Service Fabric managed cluster for testing in Azure using the **Azure portal**. There is also a quickstart for [Azure Resource Manager templates](quickstart-managed-cluster-template.md).
+
+The three-node Basic SKU cluster created in this quickstart is only intended for instructional purposes. The cluster will use a self-signed certificate for authentication and will operate in the bronze reliability tier, so it's not suitable for production workloads. For more information on SKUs, see [Service Fabric managed cluster SKUs](overview-managed-cluster.md#service-fabric-managed-cluster-skus). For more information about reliability tiers, see [Reliability characteristics of the cluster](service-fabric-cluster-capacity.md#reliability-characteristics-of-the-cluster).
+
+## Prerequisites
+
+* An Azure subscription. If you don't already have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* A resource group to manage all the resources you use in this quickstart. We use the example resource group name **ServiceFabricResources** throughout this quickstart.
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+
+ 1. Select **Resource groups** under **Azure services**.
+
+ 1. Choose **+ Create**, select your Azure subscription, enter a name for your resource group, and pick your preferred region from the dropdown menu.
+
+ 1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+## Create a client certificate
+
+Service Fabric managed clusters use a client certificate as a key for access control.
+
+In this quickstart, we use a client certificate called **ExampleCertificate** from an Azure Key Vault named **QuickstartSFKeyVault**.
+
+To create your own Azure Key Vault:
+
+1. In the [Azure portal](https://portal.azure.com), select **Key vaults** under **Azure services** and select **+ Create**. Alternatively, select **Create a resource**, enter **Key Vault** in the `Search services and marketplace` box, choose **Key Vault** from the results, and select **Create**.
+
+1. On the **Create a key vault** page, provide the following information:
+ - `Subscription`: Choose your Azure subscription.
+ - `Resource group`: Choose the resource group you created in the prerequisites or create a new one if you didn't already. For this quickstart, we use **ServiceFabricResources**.
+ - `Name`: Enter a unique name. For this quickstart, we use **QuickstartSFKeyVault**.
+ - `Region`: Choose your preferred region from the dropdown menu.
+ - Leave the other options as their defaults.
+
+1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+To generate and retrieve your client certificate:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Key Vault.
+
+1. Under **Settings** in the pane on the left, select **Certificates**.
+
+ ![Screenshot of Certificates tab under Settings in the left pane, PNG.](./media/quickstart-managed-cluster-portal/key-vault-settings-certificates.png)
+
+1. Choose **+ Generate/Import**.
+
+1. On the **Create a certificate** page, provide the following information:
+ - `Method of Certificate Creation`: Choose **Generate**.
+ - `Certificate Name`: Use a unique name. For this quickstart, we use **ExampleCertificate**.
+ - `Type of Certificate Authority (CA)`: Choose **Self-signed certificate**.
+ - `Subject`: Use a unique domain name. For this quickstart, we use **CN=ExampleDomain**.
+ - Leave the other options as their defaults.
+
+1. Select **Create**.
+
+1. Your certificate will appear under **In progress, failed or canceled**. You may need to refresh the list for it to appear under **Completed**. Once it's completed, select it and choose the version under **CURRENT VERSION**.
+
+1. Select **Download in PFX/PEM format** and select **Download**. The certificate's name will be formatted as `yourkeyvaultname-yourcertificatename-yyyymmdd.pfx`.
+
+ ![Screenshot of Download in PFX/PEM format button used to retrieve your certificate so you can import it into your computer's certificate store, PNG.](./media/quickstart-managed-cluster-portal/download-pfx.png)
+
+1. Import the certificate to your computer's certificate store so that you may use it to access your Service Fabric managed cluster later.
+
+ >[!NOTE]
+ >The private key included in this certificate doesn't have a password. If your certificate store prompts you for a private key password, leave the field blank.
+
+Before you create your Service Fabric managed cluster, you need to make sure Azure Virtual Machines can retrieve certificates from your Azure Key Vault. To do so:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Key Vault.
+
+1. Under **Settings** in the pane on the left, select **Access policies**.
+
+ ![Screenshot of Access policies tab under Settings in the left pane, PNG.](./media/quickstart-managed-cluster-portal/key-vault-settings-access-policies.png)
+
+1. Toggle **Azure Virtual Machines for deployment** under **Enable access to:**.
+
+1. Save your changes.
+
+## Create your Service Fabric managed cluster
+
+In this quickstart, we use a Service Fabric managed cluster named **quickstartsfcluster**.
+
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**, enter **Service Fabric** in the `Search services and marketplace` box, choose **Service Fabric Managed Cluster** from the results, and select **Create**.
+
+1. On the **Create a Service Fabric managed cluster** page, provide the following information:
+ - `Subscription`: Choose your Azure subscription.
+ - `Resource group`: Choose the resource group you created in the prerequisites or create a new one if you didn't already. For this quickstart, we use **ServiceFabricResources**.
+ - `Name`: Enter a unique name. For this quickstart, we use **quickstartsfcluster**.
+ - `Region`: Choose your preferred region from the dropdown menu. This must be the same region as your Azure Key Vault.
+ - `SKU`: Toggle **Basic** for your SKU option.
+ - `Username`: Enter a username for your managed cluster's administrator account.
+ - `Password`: Enter a password for your managed cluster's administrator account.
+ - `Confirm password`: Reenter the password you chose.
+ - `Key vault and primary certificate`: Choose **Select a certificate**, pictured below. Select your Azure Key Vault from the **Key vault** dropdown menu and your certificate from the **Certificate** dropdown menu, pictured below.
+ - Leave the other options as their defaults.
+
+ ![Screenshot of Select a certificate button in the Authentication method section of the settings, PNG.](./media/quickstart-managed-cluster-portal/create-a-service-fabric-managed-cluster-authentication-method.png)
+
+ ![Screenshot of Azure Key Vault and certificate dropdown menus, PNG.](./media/quickstart-managed-cluster-portal/select-a-certificate-from-azure-key-vault.png)
+
+ If you didn't already change your Azure Key Vault's access policies, you may get text prompting you to do so after you select your key vault and certificate. If so, choose **Edit access policies for yourkeyvaultname**, select **Click to show advanced access policies**, toggle **Azure Virtual Machines for deployment**, and save your changes. Select **Create a Service Fabric managed cluster** to return to the creation page.
+
+1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+Now, your managed cluster's deployment is in progress. The deployment will likely take around 20 minutes to complete.
+
+## Validate the deployment
+
+Once the deployment completes, you're ready to view your new Service Fabric managed cluster.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your managed cluster.
+
+1. On your managed cluster's **Overview** page, find the **SF Explorer** link and select it.
+
+ ![Screenshot of SF Explorer link on your managed cluster's Overview page, PNG.](./media/quickstart-managed-cluster-portal/service-fabric-explorer-address.png)
+
+ >[!NOTE]
+ >You may get a warning that your connection to your cluster isn't private. Select **Advanced** and choose **continue to yourmanagedclusterfqdn (unsafe)**.
+
+1. When prompted for a certificate, choose the certificate you created, downloaded, and stored for this quickstart and select **OK**. If you completed those steps successfully, the certificate should be in the list of certificates.
+
+1. You'll arrive at the Service Fabric Explorer display for your cluster, pictured below.
+
+ ![Screenshot of your managed cluster's page in the Service Fabric Explorer, PNG.](./media/quickstart-managed-cluster-portal/service-fabric-explorer.png)
+
+Your Service Fabric managed cluster consists of three nodes. These nodes are Windows Server 2019 Datacenter virtual machines with 2 vCPUs, 8 GiB of RAM, and four 256-GiB disks. These features are determined by the **Basic SKU** option and the default values in the **Primary node type** settings on the **Create a Service Fabric managed cluster** page.
+
+## Clean up resources
+
+When no longer needed, delete the resource group for your Service Fabric managed cluster. To delete your resource group:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your resource group.
+
+1. Select **Delete resource group**.
+
+1. In the `TYPE THE RESOURCE GROUP NAME:` box, type the name of your resource group and select **Delete**.
+
+## Next steps
+
+In this quickstart, you deployed a managed Service Fabric cluster. To learn more about how to scale a cluster, see:
+
+> [!div class="nextstepaction"]
+> [Scale out a Service Fabric managed cluster](tutorial-managed-cluster-scale.md)
service-fabric Service Fabric Diagnostics Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-events.md
Here are some examples of scenarios that you should see events for in your clust
There are a few different ways through which Service Fabric events can be accessed: * The events are logged through standard channels such as ETW/Windows Event logs and can be visualized by any monitoring tool that supports these channels, such as Azure Monitor logs. By default, clusters created in the portal have diagnostics turned on and have the Azure Diagnostics agent sending the events to Azure table storage, but you still need to integrate this with your log analytics resource. Read more about configuring the [Azure Diagnostics agent](service-fabric-diagnostics-event-aggregation-wad.md) to modify the diagnostics configuration of your cluster to pick up more logs or performance counters and the [Azure Monitor logs integration](service-fabric-diagnostics-event-analysis-oms.md)
-* EventStore service's Rest APIs that allow you to query the cluster directly, or through the Service Fabric Client Library. See [Query EventStore APIs for cluster events](service-fabric-diagnostics-eventstore-query.md).
+* EventStore service's REST APIs that allow you to query the cluster directly, or through the Service Fabric Client Library. See [Query EventStore APIs for cluster events](service-fabric-diagnostics-eventstore-query.md).
## Next steps * More information on monitoring your cluster - [Monitoring the cluster and platform](service-fabric-diagnostics-event-generation-infra.md).
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Yes, you can create a Capacity Reservation for your VM SKU in the disaster recov
### Why should I reserve capacity using Capacity Reservation at the destination location?
-While Site Recovery team makes a best effort to ensure that capacity is available, we do not guarantee the same. Site Recovery's best effort is backed by a 2-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-ca.pacity-reservations-docs)
+While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee it. Site Recovery's best effort is backed by a 2-hour RTO SLA. If you require further assurance and _guaranteed compute capacity_, we recommend that you purchase [Capacity Reservations](https://aka.ms/on-demand-capacity-reservations-docs).
### Does Site Recovery work with reserved instances?
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
A spike in data change rate might come from an occasional data burst. If the dat
1. Go to **Disks** of the affected replicated machine and copy the replica disk name. 1. Go to this replica of the managed disk. 1. You might see a banner in **Overview** that says an SAS URL has been generated. Select this banner and cancel the export. Ignore this step if you don't see the banner.
- 1. As soon as the SAS URL is revoked, go to **Configuration** for the managed disk. Increase the size so that Site Recovery supports the observed churn rate on the source disk.
+ 1. As soon as the SAS URL is revoked, go to **Size + Performance** for the managed disk. Increase the size so that Site Recovery supports the observed churn rate on the source disk.
## Network connectivity problems
site-recovery Hyper V Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-common-questions.md
No, VMs must be located on a Hyper-V host server that's running on a supported W
Yes. Site Recovery converts from generation 2 to generation 1 during failover. At failback the machine is converted back to generation 2. ### Can I automate Site Recovery scenarios with an SDK?
-Yes. You can automate Site Recovery workflows using the Rest API, PowerShell, or the Azure SDK. Currently supported scenarios for replicating Hyper-V to Azure using PowerShell:
+Yes. You can automate Site Recovery workflows using the REST API, PowerShell, or the Azure SDK. Currently supported scenarios for replicating Hyper-V to Azure using PowerShell:
- [Replicate Hyper-V without VMM using PowerShell](hyper-v-azure-powershell-resource-manager.md) - [Replicating Hyper-V with VMM using PowerShell](hyper-v-vmm-powershell-resource-manager.md)
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
This public preview covers a complete overhaul of the current architecture for p
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup.
-**Azure VM disaster recovery** | Support added for cross-continental disaster recovery of Azure VMs.<br/><br/> Rest API support for protection of VMSS Flex.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
+**Azure VM disaster recovery** | Support added for cross-continental disaster recovery of Azure VMs.<br/><br/> REST API support for protection of VMSS Flex.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
**VMware VM/physical disaster recovery to Azure** | Added support for using Ubuntu-20.04 while setting up master target server.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
site-recovery Unregister Vmm Server Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/unregister-vmm-server-script.md
try
} catch {
- Write-Host "Error occured" -ForegroundColor "Red"
+ Write-Host "Error occurred" -ForegroundColor "Red"
$error[0] return }
try
catch { $transaction.Rollback()
- Write-Host "Error occured" -ForegroundColor "Red"
+ Write-Host "Error occurred" -ForegroundColor "Red"
$error[0] Write-Error "FAILED" "All updates to the VMM database have been rolled back."
try
catch {
- Write-Error "Error occured"
+ Write-Error "Error occurred"
$error[0] Write-Error "FAILED" }
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
When you fail back from Azure, data from Azure is copied back to your on-premise
### Can I set up replication with scripting?
-Yes. You can automate Site Recovery workflows by using the Rest API, PowerShell, or the Azure SDK. [Learn more](vmware-azure-disaster-recovery-powershell.md).
+Yes. You can automate Site Recovery workflows by using the REST API, PowerShell, or the Azure SDK. [Learn more](vmware-azure-disaster-recovery-powershell.md).
## Performance and capacity
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
During a push installation of the Mobility service, the following steps are perf
Setting | Details |
-Syntax | `UnifiedAgent.exe /Role \<Agent/MasterTarget> /InstallLocation \<Install Location> /Platform "VmWare" /Silent`
+Syntax | `UnifiedAgent.exe /Role \<MS/MT> /InstallLocation \<Install Location> /Platform "VmWare" /Silent`
Setup logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentInstaller.log`
-`/Role` | Mandatory installation parameter. Specifies whether the Mobility service (Agent) or master target (MasterTarget) should be installed. Note: in prior versions, the correct switches were Mobility Service (MS) or master target (MT)
+`/Role` | Mandatory installation parameter. Specifies whether the Mobility service (MS) or the master target (MT) should be installed.
`/InstallLocation`| Optional parameter. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
Setting | Details |
-Syntax | `./install -d \<Install Location> -r \<Agent/MasterTarget> -v VmWare -q`
-`-r` | Mandatory installation parameter. Specifies whether the Mobility service (Agent) or master target (MasterTarget) should be installed.
+Syntax | `./install -d \<Install Location> -r \<MS/MT> -v VmWare -q`
+`-r` | Mandatory installation parameter. Specifies whether the Mobility service (MS) or the master target (MT) should be installed.
`-d` | Optional parameter. Specifies the Mobility service installation location: `/usr/local/ASR`. `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. `-q` | Optional. Specifies whether to run the installer in silent mode.
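The push-install invocation in the table above can be assembled programmatically, for example when scripting installs across many machines. This is a minimal sketch: the flag names (`-d`, `-r`, `-v`, `-q`) come from the parameter table, while the install directory and role values shown are illustrative defaults, not requirements.

```python
# Build the Linux Mobility service installer command line described above.
# Flag names come from the parameter table; the directory and role values
# here are examples only.
def build_install_command(install_dir="/usr/local/ASR", role="MS", silent=True):
    """Return the argv list for the ./install push-installation."""
    cmd = ["./install", "-d", install_dir, "-r", role, "-v", "VmWare"]
    if silent:
        cmd.append("-q")  # -q runs the installer in silent mode
    return cmd

print(" ".join(build_install_command()))
# → ./install -d /usr/local/ASR -r MS -v VmWare -q
```

Pass `role="MT"` to produce the master target invocation instead.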
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Bl
> > To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
-## Authorization
+## Authentication and authorization
-- Local users are the only form of identity management that is currently supported for the SFTP endpoint.
+- _Local users_ is the only form of identity management that is currently supported for the SFTP endpoint.
-- Azure Active Directory (Azure AD), shared access signature (SAS) and account key authorization are not supported for the SFTP endpoint.
+- Azure Active Directory (Azure AD) is not supported for the SFTP endpoint.
- POSIX-like access control lists (ACLs) are not supported for the SFTP endpoint. > [!NOTE] > After your data is ingested into Azure Storage, you can use the full breadth of Azure Storage security settings. While authorization mechanisms such as role-based access control (RBAC) and access control lists aren't supported as a means to authorize a connecting SFTP client, they can be used to authorize access via Azure tools (such as the Azure portal, Azure CLI, Azure PowerShell commands, and AzCopy) as well as Azure SDKs and Azure REST APIs. -- Account level operations such as listing, putting/getting, creating/deleting containers are not supported.
+- Account and container level operations are not supported for the SFTP endpoint.
## Networking
This article describes limitations and known issues of SFTP support for Azure Bl
## Security -- Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys will rotate up to once per month.--- There are a few different reasons for the "remote host identification has changed" warning:-
- - The remote host key was updated (host keys are periodically rotated).
-
- - The client selected a different host key algorithm than the one stored in the local ssh "known_hosts" file. OpenSSH will use an already trusted key if the host (account.blob.core.windows.net) matches, even when the algorithm doesn't necessarily match.
-
- - The storage account failed over to a different region.
-
- - The remote host (account.blob.core.windows.net) is being faked.
+- Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys may rotate frequently.
## Integrations -- Change feed is not supported.--- Account metrics such as transactions and capacity are available. Filter logs by operations to see SFTP activity.
+- Change feed and Event Grid notifications are not supported.
- Network File System (NFS) 3.0 and SFTP can't be enabled on the same storage account.
This article describes limitations and known issues of SFTP support for Azure Bl
- There's a 4 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically. -- Maximum file size upload is limited by client message size. A few examples below: -
- - 32KB message (OpenSSH default) * 50k blocks = 1.52GB
-
- - 100KB message (OpenSSH Windows max) * 50k blocks = 4.77GB
-
- - 256KB message (OpenSSH Linux max) * 50k blocks = 12.20GB
+- Maximum file upload size is 90 GB.
## Other
+- Special containers such as $logs, $blobchangefeed, $root, $web are not accessible via the SFTP endpoint.
+ - Symbolic links are not supported. - `ssh-keyscan` is not supported.
This article describes limitations and known issues of SFTP support for Azure Bl
- The user has been assigned appropriate permissions to the container.
- - The container name is specified in the connection string if you have not configured (set home directory) and provisioned (create the directory inside the container) a home directory for the user.
+ - The container name is specified in the connection string for local users that don't have a home directory.
+
+ - The container name is specified in the connection string for local users that have a home directory that doesn't exist.
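As the bullets above note, the container name must appear in the connection string when the local user has no usable home directory. A small sketch of how that username is composed; the `account.container.user` form follows the SFTP connection convention for Azure Blob Storage, and the account, container, and user names here are placeholders:

```python
from typing import Optional

def sftp_username(account: str, user: str, container: Optional[str] = None) -> str:
    """Compose the SFTP username: '<account>.<container>.<user>' when a
    container must be specified, otherwise '<account>.<user>'."""
    parts = [account] + ([container] if container else []) + [user]
    return ".".join(parts)

# Hypothetical names; the host is always <account>.blob.core.windows.net.
host = "mystorageaccount.blob.core.windows.net"
print(f"sftp {sftp_username('mystorageaccount', 'myuser', 'mycontainer')}@{host}")
```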
## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)-- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
+- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Storage Blob Change Feed How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed-how-to.md
description: Learn how to process change feed logs in a .NET client application
Previously updated : 10/01/2021 Last updated : 03/03/2022
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
description: Learn about change feed logs in Azure Blob Storage and how to use t
Previously updated : 10/01/2021 Last updated : 03/07/2022
The purpose of the change feed is to provide transaction logs of all the changes
## How the change feed works
-The change feed is stored as [blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) in a special container in your storage account at standard [blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) cost. You can control the retention period of these files based on your requirements (See the [conditions](#conditions) of the current release). Change events are appended to the change feed as records in the [Apache Avro](https://avro.apache.org/docs/1.8.2/spec.html) format specification: a compact, fast, binary format that provides rich data structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and Azure Data Factory.
+Change feed records are stored as [blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) in a special container in your storage account at standard [blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) cost. You can control the retention period of these files based on your requirements (See the [conditions](#conditions) of the current release). Change events are appended to the change feed as records in the [Apache Avro](https://avro.apache.org/docs/1.8.2/spec.html) format specification: a compact, fast, binary format that provides rich data structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and Azure Data Factory.
You can process these logs asynchronously, incrementally or in-full. Any number of client applications can independently read the change feed, in parallel, and at their own pace. Analytics applications such as [Apache Drill](https://drill.apache.org/docs/querying-avro-files/) or [Apache Spark](https://spark.apache.org/docs/latest/sql-data-sources-avro.html) can consume logs directly as Avro files, which let you process them at a low-cost, with high-bandwidth, and without having to write a custom application.
You must enable the change feed on your storage account to begin capturing and r
Here's a few things to keep in mind when you enable the change feed. -- There's only one change feed for the blob service in each storage account and is stored in the **$blobchangefeed** container.
+- There's only one change feed for the blob service in each storage account. Change feed records are stored in the **$blobchangefeed** container.
- Create, Update, and Delete changes are captured only at the blob service level.
The change feed produces several metadata and log files. These files are located
> [!NOTE] > In the current release, the $blobchangefeed container is visible only in Azure portal but not visible in Azure Storage Explorer. You currently cannot see the $blobchangefeed container when you call ListContainers API but you are able to call the ListBlobs API directly on the container to see the blobs
-Your client applications can consume the change feed by using the blob change feed processor library that is provided with the Change feed processor SDK.
+Your client applications can consume the change feed by using the blob change feed processor library that is provided with the change feed processor SDK.
See [Process change feed logs in Azure Blob Storage](storage-blob-change-feed-how-to.md).
-## Understand change feed organization
- <a id="segment-index"></a>
-### Segments
+## Change feed segments
The change feed is a log of changes that are organized into **hourly** *segments* but appended to and updated every few minutes. These segments are created only when there are blob change events that occur in that hour. This enables your client application to consume changes that occur within specific ranges of time without having to search through the entire log. To learn more, see the [Specifications](#specifications).
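The hourly layout above means a client can jump straight to the segment for a given time range. A short sketch of how an event time maps to its segment manifest path; the `$blobchangefeed/idx/segments/<year>/<month>/<day>/<hhmm>/meta.json` layout is an assumption based on the segment examples in this article:

```python
from datetime import datetime, timezone

def segment_manifest_path(event_time: datetime) -> str:
    """Map a UTC event time to the meta.json path of its hourly segment.
    The path layout is assumed from the segment examples in this article."""
    t = event_time.astimezone(timezone.utc)
    return (f"$blobchangefeed/idx/segments/"
            f"{t.year}/{t.month:02d}/{t.day:02d}/{t.hour:02d}00/meta.json")

print(segment_manifest_path(datetime(2022, 2, 17, 13, 5, tzinfo=timezone.utc)))
# → $blobchangefeed/idx/segments/2022/02/17/1300/meta.json
```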
The segment manifest file (`meta.json`) shows the path of the change feed files
<a id="log-files"></a>
-### Change event records
+## Change event records
The change feed files contain a series of change event records. Each change event record corresponds to one change to an individual blob. The records are serialized and written to the file using the [Apache Avro](https://avro.apache.org/docs/1.8.2/spec.html) format specification. The records can be read by using the Avro file format specification. There are several libraries available to process files in that format. Change feed files are stored in the `$blobchangefeed/log/` virtual directory as [append blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-append-blobs). The first change feed file under each path will have `00000` in the file name (For example `00000.avro`). The name of each subsequent log file added to that path will increment by 1 (For example: `00001.avro`).
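The zero-padded naming described above (`00000.avro`, `00001.avro`, ...) can be sketched trivially:

```python
def log_blob_name(index: int) -> str:
    """Change feed log blobs use a five-digit, zero-padded index that
    starts at 00000 and increments by 1 for each new file in a path."""
    return f"{index:05d}.avro"

print([log_blob_name(i) for i in range(3)])
# → ['00000.avro', '00001.avro', '00002.avro']
```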
-The following event types are captured in the change feed records:
+### Event record schemas
+
+For a description of each property, see [Azure Event Grid event schema for Blob Storage](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#event-properties). The BlobPropertiesUpdated and BlobSnapshotCreated events are currently exclusive to change feed and not yet supported for Blob Storage Events.
+
+> [!NOTE]
+> The change feed files for a segment don't immediately appear after a segment is created. The length of delay is within the normal publishing latency of the change feed, which is within a few minutes of the change.
+
+#### Schema version 1
+
+The following event types may be captured in the change feed records with schema version 1:
+ - BlobCreated - BlobDeleted - BlobPropertiesUpdated - BlobSnapshotCreated
-Here's an example of change event record from change feed file converted to Json.
+The following example shows a change event record in JSON format that uses event schema version 1:
```json {
- "schemaVersion": 1,
- "topic": "/subscriptions/dd40261b-437d-43d0-86cf-ef222b78fd15/resourceGroups/sadodd/providers/Microsoft.Storage/storageAccounts/mytestaccount",
- "subject": "/blobServices/default/containers/mytestcontainer/blobs/mytestblob",
- "eventType": "BlobCreated",
- "eventTime": "2019-02-22T18:12:01.079Z",
- "id": "55e5531f-8006-0000-00da-ca3467000000",
- "data": {
- "api": "PutBlob",
- "clientRequestId": "edf598f4-e501-4750-a3ba-9752bb22df39",
- "requestId": "00000000-0000-0000-0000-000000000000",
- "etag": "0x8D698F13DCB47F6",
- "contentType": "application/octet-stream",
- "contentLength": 128,
- "blobType": "BlockBlob",
- "url": "",
- "sequencer": "000000000000000100000000000000060000000000006d8a",
- "storageDiagnostics": {
- "bid": "11cda41c-13d8-49c9-b7b6-bc55c41b3e75",
- "seq": "(6,5614,28042,28038)",
- "sid": "591651bd-8eb3-c864-1001-fcd187be3efd"
- }
- }
+ "schemaVersion": 1,
+ "topic": "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
+ "subject": "/blobServices/default/containers/<container>/blobs/<blob>",
+ "eventType": "BlobCreated",
+ "eventTime": "2022-02-17T12:59:41.4003102Z",
+ "id": "322343e3-8020-0000-00fe-233467066726",
+ "data": {
+ "api": "PutBlob",
+ "clientRequestId": "f0270546-168e-4398-8fa8-107a1ac214d2",
+ "requestId": "322343e3-8020-0000-00fe-233467000000",
+ "etag": "0x8D9F2155CBF7928",
+ "contentType": "application/octet-stream",
+ "contentLength": 128,
+ "blobType": "BlockBlob",
+ "url": "https://www.myurl.com",
+ "sequencer": "00000000000000010000000000000002000000000000001d",
+ "storageDiagnostics": {
+ "bid": "9d725a00-8006-0000-00fe-233467000000",
+ "seq": "(2,18446744073709551615,29,29)",
+ "sid": "4cc94e71-f6be-75bf-e7b2-f9ac41458e5a"
+ }
+ }
} ```
-For a description of each property, see [Azure Event Grid event schema for Blob Storage](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#event-properties). The BlobPropertiesUpdated and BlobSnapshotCreated events are currently exclusive to change feed and not yet supported for Blob Storage Events.
+#### Schema version 3
-> [!NOTE]
-> The change feed files for a segment don't immediately appear after a segment is created. The length of delay is within the normal interval of publishing latency of the change feed which is within a few minutes of the change.
+The following event types may be captured in the change feed records with schema version 3:
+
+- BlobCreated
+- BlobDeleted
+- BlobPropertiesUpdated
+- BlobSnapshotCreated
+
+The following example shows a change event record in JSON format that uses event schema version 3:
+
+```json
+{
+ "schemaVersion": 3,
+ "topic": "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
+ "subject": "/blobServices/default/containers/<container>/blobs/<blob>",
+ "eventType": "BlobCreated",
+ "eventTime": "2022-02-17T13:05:19.6798242Z",
+ "id": "eefe8fc8-8020-0000-00fe-23346706daaa",
+ "data": {
+ "api": "PutBlob",
+ "clientRequestId": "00c0b6b7-bb67-4748-a3dc-86464863d267",
+ "requestId": "eefe8fc8-8020-0000-00fe-233467000000",
+ "etag": "0x8D9F216266170DC",
+ "contentType": "application/octet-stream",
+ "contentLength": 128,
+ "blobType": "BlockBlob",
+ "url": "https://www.myurl.com",
+ "sequencer": "00000000000000010000000000000002000000000000001d",
+ "previousInfo": {
+ "SoftDeleteSnapshot": "2022-02-17T13:08:42.4825913Z",
+ "WasBlobSoftDeleted": true,
+ "BlobVersion": "2024-02-17T16:11:52.0781797Z",
+ "LastVersion" : "2022-02-17T16:11:52.0781797Z",
+ "PreviousTier": "Hot"
+ },
+ "snapshot": "2022-02-17T16:09:16.7261278Z",
+ "blobPropertiesUpdated" : {
+ "ContentLanguage" : {
+ "current" : "pl-Pl",
+ "previous" : "nl-NL"
+ },
+ "CacheControl" : {
+ "current" : "max-age=100",
+ "previous" : "max-age=99"
+ },
+ "ContentEncoding" : {
+ "current" : "gzip, identity",
+ "previous" : "gzip"
+ },
+ "ContentMD5" : {
+ "current" : "Q2h1Y2sgSW51ZwDIAXR5IQ==",
+ "previous" : "Q2h1Y2sgSW="
+ },
+ "ContentDisposition" : {
+ "current" : "attachment",
+ "previous" : ""
+ },
+ "ContentType" : {
+ "current" : "application/json",
+ "previous" : "application/octet-stream"
+ }
+ },
+ "storageDiagnostics": {
+ "bid": "9d726370-8006-0000-00ff-233467000000",
+ "seq": "(2,18446744073709551615,29,29)",
+ "sid": "4cc94e71-f6be-75bf-e7b2-f9ac41458e5a"
+ }
+ }
+}
+```
+
+#### Schema version 4
+
+The following event types may be captured in the change feed records with schema version 4:
+
+- BlobCreated
+- BlobDeleted
+- BlobPropertiesUpdated
+- BlobSnapshotCreated
+- BlobTierChanged
+- BlobAsyncOperationInitiated
+
+The following example shows a change event record in JSON format that uses event schema version 4:
+
+```json
+{
+ "schemaVersion": 4,
+ "topic": "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
+ "subject": "/blobServices/default/containers/<container>/blobs/<blob>",
+ "eventType": "BlobCreated",
+ "eventTime": "2022-02-17T13:08:42.4835902Z",
+ "id": "ca76bce1-8020-0000-00ff-23346706e769",
+ "data": {
+ "api": "PutBlob",
+ "clientRequestId": "58fbfee9-6cf5-4096-9666-c42980beee65",
+ "requestId": "ca76bce1-8020-0000-00ff-233467000000",
+ "etag": "0x8D9F2169F42D701",
+ "contentType": "application/octet-stream",
+ "contentLength": 128,
+ "blobType": "BlockBlob",
+ "blobVersion": "2022-02-17T16:11:52.5901564Z",
+ "containerVersion": "0000000000000001",
+ "blobTier": "Archive",
+ "url": "https://www.myurl.com",
+ "sequencer": "00000000000000010000000000000002000000000000001d",
+ "previousInfo": {
+ "SoftDeleteSnapshot": "2022-02-17T13:08:42.4825913Z",
+ "WasBlobSoftDeleted": true,
+ "BlobVersion": "2024-02-17T16:11:52.0781797Z",
+ "LastVersion" : "2022-02-17T16:11:52.0781797Z",
+ "PreviousTier": "Hot"
+ },
+ "snapshot": "2022-02-17T16:09:16.7261278Z",
+ "blobPropertiesUpdated" : {
+ "ContentLanguage" : {
+ "current" : "pl-Pl",
+ "previous" : "nl-NL"
+ },
+ "CacheControl" : {
+ "current" : "max-age=100",
+ "previous" : "max-age=99"
+ },
+ "ContentEncoding" : {
+ "current" : "gzip, identity",
+ "previous" : "gzip"
+ },
+ "ContentMD5" : {
+ "current" : "Q2h1Y2sgSW51ZwDIAXR5IQ==",
+ "previous" : "Q2h1Y2sgSW="
+ },
+ "ContentDisposition" : {
+ "current" : "attachment",
+ "previous" : ""
+ },
+ "ContentType" : {
+ "current" : "application/json",
+ "previous" : "application/octet-stream"
+ }
+ },
+ "asyncOperationInfo": {
+ "DestinationTier": "Hot",
+ "WasAsyncOperation": true,
+ "CopyId": "copyId"
+ },
+ "storageDiagnostics": {
+ "bid": "9d72687f-8006-0000-00ff-233467000000",
+ "seq": "(2,18446744073709551615,29,29)",
+ "sid": "4cc94e71-f6be-75bf-e7b2-f9ac41458e5a"
+ }
+ }
+}
+```
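Across the schema versions shown above, a core set of fields is always present, while fields such as `previousInfo` only appear in version 3 and later. The following illustrative Python sketch pulls the common fields out of one record after it has been converted to JSON; the helper name and the choice of fields are examples, not part of any SDK (change feed files themselves are Avro and are normally read with the change feed processor libraries):

```python
import json

def summarize_change_event(record: dict) -> dict:
    """Extract fields common to schema versions 1-4 of a change feed record."""
    data = record.get("data", {})
    summary = {
        "schemaVersion": record["schemaVersion"],
        "eventType": record["eventType"],
        "subject": record["subject"],
        "api": data.get("api"),
        "contentLength": data.get("contentLength"),
    }
    # Fields introduced in later schema versions are simply absent in older records.
    if "previousInfo" in data:
        summary["previousTier"] = data["previousInfo"].get("PreviousTier")
    return summary

record = json.loads("""
{
  "schemaVersion": 4,
  "subject": "/blobServices/default/containers/c/blobs/b",
  "eventType": "BlobCreated",
  "data": {"api": "PutBlob", "contentLength": 128,
           "previousInfo": {"PreviousTier": "Hot"}}
}
""")
print(summarize_change_event(record))
```

Because older records simply lack the newer fields, a consumer written this way can process files produced under any of the schema versions.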
<a id="specifications"></a>
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 03/03/2022 Last updated : 03/07/2022
This article shows how to configure encryption with customer-managed keys stored
## Configure a key vault
-You can use a new or existing key vault to store customer-managed keys. The starage account and key vault may be in different regions or subscriptions in the same tenant. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
+You can use a new or existing key vault to store customer-managed keys. The storage account and key vault may be in different regions or subscriptions in the same tenant. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
Using customer-managed keys with Azure Storage encryption requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
stream-analytics Power Bi Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/power-bi-output.md
Datetime | String | String | Datetime | String
## Limitations and best practices Currently, Power BI can be called roughly once per second. Streaming visuals support packets of 15 KB. Beyond that, streaming visuals fail (but push continues to work). Because of these limitations, Power BI lends itself most naturally to cases where Azure Stream Analytics does a significant data load reduction. We recommend using a Tumbling window or Hopping window to ensure that data push is at most one push per second, and that your query lands within the throughput requirements.
-For more info on output batch size, see [Power BI Rest API limits](/power-bi/developer/automation/api-rest-api-limitations).
+For more info on output batch size, see [Power BI REST API limits](/power-bi/developer/automation/api-rest-api-limitations).
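The data-reduction idea behind the tumbling-window recommendation can be sketched outside of Stream Analytics. This illustrative Python snippet (plain Python, not the Stream Analytics query language) groups per-event readings into fixed one-second windows and emits a single average per window, so the downstream push rate stays within the roughly once-per-second Power BI limit:

```python
from collections import defaultdict

def tumbling_window_average(events, window_seconds=1):
    """Group (timestamp_seconds, value) events into fixed, non-overlapping
    windows and emit one average per window - one push instead of many."""
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[int(ts // window_seconds)].append(value)
    # One output row per window: {window_start: average}
    return {w * window_seconds: sum(v) / len(v) for w, v in sorted(buckets.items())}

events = [(0.1, 10), (0.5, 20), (0.9, 30), (1.2, 40), (1.8, 60)]
print(tumbling_window_average(events))  # {0: 20.0, 1: 50.0}
```

Five input events collapse into two output rows here, which is exactly the kind of load reduction that keeps a streaming dataset under the Power BI call and packet limits.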
## Next steps
stream-analytics Stream Analytics Quick Create Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-quick-create-vs.md
This quickstart shows you how to create and run a Stream Analytics job using Azure Stream Analytics tools for Visual Studio. The example job reads streaming data from an IoT Hub device. You define a job that calculates the average temperature when over 27° and writes the resulting output events to a new file in blob storage.
-> [!NOTE]
-> Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
+> [!NOTE]
+> - We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./stream-analytics-quick-create-vs.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and they won't be addressed going forward.
+> - Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
## Before you begin
synapse-analytics How To Move Workspace From One Region To Another https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md
Title: Move an Azure Synapse Analytics workspace from region to another description: This article teaches you how to move an Azure Synapse Analytics workspace from one region to another. -+ Last updated 08/16/2021-+
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| ![Information Builders](./media/business-intelligence/informationbuilders_logo.png) |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Product page](https://www.informationbuilders.com/products/bi-and-analytics-platform)<br> | | ![Jinfonet](./media/business-intelligence/jinfonet_logo.png) |**Jinfonet JReport**<br>JReport is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Product page](https://www.logianalytics.com/jreport/)<br> | | ![LogiAnalytics](./media/business-intelligence/logianalytics_logo.png) |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Product page](https://www.logianalytics.com/)<br>|
-| ![Looker](./media/business-intelligence/looker_logo.png) |**Looker BI**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Product page](https://looker.com/partners/microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
+| ![Looker](./media/business-intelligence/looker_logo.png) |**Looker BI**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Product page](https://looker.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
| ![Microstrategy](./media/business-intelligence/microstrategy_logo.png) |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[Product page](https://www.microstrategy.com/us/product/analytics)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_enterprise_platform_vm)<br> | | ![Mode Analytics](./media/business-intelligence/mode-logo.png) |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Product page](https://mode.com/)<br> |
-| ![Pyramid Analytics](./media/business-intelligence/pyramid-logo.png) |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Product page](https://www.pyramidanalytics.com/analytics-os)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020v4) |
+| ![Pyramid Analytics](./media/business-intelligence/pyramid-logo.png) |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help, on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Product page](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020v4) |
| ![Qlik](./media/business-intelligence/qlik_logo.png) |**Qlik Sense Enterprise**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Product page](https://www.qlik.com/us/products/qlik-sense/enterprise)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) | | ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>| | ![SiSense](./media/business-intelligence/sisense_logo.png) |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[Product page](https://www.sisense.com/product/)<br> |
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![TimeXtender](./media/data-integration/timextender-logo.png) |**TimeXtender**<br>TimeXtender's Discovery Hub helps companies build a modern data estate by providing an integrated data management platform that accelerates time to data insights by up to 10 times. Going beyond everyday ETL and ELT, it provides capabilities for data access, data modeling, and compliance in a single platform. Discovery Hub provides a cohesive data fabric for cloud scale analytics. It allows you to connect and integrate various data silos, catalog, model, move, and document data for analytics and AI. | [Product page](https://www.timextender.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=timextender&page=1) | | ![Trifacta](./media/data-integration/trifacta_logo.png) |**Trifacta Wrangler**<br> Trifacta helps individuals and organizations explore, and join together diverse data for analysis. Trifacta Wrangler is designed to handle data wrangling workloads that need to support data at scale and a large number of end users.|[Product page](https://www.trifacta.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trifactainc1587522950142.trifactaazure?tab=Overview) | | ![WhereScape](./media/data-integration/wherescape_logo.png) |**Wherescape RED**<br> WhereScape RED is an IDE that provides teams with automation tools to streamline ETL workflows. The IDE provides best practice, optimized native code for popular data targets. Use WhereScape RED to cut the time to develop, deploy, and operate your data infrastructure.|[Product page](https://www.wherescape.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/wherescapesoftware.wherescape-red?source=datamarket&tab=Overview) |
-| ![Xplenty](./media/data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. Xplenty's point & select, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via Rest API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
+| ![Xplenty](./media/data-integration/xplenty-logo.png) |**Xplenty**<br> Xplenty ELT platform lets you quickly and easily prepare your data for analytics and production use cases using a simple cloud service. Xplenty's point & select, drag & drop interface enables data integration, processing and preparation without installing, deploying, or maintaining any software. Connect and integrate with a wide set of data repositories and SaaS applications including Azure Synapse, Azure blob storage, and SQL Server. Xplenty also supports all Web Services that are accessible via REST API.|[Product page](https://www.xplenty.com/integrations/azure-synapse-analytics/ )<br> |
## Next steps
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
The number of tasks per each job or stage help you to identify the parallel leve
![Screenshot of spark-progress-indicator](./media/apache-spark-development-using-notebooks/synapse-spark-progress-indicator.png)
-### Spark session config
+### Spark session configuration
You can specify the timeout duration, the number, and the size of executors to give to the current Spark session in **Configure session**. Restart the Spark session for configuration changes to take effect. All cached notebook variables are cleared. [![Screenshot of session-management](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-spark-session-management.png)](./media/apache-spark-development-using-notebooks/synapse-azure-notebook-spark-session-management.png#lightbox)
-#### Spark session config magic command
+#### Spark session configuration magic command
You can also specify Spark session settings via the magic command **%%configure**. The Spark session needs to restart for the settings to take effect. We recommend running **%%configure** at the beginning of your notebook. Here's a sample; refer to https://github.com/cloudera/livy#request-body for the full list of valid parameters. ```json
You can also specify spark session settings via a magic command **%%configure**.
``` > [!NOTE] > - We recommend setting "DriverMemory" and "ExecutorMemory" to the same value in %%configure, and likewise "driverCores" and "executorCores".
-> - You can use Spark session config magic command in Synapse pipelines. It only takes effect when it's called in the top level. The %%configure used in referenced notebook is going to be ignored.
-> - The Spark configuration properties has to be used in the "conf" body. We do not support top level reference for the Spark configuration properties.
+> - You can use %%configure in Synapse pipelines, but if it isn't set in the first code cell, the pipeline run will fail because the session can't be restarted.
+> - A %%configure used in mssparkutils.notebook.run is ignored, but one used in a notebook run with %run continues to execute.
+> - The standard Spark configuration properties must be used in the "conf" body. We do not support first-level references for the Spark configuration properties.
+> - Some special Spark properties, including "spark.driver.cores", "spark.executor.cores", "spark.driver.memory", "spark.executor.memory", and "spark.executor.instances", won't take effect in the "conf" body.
> +
+#### Parameterized session configuration from pipeline
+
+Parameterized session configuration allows you to replace values in the %%configure magic with pipeline run (Notebook activity) parameters. When preparing the %%configure code cell, you can override the default values (which are also configurable, 4 and "2000" in the example below) with an object like this:
+
+```json
+{
+ "activityParameterName": "parameterNameInPipelineNotebookActivity",
+ "defaultValue": "defaultValueIfNoParameterFromPipelineNotebookActivity"
+}
+```
+
+```python
+%%configure
+
+{
+ "driverCores":
+ {
+ "activityParameterName": "driverCoresFromNotebookActivity",
+ "defaultValue": 4
+ },
+ "conf":
+ {
+ "livy.rsc.sql.num-rows":
+ {
+ "activityParameterName": "rows",
+ "defaultValue": "2000"
+ }
+ }
+}
+```
+
+The notebook will use the default value if you run the notebook in interactive mode directly, or if the pipeline Notebook activity doesn't supply a parameter that matches "activityParameterName".
+
+In pipeline run mode, you can configure the pipeline Notebook activity settings as shown below:
+![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png)
+
+If you want to change the session configuration, the pipeline Notebook activity parameter name should be the same as activityParameterName in the notebook. When this pipeline runs, in this example the driverCores value in %%configure is replaced by 8 and livy.rsc.sql.num-rows is replaced by 4000.
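The resolution rule described above (use the matching activity parameter when the pipeline supplies one, otherwise fall back to defaultValue) can be sketched as a small helper. The function name and dictionary shapes here are illustrative, not part of the Synapse API:

```python
def resolve_configure_value(setting: dict, activity_parameters: dict):
    """Resolve one %%configure entry of the form
    {"activityParameterName": ..., "defaultValue": ...} against the
    parameters supplied by a pipeline Notebook activity (empty for an
    interactive run)."""
    name = setting["activityParameterName"]
    return activity_parameters.get(name, setting["defaultValue"])

driver_cores = {"activityParameterName": "driverCoresFromNotebookActivity",
                "defaultValue": 4}

# Interactive run: no activity parameters, so the default applies.
print(resolve_configure_value(driver_cores, {}))                                      # 4
# Pipeline run passing driverCoresFromNotebookActivity=8 overrides the default.
print(resolve_configure_value(driver_cores, {"driverCoresFromNotebookActivity": 8}))  # 8
```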
+
+> [!NOTE]
+> If a pipeline run fails because of this new %%configure magic, you can get more error information by running the %%configure magic cell in the notebook's interactive mode.
+>
++ ## Bring data to a notebook You can load data from Azure Blob Storage, Azure Data Lake Store Gen 2, and SQL pool as shown in the code samples below.
Available line magics:
[%lsmagic](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-lsmagic), [%time](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-time), [%timeit](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit), [%history](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-history), [%run](#notebook-reference), [%load](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-load) Available cell magics:
-[%%time](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-time), [%%timeit](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit), [%%capture](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-capture), [%%writefile](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-writefile), [%%sql](#use-multiple-languages), [%%pyspark](#use-multiple-languages), [%%spark](#use-multiple-languages), [%%csharp](#use-multiple-languages), [%%html](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-html), [%%configure](#spark-session-config-magic-command)
+[%%time](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-time), [%%timeit](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit), [%%capture](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-capture), [%%writefile](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-writefile), [%%sql](#use-multiple-languages), [%%pyspark](#use-multiple-languages), [%%spark](#use-multiple-languages), [%%csharp](#use-multiple-languages), [%%html](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-html), [%%configure](#spark-session-configuration-magic-command)
+## Reference unpublished notebook
+
+Referencing an unpublished notebook is helpful when you want to debug "locally". When this feature is enabled, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference-notebook statement, you reference the notebooks presented in the current notebook browser instead of the saved versions in the cluster. This means changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). With this approach, you can easily avoid polluting common libraries during development or debugging.
+
+For a comparison of the different cases, see the table below:
+
+Notice that [%run](./apache-spark-development-using-notebooks.md) and [mssparkutils.notebook.run](./microsoft-spark-utilities.md) have the same behavior here. We use `%run` as an example.
+
+|Case|Disable|Enable|
+|-|-|-|
+|**Live Mode**|||
+|- Nb1 (Published) <br/> `%run Nb1`|Run published version of Nb1|Run published version of Nb1|
+|- Nb1 (New) <br/> `%run Nb1`|Error|Run new Nb1|
+|- Nb1 (Previously published, edited) <br/> `%run Nb1`|Run **published** version of Nb1|Run **edited** version of Nb1|
+|**Git Mode**|||
+|- Nb1 (Published) <br/> `%run Nb1`|Run published version of Nb1|Run published version of Nb1|
+|- Nb1 (New) <br/> `%run Nb1`|Error|Run new Nb1|
+|- Nb1 (Not published, committed) <br/> `%run Nb1`|Error|Run committed Nb1|
+|- Nb1 (Previously published, committed) <br/> `%run Nb1`|Run **published** version of Nb1|Run **committed** version of Nb1|
+|- Nb1 (Previously published, new in current branch) <br/> `%run Nb1`|Run **published** version of Nb1|Run **new** Nb1|
+|- Nb1 (Not published, previously committed, edited) <br/> `%run Nb1`|Error|Run **edited** version of Nb1|
+|- Nb1 (Previously published and committed, edited) <br/> `%run Nb1`|Run **published** version of Nb1|Run **edited** version of Nb1|
+
+
+## Conclusion
+
+* If disabled, always run the **published** version.
+* If enabled, priority is: edited / new > committed > published.
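The precedence rules above can be expressed as a small selector. This is an illustrative model of the behavior in the table, not Synapse code; `None` stands in for a version that doesn't exist (the "Error" rows):

```python
def pick_notebook_version(unpublished_enabled: bool, edited_or_new=None,
                          committed=None, published=None):
    """Choose which copy of a referenced notebook %run would execute.
    Disabled: only the published version counts. Enabled: the web-cache
    (edited/new) copy wins, then the committed copy, then the published one."""
    if not unpublished_enabled:
        return published
    for candidate in (edited_or_new, committed, published):
        if candidate is not None:
            return candidate
    return None

# Not published but committed: an error when disabled, the committed copy when enabled.
print(pick_notebook_version(False, committed="Nb1@commit"))  # None
print(pick_notebook_version(True, committed="Nb1@commit"))   # Nb1@commit
```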
+++ ## Integrate a notebook ### Add a notebook to a pipeline
synapse-analytics Apache Spark Notebook Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-notebook-concept.md
To learn more on how you can create and manage notebooks, see the following arti
- [Use multiple languages using magic commands and temporary tables](./spark/../apache-spark-development-using-notebooks.md#integrate-a-notebook) - [Use cell magic commands](./spark/../apache-spark-development-using-notebooks.md#magic-commands) - Development
- - [Configure Spark session settings](./spark/../apache-spark-development-using-notebooks.md#spark-session-config)
+ - [Configure Spark session settings](./spark/../apache-spark-development-using-notebooks.md#spark-session-configuration)
- [Use Microsoft Spark utilities](./spark/../microsoft-spark-utilities.md) - [Visualize data using notebooks and libraries](./spark/../apache-spark-data-visualization.md) - [Integrate a notebook into pipelines](./spark/../apache-spark-development-using-notebooks.md#integrate-a-notebook)
synapse-analytics Performance Tuning Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-materialized-views.md
In comparison to other tuning options such as scaling and statistics management,
**Need different data distribution strategy for faster query performance**
-Dedicated SQL pool is a distributed query processing system. Data in a SQL table is distributed across 60 nodes using one of three [distribution strategies](sql-data-warehouse-tables-distribute.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) (hash, round_robin, or replicated).
+Dedicated SQL pool is a distributed query processing system. Data in a SQL table is distributed across up to 60 nodes using one of three [distribution strategies](sql-data-warehouse-tables-distribute.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) (hash, round_robin, or replicated).
The data distribution is specified at the table creation time and stays unchanged until the table is dropped. Materialized view, being a virtual table on disk, supports hash and round_robin data distributions. Users can choose a data distribution that is different from the base tables but optimal for the performance of queries that use the views.
virtual-machine-scale-sets Instance Generalized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version-cli.md
Title: Create a scale set from a generalized image with Azure CLI description: Create a scale set using a generalized image in an Azure Compute Gallery using the Azure CLI.-++ Last updated 05/01/2020--+ # Create a scale set from a generalized image with Azure CLI
virtual-machine-scale-sets Instance Generalized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version-powershell.md
Title: Create a scale set from a generalized image with Azure PowerShell description: Create a scale set using a generalized image in an Azure Compute Gallery using PowerShell.-++ Last updated 05/04/2020--+
virtual-machine-scale-sets Instance Specialized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version-cli.md
Title: Create a scale set from a specialized image version using the Azure CLI description: Create a scale set using a specialized image version in an Azure Compute Gallery using the Azure CLI.-++ Last updated 05/01/2020--+
virtual-machine-scale-sets Instance Specialized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version-powershell.md
Title: Create a scale set from a specialized image description: Create a scale set using a specialized image in an Azure Compute Gallery.-++ Last updated 05/04/2020--+
virtual-machine-scale-sets Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/share-images-across-tenants.md
Title: Share gallery images across tenants description: Learn how to create scale sets using images that are shared across Azure tenants using Shared Image Galleries.--++ Last updated 04/05/2019--++ # Share images across tenants with Azure Compute Gallery
virtual-machines Create Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-gallery.md
Title: Create an Azure Compute Gallery for sharing resources description: Learn how to create an Azure Compute Gallery.-+ Last updated 10/05/2021-++ ms.devlang: azurecli
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
az feature show --namespace Microsoft.Compute --name CreateOptionClone
### Restrictions - Cross-region snapshot copy is currently only available in Central US, East US, East US 2, Germany West Central, North Central US, North Europe, South Central US, West Central US, West US, West US 2, West Europe, South India, Central India-- You must use version 2020-12-01 or newer of the Azure Compute Rest API.
+- You must use version 2020-12-01 or newer of the Azure Compute REST API.
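The API-version requirement above can be made concrete with a small sketch that composes the Azure Compute REST URL for a snapshot resource and guards against an older `api-version`. The subscription, resource group, and snapshot names are placeholders, and the exact path shape should be checked against the current Compute REST reference:

```python
# Illustrative sketch: building an Azure Compute snapshot REST URL with
# the minimum API version the restriction above requires.
from urllib.parse import urlencode

ARM_ENDPOINT = "https://management.azure.com"
MIN_API_VERSION = "2020-12-01"

def snapshot_url(subscription_id, resource_group, snapshot_name,
                 api_version=MIN_API_VERSION):
    # Lexical comparison works for ISO-style yyyy-mm-dd version strings.
    if api_version < MIN_API_VERSION:
        raise ValueError(f"Use API version {MIN_API_VERSION} or newer")
    path = (f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Compute/snapshots/{snapshot_name}")
    return f"{ARM_ENDPOINT}{path}?{urlencode({'api-version': api_version})}"

url = snapshot_url("0000-0000", "myGroup", "mySnapshot")
```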
### Get started
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
description: Learn how to install and configure Linux Agent (waagent) to manage
--++ Last updated 10/17/2016
virtual-machines Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-windows.md
description: Azure Virtual Machine Agent Overview
--++ Last updated 07/20/2019
virtual-machines Chef https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/chef.md
description: Deploy the Chef Client to a virtual machine using the Chef VM Exten
--++ Last updated 09/21/2018
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
description: Automate Linux VM configuration tasks by using the Custom Script Ex
--++ Last updated 04/25/2018
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
description: Automate Windows VM configuration tasks by using the Custom Script
--++ Last updated 08/31/2020
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
description: How to configure the Azure Linux diagnostic extension (LAD) 3.0 to
--++ Last updated 12/13/2018
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
description: How to configure the Azure Linux diagnostic extension (LAD) 4.0 to
--++ Last updated 02/05/2021
virtual-machines Diagnostics Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-template.md
description: Use an Azure Resource Manager template to create a new Windows virt
--++ Last updated 05/31/2017
virtual-machines Diagnostics Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-windows.md
Title: Use Azure PowerShell to enable diagnostics on a Windows VM description: Learn how to use PowerShell to enable Azure Diagnostics in a virtual machine running Windows --++
virtual-machines Export Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/export-templates.md
description: Export Resource Manager templates that include virtual machine exte
--++ Last updated 12/05/2016
virtual-machines Extensions Rmpolicy Howto Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-cli.md
description: Use Azure Policy to restrict VM extension deployments.
--++ Last updated 03/23/2018
virtual-machines Extensions Rmpolicy Howto Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-ps.md
description: Use Azure Policy to restrict extension deployments.
--++ Last updated 03/23/2018
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
description: Learn what extensions are available for Azure virtual machines on L
--++ Last updated 03/30/2018
virtual-machines Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md
description: Learn what extensions are available for Azure virtual machines on W
--++ Last updated 03/30/2018
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
description: Deploy the Network Watcher Agent on Linux virtual machine using a v
--++ Last updated 02/14/2017
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
description: Deploy the Network Watcher Agent on Windows virtual machine using a
--++ Last updated 02/14/2017
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
description: Deploy the Log Analytics agent on Linux virtual machine using a vir
--++ Last updated 11/02/2021
virtual-machines Oms Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-windows.md
description: Deploy the Log Analytics agent on Windows virtual machine using a v
--++ Last updated 11/02/2021
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/overview.md
description: Learn more about Azure VM extensions
--++ Last updated 08/03/2020
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
description: Deploy the Stackify Retrace Linux agent on a Linux virtual machine.
--++ Last updated 04/12/2018
virtual-machines Symantec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/symantec.md
description: Learn how to install and configure the Symantec Endpoint Protection
--++ Last updated 03/31/2017
virtual-machines Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/troubleshoot.md
description: Learn about troubleshooting Azure Windows VM extension failures
--++ Last updated 03/29/2016
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
description: Learn how to update Azure Linux Agent for your Linux VM in Azure
--++ Last updated 08/02/2017
virtual-machines Vmaccess https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess.md
description: How to manage administrative users and reset access on Linux VMs us
--++ Last updated 05/10/2018
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
To attach a disk with Write Accelerator enabled use [az vm disk attach](/cli/azu
To disable Write Accelerator, use [az vm update](/cli/azure/vm#az_vm_update), setting the properties to false: `az vm update -g group1 -n vm1 --write-accelerator 0=false 1=false`
-## Enabling Write Accelerator using Rest APIs
+## Enabling Write Accelerator using REST APIs
-To deploy through Azure Rest API, you need to install the Azure armclient.
+To deploy through Azure REST API, you need to install the Azure armclient.
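Before reaching for armclient, it helps to see the shape of the update such a REST call carries. The sketch below builds a hypothetical request body that flips `writeAcceleratorEnabled` on one data disk; the property name follows the Compute API's disk schema, but treat the exact shape as an assumption and verify it against the current API reference:

```python
# Hypothetical sketch of the VM-definition fragment a REST update would
# carry to enable Write Accelerator on a single data disk.
import json

def enable_write_accelerator(vm_definition, lun):
    """Return a copy of a VM definition with Write Accelerator turned on
    for the data disk at the given LUN."""
    vm = json.loads(json.dumps(vm_definition))  # deep copy via round-trip
    for disk in vm["properties"]["storageProfile"]["dataDisks"]:
        if disk["lun"] == lun:
            disk["writeAcceleratorEnabled"] = True
    return vm

vm = {
    "properties": {
        "storageProfile": {
            "dataDisks": [
                {"lun": 0, "name": "data0"},
                {"lun": 1, "name": "log0"},
            ]
        }
    }
}
updated = enable_write_accelerator(vm, lun=1)
```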
### Install armclient
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
Title: Create an image definition and image version description: Learn how to create an image in an Azure Compute Gallery.-+ Last updated 08/31/2021-++
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
Title: Find and use marketplace purchase plan information using the CLI description: Learn how to use the Azure CLI to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.- Last updated 03/22/2021-++
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/share-images-across-tenants.md
Last updated 05/04/2019 ++ # Share gallery VM images across Azure tenants using the Azure CLI
virtual-machines Shared Images Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/shared-images-portal.md
Title: Create shared Azure Linux VM images using the portal description: Learn how to use Azure portal to create and share Linux virtual machine images.- Last updated 06/21/2021-+++ #Customer intent: As an IT administrator, I want to learn about how to create shared VM images to minimize the number of post-deployment configuration tasks.
virtual-machines Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/marketplace-images.md
Title: Specify Marketplace purchase plan information using Azure PowerShell description: Learn how to specify Azure Marketplace purchase plan details when creating images in an Azure Compute Gallery (formerly known as Shared Image Gallery).-+ Last updated 07/07/2020-++
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
Title: Share a gallery using RBAC description: Learn how to share a gallery using role-based access control (RBAC).-+ Last updated 08/31/2021-++ ms.devlang: azurecli
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
Title: Share VM images in a compute gallery description: Learn how to use an Azure Compute Gallery to share VM images.-++
virtual-machines Troubleshooting Shared Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/troubleshooting-shared-images.md
Title: Troubleshoot problems with shared images in Azure description: Learn how to troubleshoot problems with shared images in Azure Compute Galleries. +++
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
Title: List, update, and delete image resources description: List, update, and delete image resources in your Azure Compute Gallery.-+ Last updated 08/05/2021-++
virtual-machines User Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/user-data.md
Virtual machine scale set VM:
## Updating user data
-With Rest API, you can use a normal PUT or PATCH request to update the user data. The user data will be updated without the need to stop or reboot the VM.
+With REST API, you can use a normal PUT or PATCH request to update the user data. The user data will be updated without the need to stop or reboot the VM.
`PUT "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}
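A minimal sketch of preparing the request body for that PUT or PATCH: the Compute API expects `userData` as a base64-encoded string, so the payload encodes the data before sending. The script content here is a placeholder:

```python
# Sketch: building a user-data update body for the PUT/PATCH described
# above. userData must be base64-encoded in the request.
import base64
import json

def user_data_patch_body(user_data: str) -> str:
    encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
    return json.dumps({"properties": {"userData": encoded}})

body = user_data_patch_body("#!/bin/sh\necho hello")
```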
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Title: Create a VM from a generalized image in a gallery description: Create a VM from a generalized image in a gallery.-+ Last updated 08/31/2021-++
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
Title: Create a VM from a specialized image version description: Create a VM using a specialized image version in an Azure Compute Gallery.-+ Last updated 08/05/2021-++
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
Title: Find and use marketplace purchase plan information using PowerShell description: Use Azure PowerShell to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.- Last updated 03/17/2021-++ # Find and use Azure Marketplace VM images with Azure PowerShell
virtual-machines Client Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/client-images.md
Title: Use Windows client images in Azure description: How to use Visual Studio subscription benefits to deploy Windows 7, Windows 8, or Windows 10 in Azure for dev/test scenarios-++ Last updated 12/15/2017- # Use Windows client in Azure for dev/test scenarios
virtual-machines Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/share-images-across-tenants.md
Last updated 07/15/2019++
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
NAT won't affect the network bandwidth of your compute resources since it's a so
## Virtual Network NAT basics
-NAT can be created in a specific availability zone and has redundancy built in within the specified zone. NAT is non-zonal by default. When you create [availability zones](../../availability-zones/az-overview.md) scenarios, NAT can be isolated in a specific zone. This deployment is called a zonal deployment.
+NAT can be created in a specific availability zone and has redundancy built in within the specified zone. NAT is non-zonal by default. A non-zonal Virtual Network NAT is one that hasn't been associated with a specific zone and is instead assigned to a zone by Azure.