Updates from: 02/01/2022 02:09:28
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-f5.md
Title: Tutorial to extend Azure Active Directory B2C with F5 BIG-IP
+ Title: Tutorial to enable Secure Hybrid Access to applications with Azure AD B2C and F5 BIG-IP
description: Learn how to integrate Azure AD B2C authentication with F5 BIG-IP for secure hybrid access. Last updated: 10/15/2021
-# Tutorial: Extend Azure Active Directory B2C using F5 BIG-IP
+# Tutorial: Secure Hybrid Access to applications with Azure AD B2C and F5 BIG-IP
In this sample tutorial, learn how to integrate Azure Active Directory (Azure AD) B2C with [F5 BIG-IP Access Policy Manager (APM)](https://www.f5.com/services/resources/white-papers/easily-configure-secure-access-to-all-your-applications-via-azure-active-directory). This tutorial demonstrates how legacy applications can be securely exposed to the internet through BIG-IP security combined with Azure AD B2C pre-authentication, Conditional Access (CA), and Single sign-on (SSO).
The following diagram illustrates the Service Provider (SP) initiated flow for this scenario.
| 5. | The OIDC client asks the authorization server to exchange the authorization code for an ID token |
| 6. | BIG-IP APM grants user access and injects the HTTP headers in the client request forwarded on to the application |
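To make step 5 concrete, here is a rough C# sketch of the code-for-token exchange the BIG-IP's OIDC client performs against the Azure AD B2C token endpoint. The APM does this internally; the endpoint path, policy name, and credential values below are placeholders for illustration only.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class OidcCodeExchangeSketch
{
    // Placeholder endpoint; a real deployment reads this from the APM's OIDC provider configuration.
    const string TokenEndpoint =
        "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<policy>/oauth2/v2.0/token";

    static async Task<string> RedeemAuthorizationCodeAsync(
        HttpClient http, string clientId, string clientSecret, string code, string redirectUri)
    {
        // Step 5: exchange the authorization code for tokens, including the ID token.
        var response = await http.PostAsync(TokenEndpoint, new FormUrlEncodedContent(
            new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["code"] = code,
                ["redirect_uri"] = redirectUri,
            }));
        response.EnsureSuccessStatusCode();

        // JSON body containing id_token, which the APM validates before granting access (step 6).
        return await response.Content.ReadAsStringAsync();
    }
}
```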
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, in that way forcing a strict path through the BIG-IP.
- ## Azure AD B2C Configuration

Enabling a BIG-IP with Azure AD B2C authentication requires an Azure AD B2C tenant with a suitable user flow or custom policy. [Set up an Azure AD B2C user flow](tutorial-create-user-flows.md).
You will then be redirected to sign up and authenticate against your Azure AD B2C tenant.
![Screenshot shows post sign in welcome message](./media/partner-f5/welcome-page.png)
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, in that way forcing a strict path through the BIG-IP.
+ ### Supplemental configurations

**Single Log-Out (SLO)**
One optional step for improving the user login experience would be to suppress t
![Screenshot shows optimized login flow](./media/partner-f5/optimized-login-flow.png)
- Unlocking the strict configuration prevents any further changes via the wizard UI, leaving all BIG-IP objects associated with the published instance of the application open for direct management.
+Unlocking the strict configuration prevents any further changes via the wizard UI, leaving all BIG-IP objects associated with the published instance of the application open for direct management.
2. Navigate to **Access** > **Profiles/ Policies** > **Access Profiles (Per-session Policies)** and select the **Per-Session Policy** Edit link for the application's policy object.
Your application's logs would then help you understand if it received those attributes.
![Screenshot shows the error message](./media/partner-f5/error-message.png)
- This is a policy violation due to the BIG-IP's inability to validate the signature of the token issued by Azure AD B2C. The same access log should be able to provide more detail on the issue.
+This is a policy violation due to the BIG-IP's inability to validate the signature of the token issued by Azure AD B2C. The same access log should be able to provide more detail on the issue.
![Screenshot shows the access logs](./media/partner-f5/access-log.png)
- The exact root cause is still being investigated by F5 engineering, but the issue appears related to the AGC not enabling the Auto JWT setting during deployment, thereby preventing the APM from obtaining the current token signing keys.
+The exact root cause is still being investigated by F5 engineering, but the issue appears related to the AGC not enabling the Auto JWT setting during deployment, thereby preventing the APM from obtaining the current token signing keys.
Until resolved, one way to work around the issue is to manually enable this setting.
Your application's logs would then help you understand if it received those attributes.
4. Check the **Use Auto JWT** box then select **Discover**, followed by **Save**.
- You should now see the Key (JWT) field populated with the key ID (KID) of the token signing certificate provided through the OpenID URI metadata.
+You should now see the Key (JWT) field populated with the key ID (KID) of the token signing certificate provided through the OpenID URI metadata.
5. Finally, select the yellow **Apply Access Policy** option in the top left-hand corner, located next to the F5 logo. Apply those settings and select **Apply** again to refresh the access profile list.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
// Then, choose your implementation of distributed cache
-// For instance, the distributed in-memory cache (not cleared when you stop the app)
+// good for prototyping and testing, but this is NOT persisted and it is NOT distributed - do not use in production
services.AddDistributedMemoryCache(); // Or a Redis cache
services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
```

For more information, see:

- [Difference between in-memory and distributed in-memory caches](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#inmemory-vs-distributedmemory-cache-options)
- [Distributed cache advanced options](https://github.com/AzureAD/microsoft-identity-web/wiki/L1-Cache-in-Distributed-(L2)-Token-Cache)
- [Handle L2 cache eviction](https://github.com/AzureAD/microsoft-identity-web/wiki/Handle-L2-cache-eviction)
- [Set up a Redis cache in Docker](https://github.com/AzureAD/microsoft-identity-web/wiki/Set-up-a-Redis-cache-in-Docker)
+- [Troubleshooting](https://github.com/AzureAD/microsoft-identity-web/wiki/Token-Cache-Troubleshooting)
The usage of a distributed cache is featured in the [ASP.NET Core web app tutorial](/aspnet/core/tutorials/first-mvc-app/) in the [phase 2-2 token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache).
You can also specify options to limit the size of the in-memory token cache:

```csharp
// Microsoft.Identity.Web pattern: cap how much memory the serialized cache can use.
app.AddInMemoryTokenCache(services =>
{
    services.Configure<MemoryCacheOptions>(options =>
    {
        options.SizeLimit = 500 * 1024 * 1024; // in bytes (500 MB here, as an example)
    });
});
```

#### Distributed caches

If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a distributed memory cache, a SQL Server cache, a Redis cache, or an Azure Cosmos DB cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
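As an illustration of plugging in one of those `IDistributedCache` implementations, here is a minimal sketch using the standard ASP.NET Core Redis cache package (`Microsoft.Extensions.Caching.StackExchangeRedis`); the connection string and instance name are placeholder values:

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class RedisTokenCacheSetupSketch
{
    public static void ConfigureCache(IServiceCollection services)
    {
        // Registers a Redis-backed IDistributedCache, which the distributed
        // token cache adapter then uses as its L2 store.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "localhost:6379"; // placeholder connection string
            options.InstanceName = "TokenCache";      // placeholder key prefix
        });
    }
}
```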
Here's the code for an Azure Cosmos DB cache:
For more information about distributed caches, see:

- [Difference between in-memory and distributed in-memory caches](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#inmemory-vs-distributedmemory-cache-options)
- [Distributed cache advanced options](https://github.com/AzureAD/microsoft-identity-web/wiki/L1-Cache-in-Distributed-(L2)-Token-Cache)
- [Handle L2 cache eviction](https://github.com/AzureAD/microsoft-identity-web/wiki/Handle-L2-cache-eviction)
- [Set up a Redis cache in Docker](https://github.com/AzureAD/microsoft-identity-web/wiki/Set-up-a-Redis-cache-in-Docker)
+- [Troubleshooting](https://github.com/AzureAD/microsoft-identity-web/wiki/Token-Cache-Troubleshooting)
### Disabling a legacy token cache
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-app-configuration.md
Daemon applications use application permissions rather than delegated permissions.
The authority specified in the application configuration should be tenanted (specifying a tenant ID or a domain name associated with your organization).
-Even if want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service cannot reliably infer which tenant should be used.
+Even if you want to provide a multitenant tool, you should use a tenant ID or domain name, and **not** `common` or `organizations` with this flow, because the service cannot reliably infer which tenant should be used.
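As a minimal sketch of what a tenanted authority looks like when building an MSAL.NET confidential client (the IDs and secret below are placeholders):

```csharp
using System;
using Microsoft.Identity.Client;

// Placeholders; real values come from your app registration.
string clientId = "<client-id>";
string clientSecret = "<client-secret>";
string tenantId = "<tenant-id>"; // or a domain name such as contoso.onmicrosoft.com

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create(clientId)
    .WithClientSecret(clientSecret)
    // Tenanted authority: never "common" or "organizations" for this flow.
    .WithAuthority(new Uri($"https://login.microsoftonline.com/{tenantId}"))
    .Build();
```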
## Configure and instantiate the application
active-directory Scenario Daemon Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-overview.md
Here are some examples of use cases for daemon apps:
There's another common case where non-daemon applications use client credentials: even when they act on behalf of users, they need to access a web API or a resource under their own identity for technical reasons. An example is access to secrets in Azure Key Vault or Azure SQL Database for a cache.
+> [!NOTE]
+> You can't deploy a daemon application to a regular user's device, and a regular user can't access a daemon application. Only a limited set of IT administrators can access devices that have daemon applications running, so a bad actor can't access a client secret or token from device traffic and act on behalf of the daemon application. The daemon application scenario doesn't replace device authentication.
+>
+> Examples of non-daemon applications:
+> - A mobile application that accesses a web service on behalf of an application, but not on behalf of a user.
+> - An IoT device that accesses a web service on behalf of a device, but not on behalf of a user.
+>
+ Applications that acquire a token for their own identities:

- Are confidential client applications. These apps, given that they access resources independently of users, need to prove their identity. They're also rather sensitive apps. They need to be approved by the Azure Active Directory (Azure AD) tenant admins.
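For instance, here is a minimal MSAL.NET sketch of a daemon acquiring a token for its own identity; the confidential client `app` is assumed to have been built with a tenanted authority and a client credential, and the `.default` scope resolves to whatever application permissions were granted to the app:

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class DaemonTokenSketch
{
    static async Task<string> GetAppTokenAsync(IConfidentialClientApplication app)
    {
        // ".default" covers all application permissions consented to for this app.
        string[] scopes = { "https://graph.microsoft.com/.default" };

        AuthenticationResult result = await app
            .AcquireTokenForClient(scopes)
            .ExecuteAsync();

        return result.AccessToken; // used to call the API as the application itself
    }
}
```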
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
Create a folder to host your application, for example *ElectronDesktopApp*.
width: 800,
height: 600,
webPreferences: {
- nodeIntegration: true
+ nodeIntegration: true,
+ contextIsolation: false
}
});
active-directory Workload Identity Federation Create Trust Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identity-federation-create-trust-github.md
Previously updated : 10/18/2021 Last updated : 01/28/2022
In the **Federated credential scenario** drop-down box, select **GitHub actions deploying Azure resources**.
Specify the **Organization** and **Repository** for your GitHub Actions workflow.
-For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value.
+For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). For more info, read the [examples](#entity-type-examples).
Add a **Name** for the federated credential.
Click **Add** to configure the federated credential.
> [!IMPORTANT]
> The **Organization**, **Repository**, and **Entity type** values must exactly match the configuration in the GitHub workflow. Otherwise, Microsoft identity platform will look at the incoming external token and reject the exchange for an access token. You won't get an error; the exchange simply fails without one.
+### Entity type examples
+
+#### Branch example
+
+For a workflow triggered by a push or pull request event on the main branch:
+
+```yml
+on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+```
+
+Specify an **Entity type** of **Branch** and a **GitHub branch name** of "main".
+
+#### Environment example
+
+For jobs tied to an environment named "production":
+
+```yml
+on:
+ push:
+ branches:
+ - main
+
+jobs:
+ deployment:
+ runs-on: ubuntu-latest
+ environment: production
+ steps:
+ - name: deploy
+ # ...deployment-specific steps
+```
+
+Specify an **Entity type** of **Environment** and a **GitHub environment name** of "production".
+
+#### Tag example
+
+For a workflow triggered by a push to the tag named "v2":
+
+```yml
+on:
+ push:
+ # Sequence of patterns matched against refs/heads
+ branches:
+ - main
+ - 'mona/octocat'
+ - 'releases/**'
+ # Sequence of patterns matched against refs/tags
+ tags:
+ - v2
+ - v1.*
+```
+
+Specify an **Entity type** of **Tag** and a **GitHub tag name** of "v2".
+
+#### Pull request example
+
+For a workflow triggered by a pull request event, specify an **Entity type** of **Pull request**.
+ # [Microsoft Graph](#tab/microsoft-graph)

Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.
az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd
Before configuring your GitHub Actions workflow, get the *tenant-id* and *client-id* values of your app registration. You can find these values in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) and select your app registration. In **Overview**->**Essentials**, find the **Application (client) ID** and **Directory (tenant) ID**. Set these values in your GitHub environment to use in the Azure login action for your workflow.

## Next steps
-[Configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources.
+For an end-to-end example, read [Deploy to App Service using GitHub Actions](/azure/app-service/deploy-github-actions?tabs=openid).
Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 11/01/2021 Last updated : 01/28/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on December 15th, 2021.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on January 28th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Dynamics 365 - Additional Database Storage (Qualified Offer) | CRMSTORAGE | 328dc228-00bc-48c6-8b09-1fbc8bc3435d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMSTORAGE (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Storage Add-On (77866113-0f3e-4e6e-9666-b1e25c6f99b0) |
| Dynamics 365 - Additional Production Instance (Qualified Offer) | CRMINSTANCE | 9d776713-14cb-4697-a21d-9a52455c738a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMINSTANCE (eeea837a-c885-4167-b3d5-ddde30cbd85f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Instance (eeea837a-c885-4167-b3d5-ddde30cbd85f) |
| Dynamics 365 - Additional Non-Production Instance (Qualified Offer) | CRMTESTINSTANCE | e06abcc2-7ec5-4a79-b08b-d9c282376f72 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMTESTINSTANCE (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Additional Test Instance (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) |
+| Dynamics 365 AI for Market Insights (Preview) | SOCIAL_ENGAGEMENT_APP_USER | c6df1e30-1c9f-427f-907c-3d913474a1c7 | SOCIAL_ENGAGEMENT_APP_USER (339f4def-5ad8-4430-8d12-da5fd4c769a7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 AI for Market Insights – Free (339f4def-5ad8-4430-8d12-da5fd4c769a7)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Asset Management Addl Assets | DYN365_ASSETMANAGEMENT | 673afb9d-d85b-40c2-914e-7bf46cd5cd75 | D365_AssetforSCM (90467813-5b40-40d4-835c-abd48009b1d9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Asset Maintenance Add-in (90467813-5b40-40d4-835c-abd48009b1d9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Business Central Additional Environment Addon | DYN365_BUSCENTRAL_ADD_ENV_ADDON | a58f5506-b382-44d4-bfab-225b2fbf8390 | DYN365_BUSCENTRAL_ENVIRONMENT (d397d6c6-9664-4502-b71c-66f39c400ca4) | Dynamics 365 Business Central Additional Environment Addon (d397d6c6-9664-4502-b71c-66f39c400ca4) |
| Dynamics 365 Business Central Database Capacity | DYN365_BUSCENTRAL_DB_CAPACITY | 7d0d4f9a-2686-4cb8-814c-eff3fdab6d74 | DYN365_BUSCENTRAL_DB_CAPACITY (ae6b27b3-fe31-4e77-ae06-ec5fabbc103a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central Database Capacity (ae6b27b3-fe31-4e77-ae06-ec5fabbc103a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Dynamics 365 Business Central for IWs | PROJECT_MADEIRA_PREVIEW_IW_SKU | 6a4a1628-9b9a-424d-bed5-4118f0ede3fd | PROJECT_MADEIRA_PREVIEW_IW (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central for IWs (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Business Central Premium | DYN365_BUSCENTRAL_PREMIUM | f991cecc-3f91-4cd0-a9a8-bf1c8167e029 | DYN365_BUSCENTRAL_PREMIUM (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 Business Central Premium (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Dynamics 365 Customer Insights Viral | DYN365_CUSTOMER_INSIGHTS_VIRAL | 036c2481-aa8a-47cd-ab43-324f0c157c2d | CDS_CUSTOMER_INSIGHTS_TRIAL (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>DYN365_CUSTOMER_INSIGHTS_ENGAGEMENT_INSIGHTS_BASE_TRIAL (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>DYN365_CUSTOMER_INSIGHTS_VIRAL (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Forms_Pro_Customer_Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) | Common Data Service for Customer Insights Trial (94e5cbf6-d843-4ee8-a2ec-8b15eb52019e)<br/>Dynamics 365 Customer Insights Engagement Insights Viral (e2bdea63-235e-44c6-9f5e-5b0e783f07dd)<br/>Dynamics 365 Customer Insights Viral Plan (ed8e8769-94c5-4132-a3e7-7543b713d51f)<br/>Microsoft Dynamics 365 Customer Voice for Customer Insights (fe581650-cf61-4a09-8814-4bd77eca9cb5) |
+| Dynamics 365 Customer Service Enterprise Viral Trial | Dynamics_365_Customer_Service_Enterprise_viral_trial | 1e615a51-59db-4807-9957-aa83c3657351 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>DYN365_CS_MESSAGING_VIRAL_TRIAL (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>DYN365_CS_ENTERPRISE_VIRAL_TRIAL (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>DYNB365_CSI_VIRAL_TRIAL (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>DYN365_CS_VOICE_VIRAL_TRIAL (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Dynamics 365 Customer Service Digital Messaging vTrial (3bf52bdf-5226-4a97-829e-5cca9b3f3392)<br/>Dynamics 365 Customer Service Enterprise vTrial (94fb67d3-465f-4d1f-a50a-952da079a564)<br/>Dynamics 365 Customer Service Insights vTrial (33f1466e-63a6-464c-bf6a-d1787928a56a)<br/>Dynamics 365 Customer Service Voice vTrial (3de81e39-4ce1-47f7-a77f-8473d4eb6d7c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) |
| Dynamics 365 Customer Service Insights Trial | DYN365_AI_SERVICE_INSIGHTS | 61e6bd70-fbdb-4deb-82ea-912842f39431 | DYN365_AI_SERVICE_INSIGHTS (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) | Dynamics 365 AI for Customer Service Trial (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) |
| Dynamics 365 Customer Voice Trial | FORMS_PRO | bc946dac-7877-4271-b2f7-99d2db13cd2c | DYN365_CDS_FORMS_PRO (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>FORMS_PRO (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>Dynamics 365 Customer Voice (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) |
| Dynamics 365 Customer Service Professional | DYN365_CUSTOMER_SERVICE_PRO | 1439b6e2-5d59-4873-8c59-d60e2a196e92 | DYN365_CUSTOMER_SERVICE_PRO (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_CUSTOMER_SERVICE_PRO (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>FLOW_CUSTOMER_SERVICE_PRO (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Customer Service Pro (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Customer Service Pro (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>Power Automate for Customer Service Pro (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Dynamics 365 Customer Voice Additional Responses | DYN365_CUSTOMER_VOICE_ADDON | 65f71586-ade3-4ce1-afc0-1b452eaf3782 | CUSTOMER_VOICE_ADDON (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics Customer Voice Add-On (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Customer Voice USL | Forms_Pro_USL | e2ae107b-a571-426f-9367-6d4c8f1390ba | CDS_FORM_PRO_USL (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Forms_Pro_USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Microsoft Dynamics 365 Customer Voice USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) |
| Dynamics 365 Enterprise Edition - Additional Portal (Qualified Offer) | CRM_ONLINE_PORTAL | a4bfb28e-becc-41b0-a454-ac680dc258d3 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRM_ONLINE_PORTAL (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online - Portal Add-On (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) |
+| Dynamics 365 Field Service Viral Trial | Dynamics_365_Field_Service_Enterprise_viral_trial | 29fcd665-d8d1-4f34-8eed-3811e3fca7b3 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>DYN365_FS_ENTERPRISE_VIRAL_TRIAL (20d1455b-72b2-4725-8354-a177845ab77d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 Field Service Enterprise vTrial (20d1455b-72b2-4725-8354-a177845ab77d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) |
| Dynamics 365 Finance | DYN365_FINANCE | 55c9eb4e-c746-45b4-b255-9ab6b19d5c62 | DYN365_CDS_FINANCE (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>D365_Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba) | Common Data Service for Dynamics 365 Finance (e95d7060-d4d9-400a-a2bd-a244bf0b609e)<br/>Dynamics 365 for Finance and Operations, Enterprise edition - Regulatory Service (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 for Finance (9f0e1b4e-9b33-4300-b451-b2c662cd4ff7)<br/>Power Apps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>Power Automate for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba) |
| DYNAMICS 365 FOR CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR CUSTOMER SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
| DYNAMICS 365 FOR FINANCIALS BUSINESS EDITION | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) |
| DYNAMICS 365 FOR SALES AND CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
| DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
-| Dynamics 365 Sales Professional | D365_SALES_PRO | be9f9771-1c64-4618-9907-244325141096 | DYN365_SALES_PRO (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_SALES_PRO (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>FLOW_SALES_PRO (f944d685-f762-4371-806d-a1f48e5bea13)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Sales Professional (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Sales Pro (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>Power Automate for Sales Pro (f944d685-f762-4371-806d-a1f48e5bea13)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Dynamics 365 For Sales Professional | D365_SALES_PRO | be9f9771-1c64-4618-9907-244325141096 | DYN365_SALES_PRO (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_SALES_PRO (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>FLOW_SALES_PRO (f944d685-f762-4371-806d-a1f48e5bea13)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Sales Professional (88d83950-ff78-4e85-aa66-abfc787f8090)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Sales Pro (6f9f70ce-138d-49f8-bb8b-2e701b7dde75)<br/>Power Automate for Sales Pro (f944d685-f762-4371-806d-a1f48e5bea13)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Dynamics 365 For Sales Professional Trial | D365_SALES_PRO_IW | 9c7bff7a-3715-4da7-88d3-07f57f8d0fb6 | D365_SALES_PRO_IW (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>D365_SALES_PRO_IW_Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) | Dynamics 365 for Sales Professional Trial (73f205fc-6b15-47a5-967e-9e64fdf72d0a)<br/>Dynamics 365 for Sales Professional Trial (db39a47e-1f4f-462b-bf5b-2ec471fb7b88) |
| Dynamics 365 Sales Professional Attach to Qualifying Dynamics 365 Base Offer | D365_SALES_PRO_ATTACH | 245e6bf9-411e-481e-8611-5c08595e2988 | D365_SALES_PRO_ATTACH (065f3c64-0649-4ec7-9f47-ef5cf134c751)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Sales Pro Attach (065f3c64-0649-4ec7-9f47-ef5cf134c751)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT | DYN365_SCM | f2e48cb3-9da0-42cd-8464-4a54ce198ad0 | DYN365_CDS_SUPPLYCHAINMANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>D365_SCM (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | COMMON DATA SERVICE FOR DYNAMICS 365 SUPPLY CHAIN MANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYNAMICS 365 FOR FINANCE AND OPERATIONS, ENTERPRISE EDITION - REGULATORY SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Dynamics 365 for Talent | SKU_Dynamics_365_for_HCM_Trial | 3a256e9a-15b6-4092-b0dc-82993f4debc6 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Hiring_Free_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_for_HCM_Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Common Data Service (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics 365 for Talent: Attract (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics 365 for Talent: Onboard (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics 365 for HCM Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| DYNAMICS 365 FOR TEAM MEMBERS ENTERPRISE EDITION | DYN365_ENTERPRISE_TEAM_MEMBERS | 8e7a3d30-d97d-43ab-837c-d7701cef83dc | DYN365_Enterprise_Talent_Attract_TeamMember (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_Enterprise_Talent_Onboard_TeamMember (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYN365_ENTERPRISE_TEAM_MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>Dynamics_365_for_Retail_Team_members (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics_365_for_Talent_Team_members (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TEAM MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
| Dynamics 365 Guides | GUIDES_USER | 0a389a77-9850-4dc4-b600-bc66fdfefc60 | DYN365_CDS_GUIDES (1315ade1-0410-450d-b8e3-8050e6da320f)<br/>GUIDES (0b2c029c-dca0-454a-a336-887285d6ef07)<br/>POWERAPPS_GUIDES (816971f4-37c5-424a-b12b-b56881f402e7) | Common Data Service (1315ade1-0410-450d-b8e3-8050e6da320f)<br/>Dynamics 365 Guides (0b2c029c-dca0-454a-a336-887285d6ef07)<br/>Power Apps for Guides (816971f4-37c5-424a-b12b-b56881f402e7) |
+| Dynamics 365 Marketing Business Edition | DYN365_BUSINESS_MARKETING | 238e2f8d-e429-4035-94db-6926be4ffe7b | DYN365_BUSINESS_Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Marketing (393a0c96-9ba1-4af0-8975-fa2f853a25ac)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Operations - Device | Dynamics_365_for_Operations_Devices | 3bbd44ed-8a70-4c07-9088-6232ddbd5ddd | DYN365_RETAIL_DEVICE (ceb28005-d758-4df7-bb97-87a617b93d6c)<br/>Dynamics_365_for_OperationsDevices (2c9fb43e-915a-4d61-b6ca-058ece89fd66)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Retail Device (ceb28005-d758-4df7-bb97-87a617b93d6c)<br/>Dynamics 365 for Operations Devices (2c9fb43e-915a-4d61-b6ca-058ece89fd66)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Operations - Sandbox Tier 2:Standard Acceptance Testing | Dynamics_365_for_Operations_Sandbox_Tier2_SKU | e485d696-4c87-4aac-bf4a-91b2fb6f0fa7 | Dynamics_365_for_Operations_Sandbox_Tier2 (d8ba6fb2-c6b1-4f07-b7c8-5f2745e36b54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Operations non-production multi-box instance for standard acceptance testing (Tier 2) (d8ba6fb2-c6b1-4f07-b7c8-5f2745e36b54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Operations - Sandbox Tier 4:Standard Performance Testing | Dynamics_365_for_Operations_Sandbox_Tier4_SKU | f7ad4bca-7221-452c-bdb6-3e6089f25e06 | Dynamics_365_for_Operations_Sandbox_Tier4 (f6b5efb1-1813-426f-96d0-9b4f7438714f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Operations, Enterprise Edition - Sandbox Tier 4:Standard Performance Testing (f6b5efb1-1813-426f-96d0-9b4f7438714f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS | DYN365_ENTERPRISE_P1_IW | 338148b6-1b11-4102-afb9-f92b6cdc0f8d | DYN365_ENTERPRISE_P1_IW (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Regulatory Service - Enterprise Edition Trial | DYN365_REGULATORY_SERVICE | 7ed4877c-0863-4f69-9187-245487128d4f | DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Finance and Operations, Enterprise edition - Regulatory Service (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Dynamics 365 Remote Assist | MICROSOFT_REMOTE_ASSIST | 7a551360-26c4-4f61-84e6-ef715673e083 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) |
| Dynamics 365 Remote Assist HoloLens | MICROSOFT_REMOTE_ASSIST_HOLOLENS | e48328a2-8e98-4484-a70f-a99f8ac9ec89 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) |
| Dynamics 365 Sales Enterprise Attach to Qualifying Dynamics 365 Base Offer | D365_SALES_ENT_ATTACH | 5b22585d-1b71-4c6b-b6ec-160b1a9c2323 | D365_SALES_ENT_ATTACH (3ae52229-572e-414f-937c-ff35a87d4f29)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Sales Enterprise Attach (3ae52229-572e-414f-937c-ff35a87d4f29)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Sales Premium Viral Trial | Dynamics_365_Sales_Premium_Viral_Trial | 6ec92958-3cc1-49db-95bd-bc6b3798df71 | CUSTOMER_VOICE_DYN365_VIRAL_TRIAL (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>DYN365_SALES_ENTERPRISE_VIRAL_TRIAL (7f636c80-0961-41b2-94da-9642ccf02de0)<br/>DYN365_SALES_INSIGHTS_VIRAL_TRIAL (456747c0-cf1e-4b0d-940f-703a01b964cc)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_APPS_DYN365_VIRAL_TRIAL (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>POWER_AUTOMATE_DYN365_VIRAL_TRIAL (81d4ecb8-0481-42fb-8868-51536c5aceeb) | Customer Voice for Dynamics 365 vTrial (dbe07046-af68-4861-a20d-1c8cbda9194f)<br/>Dynamics 365 Sales Enterprise vTrial (7f636c80-0961-41b2-94da-9642ccf02de0)<br/>Dynamics 365 Sales Insights vTrial (456747c0-cf1e-4b0d-940f-703a01b964cc)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps for Dynamics 365 vTrial (54b37829-818e-4e3c-a08a-3ea66ab9b45d)<br/>Power Automate for Dynamics 365 vTrial (81d4ecb8-0481-42fb-8868-51536c5aceeb) |
+| Dynamics 365 Talent: Attract | Dynamics_365_Hiring_SKU | e561871f-74fa-4f02-abee-5b0ef54dd36d | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Hiring_Free_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics 365 for Talent: Attract (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 TALENT: ONBOARD | DYNAMICS_365_ONBOARDING_SKU | b56e7ccc-d5c7-421f-a23b-5c18bdbad7c0 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_Talent_Onboard (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | COMMON DATA SERVICE (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 E5 without Audio Conferencing | SPE_E5_NOPSTNCONF | cd2925a3-5076-4233-8931-638a8c94f773 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 F1 | M365_F1 | 44575883-256e-4a79-9da4-ebe9acabe2b2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SharePoint Online Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 F3 | SPE_F1 | 66b55226-6b4f-492c-910c-a3b7a3c9d993 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>WIN10_ENT_LOC_F1 (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Common Data Service for Teams_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Windows 10 Enterprise E3 (local only) (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 F5 Security + Compliance Add-on | SPE_F5_SECCOMP | 32b47245-eb31-44fc-b945-a8b1576c439f | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 
Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
| MICROSOFT FLOW FREE | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) | | MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) | | Microsoft 365 E5 Suite features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Intune Device | INTUNE_A_D | 2b317a4a-77a6-4188-9437-b68a77b4e2c6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | MICROSOFT INTUNE DEVICE FOR GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service – VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
+| Microsoft Power Apps for Developer | POWERAPPS_DEV | 5b631642-bd26-49fe-bd20-1daaa972ef80 | DYN365_CDS_DEV_VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DEV_VIRAL (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>POWERAPPS_DEV_VIRAL (a2729df7-25f8-4e63-984b-8a8484121554) | Common Data Service - DEV VIRAL (d8c638e2-9508-40e3-9877-feb87603837b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Developer (c7ce3f26-564d-4d3a-878d-d8ab868c85fe)<br/>PowerApps for Developer (a2729df7-25f8-4e63-984b-8a8484121554) |
| MICROSOFT POWER AUTOMATE PLAN 2 | FLOW_P2 | 4755df59-3f73-41ab-a249-596ad72b5504 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | | MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 365 Business 2 vCPU, 4 GB, 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) | | Windows 365 Business 4 vCPU, 16 GB, 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | | Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
+| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB | CPC_E_2C_8GB_128GB | e2aebe6c-897d-480f-9d62-fff1381581f7 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
+| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (Preview) | CPC_LVL_2 | 461cb62c-6db7-41aa-bf3c-ce78236cdb9e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) |
+| Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (Preview) | CPC_LVL_3 | bbb4bf6e-3e12-4343-84a1-54d160c00f40 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) |
| WINDOWS STORE FOR BUSINESS | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
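To cross-check the string IDs and GUIDs in these tables against what a tenant actually owns, the SKU and service plan data can be pulled programmatically. The following is a minimal sketch using the AzureAD PowerShell module (assuming the module is installed and the signed-in account can read directory data):

```powershell
# Sign in to the tenant (prompts for credentials).
Connect-AzureAD

# List each purchased product SKU (string ID and GUID), then the
# service plans included in that SKU, mirroring the table columns above.
Get-AzureADSubscribedSku | ForEach-Object {
    Write-Output ("{0} ({1})" -f $_.SkuPartNumber, $_.SkuId)
    foreach ($plan in $_.ServicePlans) {
        Write-Output ("    {0} ({1})" -f $plan.ServicePlanName, $plan.ServicePlanId)
    }
}
```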
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Last updated 10/23/2021 + #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell # Assign users and groups to an application
-This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's My Apps portal for easy access. If the application exposes roles, you can also assign a specific role to the user.
+This article shows you how to assign users and groups to an enterprise application in Azure Active Directory (Azure AD) using PowerShell. When you assign a user to an application, the application appears in the user's [My Apps](https://myapps.microsoft.com/) portal for easy access. If the application exposes roles, you can also assign a specific role to the user.
+
+When you assign a group to an application, only users in the group will have access. The assignment does not cascade to nested groups.
+
+Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups are not currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
+
+For greater control, certain types of enterprise applications can be configured to require user assignment. See [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app) for more information on requiring user assignment for an app.
## Prerequisites To assign users to an app using PowerShell, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. - The AzureAD module installed (use the command `Install-Module -Name AzureAD`). If you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER. - Azure Active Directory Premium P1 or P2 for group-based assignment. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory).
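With those prerequisites in place, the assignment itself can be scripted. This is a minimal sketch using the AzureAD module; the user principal name and application display name are hypothetical placeholders:

```powershell
# Sign in with an account that holds one of the roles listed above.
Connect-AzureAD

# Look up the user and the enterprise application's service principal.
# "B.Simon@contoso.com" and "My Sample App" are hypothetical values.
$user = Get-AzureADUser -ObjectId "B.Simon@contoso.com"
$sp   = Get-AzureADServicePrincipal -Filter "displayName eq 'My Sample App'"

# Assign the user to the application. If the app exposes roles, use the
# Id of an entry from $sp.AppRoles; an all-zero GUID grants default access.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId `
    -Id ([Guid]::Empty)
```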
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-linked-sign-on.md
# Add linked single sign-on to an application
-This article shows you how to configure linked-based single sign-on (SSO) for your application in Azure Active Directory (Azure AD). Linked-based SSO enables Azure AD to provide single SSO to an application that is already configured for SSO in another service. The linked option lets you configure the target location when a user selects the application in your organization's My Apps or Microsoft 365 portal.
+This article shows you how to configure linked-based single sign-on (SSO) for your application in Azure Active Directory (Azure AD). Linked-based SSO enables Azure AD to provide SSO to an application that is already configured for SSO in another service. The linked option lets you configure the target location when a user selects the application in your organization's My Apps or Microsoft 365 portal.
Linked-based SSO doesn't provide sign-on functionality through Azure AD. The option simply sets the location that users are sent when they select the application on the My Apps or Microsoft 365 portal.
To configure linked-based SSO in your Azure AD tenant, you need:
## Next steps -- [Manage access to apps](what-is-access-management.md)
+- [Manage access to apps](what-is-access-management.md)
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Last updated 10/12/2020-+ -+
-# Configure F5 BIG-IP SSL-VPN solution in Azure AD
+# Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO
-In this tutorial, learn how to configure F5ΓÇÖs BIG-IP based Secure socket layer Virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA).
+In this tutorial, learn how to integrate F5's BIG-IP based Secure Socket Layer Virtual Private Network (SSL-VPN) with Azure Active Directory (AD) for Secure Hybrid Access (SHA).
-Configuring a BIG-IP SSL-VPN with Azure AD provides [many key benefits](f5-aad-integration.md), including:
--- Improved Zero trust governance through [Azure AD pre-authentication and authorization](../../app-service/overview-authentication-authorization.md)
+Enabling a BIG-IP SSL-VPN for Azure AD single sign-on (SSO) provides many benefits, including:
+- Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
- [Password-less authentication to the VPN service](https://www.microsoft.com/security/business/identity/passwordless)
+- Manage identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
-- Manage Identities and access from a single control plane - The [Azure portal](https://portal.azure.com/#home)
+To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
-Despite these great value adds, the classic VPN does however remain predicated on the notion of a network perimeter, where trusted is on the inside and untrusted the outside. This model is no longer effective in achieving a true Zero Trust posture, since corporate assets are no longer confined to the walls of an enterprise data center, but rather across multi-cloud environments with no fixed boundaries. For this reason, we encourage our customers to consider moving to a more Identity driven approach at managing [access on a per application basis](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md).
+Despite these great value adds, classic VPNs remain network oriented, often providing little to no fine-grained access to corporate applications. For this reason, we encourage moving to a more identity-centric approach to achieving Zero Trust [access on a per-application basis](/fundamentals/five-steps-to-full-application-integration-with-azure-ad).
## Scenario description
-In this scenario, the BIG-IP APM instance of the SSL-VPN service will be configured as a SAML Service Provider (SP) and Azure AD becomes the trusted SAML IDP, providing pre-authentication. Single sign-on (SSO) from Azure AD is then provided through claims-based authentication to the BIG-IP APM, providing a seamless VPN access experience.
+In this scenario, the BIG-IP APM instance of the SSL-VPN service will be configured as a SAML Service Provider (SP) and Azure AD becomes the trusted SAML IDP. SSO from Azure AD is then provided through claims-based authentication to the BIG-IP APM, providing a seamless VPN access experience.
![Image shows ssl-vpn architecture](media/f5-sso-vpn/ssl-vpn-architecture.png)
Prior experience or knowledge of F5 BIG-IP isn't necessary, however, you'll need
- The BIG-IP should be provisioned with the necessary SSL certificates for publishing services over HTTPS.
-Familiarizing yourself with [F5 BIG-IP terminology](https://www.f5.com/services/resources/glossary) will also help understand the various components that are referenced throughout the tutorial.
+Familiarizing yourself with [F5 BIG-IP terminology](https://www.f5.com/services/resources/glossary) will also help you understand the various components referenced throughout the tutorial.
>[!NOTE] >Azure is constantly evolving, so don't be surprised if you find any nuances between the instructions in this guide and what you see in the Azure portal. Screenshots are from BIG-IP v15; however, the steps remain relatively similar from v13.1.
Setting up a SAML federation trust between the BIG-IP allows the Azure AD BIG-IP
- For the Logout URL enter the BIG-IP APM Single logout (SLO) endpoint pre-pended by the host header of the service being published. For example, `https://ssl-vpn.contoso.com/saml/sp/profile/redirect/slr`
- Providing an SLO URL ensures a user session is terminated at both ends, the BIG-IP and Azure AD, after the user signs out. BIG-IP APM also provides an [option](https://support.f5.com/csp/article/K12056) for terminating all sessions when calling a specific application URL.
+Providing an SLO URL ensures a user session is terminated at both ends, the BIG-IP and Azure AD, after the user signs out. BIG-IP APM also provides an [option](https://support.f5.com/csp/article/K12056) for terminating all sessions when calling a specific application URL.
![Image shows basic saml configuration](media/f5-sso-vpn/basic-saml-configuration.png)
The following section creates the BIG-IP SAML service provider and corresponding
![Image shows creating new SAML SP service](media/f5-sso-vpn/create-new-saml-sp.png)
- SP **Name** settings are only required if the entity ID isn't an exact match of the hostname portion of the published URL, or if it isn't in regular hostname-based URL format. Provide the external scheme and hostname of the application being published if entity ID is `urn:ssl-vpn:contosoonline`.
+SP **Name** settings are only required if the entity ID isn't an exact match of the hostname portion of the published URL, or if it isn't in regular hostname-based URL format. Provide the external scheme and hostname of the application being published if entity ID is `urn:ssl-vpn:contosoonline`.
3. Scroll down to select the new **SAML SP object** and select **Bind/UnBind IDP Connectors**.
With all the settings in place, the APM now requires a front-end virtual server
8. Your SSL-VPN service is now published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
-## Additional resources
--- [The end of passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)--- [What is Conditional Access?](../conditional-access/overview.md)--- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)--- [Five steps to full application integration with Azure AD](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md) ## Next steps
Open a browser on a remote Windows client and browse to the URL of the **BIG-IP
Selecting the VPN tile will install the BIG-IP Edge client and establish a VPN connection configured for SHA. The F5 VPN application should also be visible as a target resource in Azure AD Conditional Access. See our [guidance](../conditional-access/concept-conditional-access-policies.md) for building Conditional Access policies and also enabling users for Azure AD [password-less authentication](https://www.microsoft.com/security/business/identity/passwordless).++
+## Additional resources
+
+- [The end of passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
+
+- [Five steps to full application integration with Azure AD](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md)
+
+- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
In this article, you'll learn how to configure F5's BIG-IP Access Policy Manager (APM) and Azure Active Directory (Azure AD) for secure hybrid access to form-based applications.
-Configuring BIG-IP published applications with Azure AD provides many benefits, including:
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-- Improved Zero Trust governance through Azure AD pre-authentication and authorization
+- Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
- Full single sign-on (SSO) between Azure AD and BIG-IP published services-- Identities and access are managed from a single control plane, the Azure portal
+- Identities and access are managed from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
To learn about all the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md) and [What is application access and single sign-on with Azure AD?](../active-directory-appssoaccess-whatis.md).
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with sin
Configuring BIG-IP published applications with Azure AD provides many benefits, including: -- Improved Zero trust governance through Azure AD pre-authentication and authorization
+- Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
- Full Single sign-on (SSO) between Azure AD and BIG-IP published services. -- Manage identities and access from a single control plane, The [Azure portal](https://azure.microsoft.com/features/azure-portal)
+- Manage identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with sin
Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
- * Improved Zero Trust governance through Azure AD pre-authentication and authorization
+ * Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
* Full SSO between Azure AD and BIG-IP published services
- * Manage Identities and access from a single control plane, [the Azure portal](https://portal.azure.com/)
+ * Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
Our backend application sits on HTTP port 80 but obviously switch to 443 if your
Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO, the latter of which we'll enable to configure the following.

* **Header Operation:** Insert
* **Header Name:** upn
* **Header Value:** %{session.saml.last.identity}

* **Header Operation:** Insert
* **Header Name:** employeeid
* **Header Value:** %{session.saml.last.attr.name.employeeid}

![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
In this tutorial, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
-Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-* Improved Zero Trust governance through Azure AD pre-authentication and authorization.
+* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
* Full SSO between Azure AD and BIG-IP published services.
-* Management of identities and access from a single control plane, the [Azure portal](https://portal.azure.com/).
+* Management of identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* Improved Zero Trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
* Full SSO between Azure AD and BIG-IP published services
-* Manage identities and access from a single control plane, [The Azure portal](https://portal.azure.com/)
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-* Improved Zero Trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/conditional-access/overview)
* Full SSO between Azure AD and BIG-IP published services
active-directory What Is Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-single-sign-on.md
Last updated 11/18/2021 -+ # Customer intent: As an IT admin, I need to learn about single sign-on and my applications in Azure Active Directory.
Choosing an SSO method depends on how the application is configured for authenti
- You're testing other aspects of the application - An on-premises application doesn't require users to authenticate, but you want them to. With SSO disabled, the user needs to authenticate.
- If you configured the application for SP-initiated SAML-based SSO and you change the SSO mode to disabled, it won't stop users from signing in to the application outside the MyApps portal. To achieve this, you need to [disable the ability for users to sign in](disable-user-sign-in-portal.md).
+ If you configured the application for SP-initiated SAML-based SSO and you change the SSO mode to disabled, it won't stop users from signing in to the application outside the MyApps portal. To achieve this, you need to disable the ability for users to sign in.
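As a hedged sketch of that step with the AzureAD PowerShell module, disabling sign-in amounts to disabling the application's service principal (the display name below is a hypothetical placeholder):

```powershell
# Find the enterprise application's service principal by display name
# ("My Sample App" is a hypothetical value) and block user sign-in.
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'My Sample App'"
Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -AccountEnabled $false
```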
## Plan SSO deployment
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
ms.devlang: Previously updated : 08/26/2021 Last updated : 01/25/2022
# What are managed identities for Azure resources?
-A common challenge for developers is the management of secrets and credentials used to secure communication between different components making up a solution. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like [Azure Key Vault](../../key-vault/general/overview.md) where developers can store credentials in a secure manner or to access storage accounts.
+A common challenge for developers is the management of secrets and credentials used to secure communication between different components making up a solution. Managed identities eliminate the need for developers to manage credentials.
-Take a look at how you can use managed identities</br>
+Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. With [Azure Key Vault](../../key-vault/general/overview.md), developers can use managed identities to access resources. Key Vault stores credentials in a secure manner and gives access to storage accounts.
+
+The following video shows how you can use managed identities:</br>
> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
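For a workload on an Azure VM, obtaining one of these Azure AD tokens is a plain HTTP call to the Azure Instance Metadata Service. The following is a minimal sketch, run from inside a VM with a managed identity enabled; Azure Resource Manager is used as an example target resource:

```powershell
# Request an access token from the Instance Metadata Service (IMDS).
# This endpoint is only reachable from within the Azure VM itself.
$response = Invoke-RestMethod -Method GET `
    -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"

# Use the bearer token against any service that accepts Azure AD tokens.
$token = $response.access_token
```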
-Here are some of the benefits of using Managed identities:
+Here are some of the benefits of using managed identities:
-- You don't need to manage credentials. Credentials are not even accessible to you.-- You can use managed identities to authenticate to any resource that supports [Azure Active Directory authentication](../authentication/overview-authentication.md) including your own applications.
+- You don't need to manage credentials. Credentials aren't even accessible to you.
+- You can use managed identities to authenticate to any resource that supports [Azure AD authentication](../authentication/overview-authentication.md), including your own applications.
- Managed identities can be used without any additional cost. > [!NOTE]
Here are some of the benefits of using Managed identities:
There are two types of managed identities: -- **System-assigned** Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity an identity is created in Azure AD that is tied to the lifecycle of that service instance. So when the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.-- **User-assigned** You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it. </br></br>
+- **System-assigned**. Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Azure AD. The identity is tied to the lifecycle of that service instance. When the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.
+- **User-assigned**. You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more instances of an Azure service. For user-assigned managed identities, the identity is managed separately from the resources that use it. </br></br>
-The table below shows the differences between the two types of managed identities.
+The following table shows the differences between the two types of managed identities:
| Property | System-assigned managed identity | User-assigned managed identity | ||-|--|
-| Creation | Created as part of an Azure resource (for example, an Azure virtual machine or Azure App Service) | Created as a stand-alone Azure resource |
+| Creation | Created as part of an Azure resource (for example, Azure Virtual Machines or Azure App Service). | Created as a stand-alone Azure resource. |
| Life cycle | Shared life cycle with the Azure resource that the managed identity is created with. <br/> When the parent resource is deleted, the managed identity is deleted as well. | Independent life cycle. <br/> Must be explicitly deleted. |
-| Sharing across Azure resources | Cannot be shared. <br/> It can only be associated with a single Azure resource. | Can be shared <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
-| Common use cases | Workloads that are contained within a single Azure resource <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine | Workloads that run on multiple resources and which can share a single identity. <br/> Workloads that need pre-authorization to a secure resource as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource |
+| Sharing across Azure resources | Can't be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
+| Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and can share a single identity. <br/> Workloads that need pre-authorization to a secure resource, as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource. |
> [!IMPORTANT]
-> Regardless of the type of identity chosen a managed identity is a service principal of a special type that may only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed.
+> Regardless of the type of identity chosen, a managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed.
+
+<br/>
## How can I use managed identities for Azure resources?
-![some examples of how a developer may use managed identities to get access to resources from their code without managing authentication information](media/overview/when-use-managed-identities.png)
+[![This flowchart shows examples of how a developer may use managed identities to get access to resources from their code without managing authentication information.](media/overview/when-use-managed-identities.png)](media/overview/when-use-managed-identities.png#lightbox)
## What Azure services support the feature?<a name="which-azure-services-support-managed-identity"></a>
-Managed identities for Azure resources can be used to authenticate to services that support Azure AD authentication. For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](./services-support-managed-identities.md).
+Managed identities for Azure resources can be used to authenticate to services that support Azure AD authentication. For a list of supported Azure services, see [services that support managed identities for Azure resources](./services-support-managed-identities.md).
## Which operations can I perform using managed identities? Resources that support system assigned managed identities allow you to: - Enable or disable managed identities at the resource level.-- Use RBAC roles to [grant permissions](howto-assign-access-portal.md).-- View create, read, update, delete (CRUD) operations in [Azure Activity logs](../../azure-monitor/essentials/activity-log.md).-- View sign-in activity in Azure AD [sign-in logs](../reports-monitoring/concept-sign-ins.md).
+- Use role-based access control (RBAC) to [grant permissions](howto-assign-access-portal.md).
+- View the create, read, update, and delete (CRUD) operations in [Azure Activity logs](../../azure-monitor/essentials/activity-log.md).
+- View sign in activity in Azure AD [sign in logs](../reports-monitoring/concept-sign-ins.md).
If you choose a user assigned managed identity instead: -- You can [create, read, update, delete](how-to-manage-ua-identity-portal.md) the identities.
+- You can [create, read, update, and delete](how-to-manage-ua-identity-portal.md) the identities.
- You can use RBAC role assignments to [grant permissions](howto-assign-access-portal.md). - User assigned managed identities can be used on more than one resource. - CRUD operations are available for review in [Azure Activity logs](../../azure-monitor/essentials/activity-log.md).-- View sign-in activity in Azure AD [sign-in logs](../reports-monitoring/concept-sign-ins.md).
+- View sign in activity in Azure AD [sign in logs](../reports-monitoring/concept-sign-ins.md).
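As a brief sketch of these operations for a user-assigned identity, using the Az PowerShell modules (assuming Az.ManagedServiceIdentity and Az.Resources are installed; the resource group and identity names are hypothetical):

```powershell
# Create a stand-alone user-assigned managed identity.
$identity = New-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" `
    -Name "myUserAssignedIdentity"

# Grant the identity Reader access over the resource group so any resource
# configured with it can read resources there. Propagation of the new
# service principal can take a short while before this succeeds.
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "myResourceGroup"
```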
-Operations on managed identities may be performed by using an Azure Resource Manager (ARM) template, the Azure portal, the Azure CLI, PowerShell, and REST APIs.
+Operations on managed identities can be performed by using an Azure Resource Manager template, the Azure portal, Azure CLI, PowerShell, and REST APIs.
## Next steps
Operations on managed identities may be performed by using an Azure Resource Man
* [Use a Linux VM system-assigned managed identity to access Resource Manager](tutorial-linux-vm-access-arm.md) * [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md)
-* [Implementing Managed Identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing).
+* [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
active-directory Qs Configure Portal Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
-In this article, using PowerShell, you learn how to perform the following managed identities for Azure resources operations on a virtual machine scale set:
+In this article, using the Azure portal, you learn how to perform the following managed identities for Azure resources operations on a virtual machine scale set:
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
active-directory Balsamiq Wireframes Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/balsamiq-wireframes-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Balsamiq Wireframes'
+description: Learn how to configure single sign-on between Azure Active Directory and Balsamiq Wireframes.
++++++++ Last updated : 01/20/2022++++
+# Tutorial: Azure AD SSO integration with Balsamiq Wireframes
+
+In this tutorial, you'll learn how to integrate Balsamiq Wireframes with Azure Active Directory (Azure AD). When you integrate Balsamiq Wireframes with Azure AD, you can:
+
+* Control in Azure AD who has access to Balsamiq Wireframes.
+* Enable your users to be automatically signed-in to Balsamiq Wireframes with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Balsamiq Wireframes single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Balsamiq Wireframes supports **SP and IDP** initiated SSO.
+* Balsamiq Wireframes supports **Just In Time** user provisioning.
+
+## Add Balsamiq Wireframes from the gallery
+
+To configure the integration of Balsamiq Wireframes into Azure AD, you need to add Balsamiq Wireframes from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Balsamiq Wireframes** in the search box.
+1. Select **Balsamiq Wireframes** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Balsamiq Wireframes
+
+Configure and test Azure AD SSO with Balsamiq Wireframes using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Balsamiq Wireframes.
+
+To configure and test Azure AD SSO with Balsamiq Wireframes, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Balsamiq Wireframes SSO](#configure-balsamiq-wireframes-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Balsamiq Wireframes test user](#create-balsamiq-wireframes-test-user)** - to have a counterpart of B.Simon in Balsamiq Wireframes that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Balsamiq Wireframes** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://balsamiq.cloud/samlsso/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://balsamiq.cloud/samlsso/<ID>`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://balsamiq.cloud/samlsso/<ID>`
+
+ d. In the **Relay State** text box, type a URL using the following pattern:
+ `https://balsamiq.cloud/<ID>/projects`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Balsamiq Wireframes Client support team](mailto:support@balsamiq.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Balsamiq Wireframes application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows list of attributes.](common/default-attributes.png)
+
+1. In addition to the above, the Balsamiq Wireframes application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -| |
+ | Email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Balsamiq Wireframes** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
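If you prefer scripting this step, the same test user can be created with the AzureAD PowerShell module; a minimal sketch, where the password and UPN domain are hypothetical placeholders:

```powershell
# Build a password profile for the new test user.
$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$PasswordProfile.Password = "<choose-a-strong-password>"

# Create B.Simon ("contoso.com" is a hypothetical domain).
New-AzureADUser -DisplayName "B.Simon" `
    -UserPrincipalName "B.Simon@contoso.com" `
    -MailNickName "BSimon" `
    -AccountEnabled $true `
    -PasswordProfile $PasswordProfile
```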
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Balsamiq Wireframes.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Balsamiq Wireframes**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Balsamiq Wireframes SSO
+
+1. Log in to your Balsamiq Wireframes company site as an administrator.
+
+1. Go to **Settings** > **Space Settings** and click **Configure SSO** under Single Sign-On Authentication.
+
+ ![Screenshot shows the SSO Settings.](./media/balsamiq-wireframes-tutorial/settings.png "SSO Settings")
+
+1. Copy all the required values and paste them into the **Basic SAML Configuration** section in the Azure portal, and then click **Next**.
+
+ ![Screenshot shows the Service Provider Details.](./media/balsamiq-wireframes-tutorial/details.png "Service Provider Details")
+
+1. In the **Configure IDp** section, perform the following steps:
+
+ ![Screenshot shows the IDP Metadata.](./media/balsamiq-wireframes-tutorial/certificate.png "IDP Metadata")
+
+ 1. In the **SAML 2.0 Endpoint(HTTP)** textbox, paste the value of **Login URL**, which you have copied from the Azure portal.
+
+ 1. In the **Identity Provider Issuer** textbox, paste the value of **Azure AD Identifier**, which you have copied from the Azure portal.
+
+    1. Open the downloaded **Federation Metadata XML** file from the Azure portal and **Upload** the file into the **Public Certificate** section.
+
+ 1. Click **Next**.
+
+ > [!Note]
+ > If you have an IdP Metadata file to upload, the fields will be automatically populated.
+
+1. To verify your SAML configuration, click the **Test SAML Login** button, and then click **Next**.
+
+ ![Screenshot shows the SAML configuration.](./media/balsamiq-wireframes-tutorial/configuration.png "SAML Login")
+
+1. After the test configuration succeeds, click **Turn on SAML SSO Now**.
+
+ ![Screenshot shows the Test SAML.](./media/balsamiq-wireframes-tutorial/testing.png "Test SAML")
+
+### Create Balsamiq Wireframes test user
+
+In this section, a user called Britta Simon is created in Balsamiq Wireframes. Balsamiq Wireframes supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Balsamiq Wireframes, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Balsamiq Wireframes Sign-on URL, where you can initiate the login flow.
+
+* Go to the Balsamiq Wireframes Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Balsamiq Wireframes instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Balsamiq Wireframes tile in My Apps, if the application is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Balsamiq Wireframes instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Balsamiq Wireframes, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-auto-repair.md
If AKS finds multiple unhealthy nodes during a health check, each node is repair
## Limitations
-In many cases, AKS can determine if a node is unhealthy and attempt to repair the issue, but there are cases where AKS either can't repair the issue or can't detect that there is an issue. For example, AKS can't detect issues if a node status is not being reported due to error in network configuration.
+In many cases, AKS can determine if a node is unhealthy and attempt to repair the issue, but there are cases where AKS either can't repair the issue or can't detect that there is an issue. For example, AKS can't detect issues if a node's status isn't being reported due to an error in network configuration, or if a node failed to initially register as a healthy node.
## Next steps
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-gateway-install.md
description: Learn how to install and configure an On-premises data gateway to c
Previously updated : 11/17/2021 Last updated : 01/31/2022
To learn more about how Azure Analysis Services works with the gateway, see [Con
**Minimum Requirements:**
-* .NET 4.5 Framework
+* .NET 4.8 Framework
* 64-bit version of Windows 8 / Windows Server 2012 R2 (or later) **Recommended:**
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
description: Learn how to deploy a C# ASP.NET app to Azure and to Azure SQL Data
ms.assetid: 03c584f1-a93c-4e3d-ac1b-c82b50c75d3e ms.devlang: csharp Previously updated : 11/08/2021 Last updated : 01/27/2022
In this tutorial, you learn how to:
> * Deploy the app to Azure > * Update the data model and redeploy the app > * Stream logs from Azure to your terminal
-> * Manage the app in the Azure portal
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
In this tutorial, you learn how to:
To complete this tutorial:
-Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2022</a> with the **ASP.NET and web development** workload.
+Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2022</a> with the **ASP.NET and web development** and **Azure development** workloads.
If you've installed Visual Studio already, add the workloads in Visual Studio by clicking **Tools** > **Get Tools and Features**.
Before creating a database, you need a [logical SQL server](../azure-sql/databas
#### Deploy your ASP.NET app
-1. In the **Publish** tab scroll back up to the top and click **Publish**. Once your ASP.NET app is deployed to Azure. Your default browser is launched with the URL to the deployed app.
+1. In the **Publish** tab, scroll back up to the top and click **Publish**. Once your ASP.NET app is deployed to Azure, your default browser launches with the URL of the deployed app.
1. Add a few to-do items.
Before creating a database, you need a [logical SQL server](../azure-sql/databas
## Access the database locally
-Visual Studio lets you explore and manage your new database in Azure easily in the **SQL Server Object Explorer**. The new database already opened its firewall to the App Service app that you created, but to access it from your local computer (such as from Visual Studio), you must open a firewall for your local machine's public IP address. If your internet service provider changes your public IP address, you need to reconfigure the firewall to access the Azure database again.
+Visual Studio lets you explore and manage your new database in Azure easily in the **SQL Server Object Explorer**. The new database already opened its firewall to the App Service app that you created. But to access it from your local computer (such as from Visual Studio), you must open a firewall for your local machine's public IP address. If your internet service provider changes your public IP address, you need to reconfigure the firewall to access the Azure database again.
#### Create a database connection
Each action starts with a `Trace.WriteLine()` method. This code is added to show
#### Enable log streaming
-1. From the **View** menu, select **Cloud Explorer**.
+1. In the publish page, scroll down to the **Hosting** section.
-1. In **Cloud Explorer**, expand the Azure subscription that has your app and expand **App Service**.
-
-1. Right-click your Azure app and select **View Streaming Logs**.
+1. In the right-hand corner, click **...** > **View Streaming Logs**.
![Enable log streaming](./media/app-service-web-tutorial-dotnet-sqldatabase/stream-logs.png)
Each action starts with a `Trace.WriteLine()` method. This code is added to show
#### Change trace levels
-1. To change the trace levels to output other trace messages, go back to **Cloud Explorer**.
+1. To change the trace levels to output other trace messages, go back to the publish page.
-1. Right-click your app again and select **Open in Portal**.
+1. In the **Hosting** section, click **...** > **Open in Azure portal**.
1. In the portal management page for your app, from the left menu, select **App Service logs**. 1. Under **Application Logging (File System)**, select **Verbose** in **Level**. Click **Save**.
- ![Change trace level to Verbose](./media/app-service-web-tutorial-dotnet-sqldatabase/trace-level-verbose.png)
- > [!TIP] > You can experiment with different trace levels to see what types of messages are displayed for each level. For example, the **Information** level includes all logs created by `Trace.TraceInformation()`, `Trace.TraceWarning()`, and `Trace.TraceError()`, but not logs created by `Trace.WriteLine()`.
To stop the log-streaming service, click the **Stop monitoring** button in the *
![Stop log streaming](./media/app-service-web-tutorial-dotnet-sqldatabase/stop-streaming.png)
-## Manage your Azure app
-
-Go to the [Azure portal](https://portal.azure.com) to manage the web app. Search for and select **App Services**.
-
-![Search for Azure App Services](./media/app-service-web-tutorial-dotnet-sqldatabase/azure-portal-navigate-app-services.png)
-
-Select the name of your Azure app.
-
-![Portal navigation to Azure app](./media/app-service-web-tutorial-dotnet-sqldatabase/access-portal.png)
-
-You have landed in your app's page.
-
-By default, the portal shows the **Overview** page. This page gives you a view of how your app is doing. Here, you can also perform basic management tasks like browse, stop, start, restart, and delete. The tabs on the left side of the page show the different configuration pages you can open.
-
-![App Service page in Azure portal](./media/app-service-web-tutorial-dotnet-sqldatabase/web-app-blade.png)
- [!INCLUDE [Clean up section](../../includes/clean-up-section-portal-web-app.md)] ## Next steps
In this tutorial, you learned how to:
> * Deploy the app to Azure > * Update the data model and redeploy the app > * Stream logs from Azure to your terminal
-> * Manage the app in the Azure portal
Advance to the next tutorial to learn how to easily improve the security of your connection to Azure SQL Database. > [!div class="nextstepaction"]
-> [Access SQL Database securely using managed identities for Azure resources](tutorial-connect-msi-sql-database.md)
+> [Tutorial: Connect to SQL Database from App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md)
More resources:
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
The multi-tenant version of Azure App Service contains numerous features to enab
## Feature differences
-Compared to earlier versions of the App Service Environment, there are some differences with App Service Environment v3. With App Service Environment v3:
+Compared to earlier versions of the App Service Environment, there are some differences with App Service Environment v3:
- There are no networking dependencies in the customer virtual network. You can secure all inbound and outbound traffic as desired. Outbound traffic can also be routed as desired. - You can deploy it enabled for zone redundancy. Zone redundancy can only be set during creation and only in regions where all App Service Environment v3 dependencies are zone redundant.
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-managed-identity.md
Title: Managed identities description: Learn how managed identities work in Azure App Service and Azure Functions, how to configure a managed identity and generate a token for a back-end resource.- Previously updated : 05/27/2020-- Last updated : 01/27/2022+ # How to use managed identities for App Service and Azure Functions
-This topic shows you how to create a managed identity for App Service and Azure Functions applications and how to use it to access other resources.
+This article shows you how to create a managed identity for App Service and Azure Functions applications and how to use it to access other resources.
-> [!Important]
-> Managed identities for App Service and Azure Functions won't behave as expected if your app is migrated across subscriptions/tenants. The app needs to obtain a new identity, which is done by disabling and re-enabling the feature. See [Removing an identity](#remove) below. Downstream resources also need to have access policies updated to use the new identity.
+> [!IMPORTANT]
+> Managed identities for App Service and Azure Functions won't behave as expected if your app is migrated across subscriptions/tenants. The app needs to obtain a new identity, which is done by [disabling](#remove) and re-enabling the feature. Downstream resources also need to have access policies updated to use the new identity.
> [!NOTE] > Managed identities are not available for [apps deployed in Azure Arc](overview-arc-integration.md).
This topic shows you how to create a managed identity for App Service and Azure
## Add a system-assigned identity
-Creating an app with a system-assigned identity requires an additional property to be set on the application.
+# [Azure portal](#tab/portal)
-### Using the Azure portal
+1. In the left navigation of your app's page, scroll down to the **Settings** group.
-To set up a managed identity in the portal, you will first create an application as normal and then enable the feature.
+1. Select **Identity**.
-1. Create an app in the portal as you normally would. Navigate to it in the portal.
-
-2. If using a function app, navigate to **Platform features**. For other app types, scroll down to the **Settings** group in the left navigation.
-
-3. Select **Identity**.
-
-4. Within the **System assigned** tab, switch **Status** to **On**. Click **Save**.
+1. Within the **System assigned** tab, switch **Status** to **On**. Click **Save**.
![Screenshot that shows where to switch Status to On and then select Save.](media/app-service-managed-service-identity/system-assigned-managed-identity-in-azure-portal.png)
To set up a managed identity in the portal, you will first create an application
> To find the managed identity for your web app or slot app in the Azure portal, under **Enterprise applications**, look in the **User settings** section. Usually, the slot name is similar to `<app name>/slots/<slot name>`.
-### Using the Azure CLI
-
-To set up a managed identity using the Azure CLI, you will need to use the `az webapp identity assign` command against an existing application. You have three options for running the examples in this section:
--- Use [Azure Cloud Shell](../cloud-shell/overview.md) from the Azure portal.-- Use the embedded Azure Cloud Shell via the "Try It" button, located in the top-right corner of each code block below.-- [Install the latest version of Azure CLI](/cli/azure/install-azure-cli) (2.0.31 or later) if you prefer to use a local CLI console. -
-The following steps will walk you through creating a web app and assigning it an identity using the CLI:
-
-1. If you're using the Azure CLI in a local console, first sign in to Azure using [az login](/cli/azure/reference-index#az_login). Use an account that's associated with the Azure subscription under which you would like to deploy the application:
-
- ```azurecli-interactive
- az login
- ```
-
-2. Create a web application using the CLI. For more examples of how to use the CLI with App Service, see [App Service CLI samples](../app-service/samples-cli.md):
-
- ```azurecli-interactive
- az group create --name myResourceGroup --location westus
- az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
- az webapp create --name myApp --resource-group myResourceGroup --plan myPlan
- ```
-
-3. Run the `identity assign` command to create the identity for this application:
-
- ```azurecli-interactive
- az webapp identity assign --name myApp --resource-group myResourceGroup
- ```
-
-### Using Azure PowerShell
-
+# [Azure CLI](#tab/cli)
-The following steps will walk you through creating an app and assigning it an identity using Azure PowerShell. The instructions for creating a web app and a function app are different.
+Run the `az webapp identity assign` command to create a system-assigned identity:
-#### Using Azure PowerShell for a web app
-
-1. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Login-AzAccount` to create a connection with Azure.
-
-2. Create a web application using Azure PowerShell. For more examples of how to use Azure PowerShell with App Service, see [App Service PowerShell samples](../app-service/samples-powershell.md):
-
- ```azurepowershell-interactive
- # Create a resource group.
- New-AzResourceGroup -Name $resourceGroupName -Location $location
-
- # Create an App Service plan in Free tier.
- New-AzAppServicePlan -Name $webappname -Location $location -ResourceGroupName $resourceGroupName -Tier Free
-
- # Create a web app.
- New-AzWebApp -Name $webappname -Location $location -AppServicePlan $webappname -ResourceGroupName $resourceGroupName
- ```
-
-3. Run the `Set-AzWebApp -AssignIdentity` command to create the identity for this application:
-
- ```azurepowershell-interactive
- Set-AzWebApp -AssignIdentity $true -Name $webappname -ResourceGroupName $resourceGroupName
- ```
+```azurecli-interactive
+az webapp identity assign --name myApp --resource-group myResourceGroup
+```
-#### Using Azure PowerShell for a function app
+# [Azure PowerShell](#tab/ps)
-1. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Login-AzAccount` to create a connection with Azure.
+#### For App Service
-2. Create a function app using Azure PowerShell. For more examples of how to use Azure PowerShell with Azure Functions, see the [Az.Functions reference](/powershell/module/az.functions/#functions):
+Run the `Set-AzWebApp -AssignIdentity` command to create a system-assigned identity for App Service:
- ```azurepowershell-interactive
- # Create a resource group.
- New-AzResourceGroup -Name $resourceGroupName -Location $location
+```azurepowershell-interactive
+Set-AzWebApp -AssignIdentity $true -Name <app-name> -ResourceGroupName <group-name>
+```
- # Create a storage account.
- New-AzStorageAccount -Name $storageAccountName -ResourceGroupName $resourceGroupName -SkuName $sku
+#### For Functions
- # Create a function app with a system-assigned identity.
- New-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupName -Location $location -StorageAccountName $storageAccountName -Runtime $runtime -IdentityType SystemAssigned
- ```
+Run the `Update-AzFunctionApp -IdentityType` command to create a system-assigned identity for a function app:
-You can also update an existing function app using `Update-AzFunctionApp` instead.
+```azurepowershell-interactive
+Update-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupName -IdentityType SystemAssigned
+```
-### Using an Azure Resource Manager template
+# [ARM template](#tab/arm)
An Azure Resource Manager template can be used to automate deployment of your Azure resources. To learn more about deploying to App Service and Functions, see [Automating resource deployment in App Service](../app-service/deploy-complex-application-predictably.md) and [Automating resource deployment in Azure Functions](../azure-functions/functions-infrastructure-as-code.md).
Any resource of type `Microsoft.Web/sites` can be created with an identity by in
} ```
-> [!NOTE]
-> An application can have both system-assigned and user-assigned identities at the same time. In this case, the `type` property would be `SystemAssigned,UserAssigned`
- Adding the system-assigned type tells Azure to create and manage the identity for your application.
-For example, a web app might look like the following:
+For example, a web app's template might look like the following JSON:
```json {
If you need to reference these properties in a later stage in the template, you
} ```
+--
+ ## Add a user-assigned identity Creating an app with a user-assigned identity requires that you create the identity and then add its resource identifier to your app config.
-### Using the Azure portal
+# [Azure portal](#tab/portal)
First, you'll need to create a user-assigned identity resource. 1. Create a user-assigned managed identity resource according to [these instructions](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
-2. Create an app in the portal as you normally would. Navigate to it in the portal.
+1. In the left navigation for your app's page, scroll down to the **Settings** group.
-3. If using a function app, navigate to **Platform features**. For other app types, scroll down to the **Settings** group in the left navigation.
+1. Select **Identity**.
-4. Select **Identity**.
+1. Within the **User assigned** tab, click **Add**.
-5. Within the **User assigned** tab, click **Add**.
-
-6. Search for the identity you created earlier and select it. Click **Add**.
+1. Search for the identity you created earlier and select it. Click **Add**.
![Managed identity in App Service](media/app-service-managed-service-identity/user-assigned-managed-identity-in-azure-portal.png)
-### Using Azure PowerShell
+# [Azure CLI](#tab/cli)
+1. Create a user-assigned identity.
-The following steps will walk you through creating an app and assigning it an identity using Azure PowerShell.
+    ```azurecli-interactive
+ az identity create --resource-group <group-name> --name <identity-name>
+ ```
-> [!NOTE]
-> The current version of the Azure PowerShell commandlets for Azure App Service do not support user-assigned identities. The below instructions are for Azure Functions.
+1. Run the `az webapp identity assign` command to assign the identity to the app.
-1. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/), and then run `Login-AzAccount` to create a connection with Azure.
+    ```azurecli-interactive
+ az webapp identity assign --resource-group <group-name> --name <app-name> --identities <identity-name>
+ ```
-2. Create a function app using Azure PowerShell. For more examples of how to use Azure PowerShell with Azure Functions, see the [Az.Functions reference](/powershell/module/az.functions/#functions). The below script also makes use of `New-AzUserAssignedIdentity` which must be installed separately as per [Create, list or delete a user-assigned managed identity using Azure PowerShell](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md).
+# [Azure PowerShell](#tab/ps)
- ```azurepowershell-interactive
- # Create a resource group.
- New-AzResourceGroup -Name $resourceGroupName -Location $location
+#### For App Service
+
+Adding a user-assigned identity to an App Service app by using Azure PowerShell is currently not supported.
- # Create a storage account.
- New-AzStorageAccount -Name $storageAccountName -ResourceGroupName $resourceGroupName -SkuName $sku
+#### For Functions
- # Create a user-assigned identity. This requires installation of the "Az.ManagedServiceIdentity" module.
- $userAssignedIdentity = New-AzUserAssignedIdentity -Name $userAssignedIdentityName -ResourceGroupName $resourceGroupName
+1. Create a user-assigned identity.
- # Create a function app with a user-assigned identity.
- New-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupName -Location $location -StorageAccountName $storageAccountName -Runtime $runtime -IdentityType UserAssigned -IdentityId $userAssignedIdentity.Id
+ ```azurepowershell-interactive
+ Install-Module -Name Az.ManagedServiceIdentity -AllowPrerelease
+    $userAssignedIdentity = New-AzUserAssignedIdentity -Name <identity-name> -ResourceGroupName <group-name>
```
-You can also update an existing function app using `Update-AzFunctionApp` instead.
+1. Run the `Update-AzFunctionApp -IdentityType UserAssigned -IdentityId` command to assign the identity in Functions:
+
+ ```azurepowershell-interactive
+ Update-AzFunctionApp -Name <app-name> -ResourceGroupName <group-name> -IdentityType UserAssigned -IdentityId $userAssignedIdentity.Id
+ ```
-### Using an Azure Resource Manager template
+# [ARM template](#tab/arm)
An Azure Resource Manager template can be used to automate deployment of your Azure resources. To learn more about deploying to App Service and Functions, see [Automating resource deployment in App Service](../app-service/deploy-complex-application-predictably.md) and [Automating resource deployment in Azure Functions](../azure-functions/functions-infrastructure-as-code.md).
Any resource of type `Microsoft.Web/sites` can be created with an identity by in
Adding the user-assigned type tells Azure to use the user-assigned identity specified for your application.
-For example, a web app might look like the following:
+For example, a web app's template might look like the following JSON:
```json {
When the site is created, it has the following additional properties:
The principalId is a unique identifier for the identity that's used for Azure AD administration. The clientId is a unique identifier for the application's new identity that's used for specifying which identity to use during runtime calls.
-## Obtain tokens for Azure resources
+--
-An app can use its managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens represent the application accessing the resource, and not any specific user of the application.
+## Configure target resource
-You may need to configure the target resource to allow access from your application. For example, if you request a token to access Key Vault, you need to make sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+You may need to configure the target resource to allow access from your app or function. For example, if you [request a token](#connect-to-azure-services-in-app-code) to access Key Vault, you must also add an access policy that includes the managed identity of your app or function. Otherwise, your calls to Key Vault will be rejected, even if you use a valid token. The same is true for Azure SQL Database. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
> [!IMPORTANT] > The back-end services for managed identities maintain a cache per resource URI for around 24 hours. If you update the access policy of a particular target resource and immediately retrieve a token for that resource, you may continue to get a cached token with outdated permissions until that token expires. There's currently no way to force a token refresh.
-There is a simple REST protocol for obtaining a token in App Service and Azure Functions. This can be used for all applications and languages. For .NET and Java, the Azure SDK provides an abstraction over this protocol and facilitates a local development experience.
+## Connect to Azure services in app code
-### Using the REST protocol
+With its managed identity, an app can obtain tokens for Azure resources that are protected by Azure Active Directory, such as Azure SQL Database, Azure Key Vault, and Azure Storage. These tokens represent the application accessing the resource, and not any specific user of the application.
-> [!NOTE]
-> An older version of this protocol, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. MSI_ENDPOINT can be used as an alias for IDENTITY_ENDPOINT, and MSI_SECRET can be used as an alias for IDENTITY_HEADER.
-
-An app with a managed identity has two environment variables defined:
--- IDENTITY_ENDPOINT - the URL to the local token service.-- IDENTITY_HEADER - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.-
-The **IDENTITY_ENDPOINT** is a local URL from which your app can request tokens. To get a token for a resource, make an HTTP GET request to this endpoint, including the following parameters:
+App Service and Azure Functions provide an internally accessible [REST endpoint](#rest-endpoint-reference) for token retrieval. The REST endpoint can be accessed from within the app with a standard HTTP GET, which can be implemented with a generic HTTP client in every language. For .NET, JavaScript, Java, and Python, the Azure Identity client library provides an abstraction over this REST endpoint and simplifies the development experience. Connecting to other Azure services is as simple as adding a credential object to the service-specific client.
-> | Parameter name | In | Description |
-> |-|--|--|
-> | resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
-> | api-version | Query | The version of the token API to be used. Please use "2019-08-01" or later. |
-> | X-IDENTITY-HEADER | Header | The value of the IDENTITY_HEADER environment variable. This header is used to help mitigate server-side request forgery (SSRF) attacks. |
-> | client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
-> | principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Cannot be used on a request that includes client_id, mi_res_id, or object_id. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
-> | mi_res_id | Query | (Optional) The Azure resource ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `client_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+# [HTTP GET](#tab/http)
-> [!IMPORTANT]
-> If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties. Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
-
-A successful 200 OK response includes a JSON body with the following properties:
-
-> | Property name | Description |
-> ||-|
-> | access_token | The requested access token. The calling web service can use this token to authenticate to the receiving web service. |
-> | client_id | The client ID of the identity that was used. |
-> | expires_on | The timespan when the access token expires. The date is represented as the number of seconds from "1970-01-01T0:0:0Z UTC" (corresponds to the token's `exp` claim). |
-> | not_before | The timespan when the access token takes effect, and can be accepted. The date is represented as the number of seconds from "1970-01-01T0:0:0Z UTC" (corresponds to the token's `nbf` claim). |
-> | resource | The resource the access token was requested for, which matches the `resource` query string parameter of the request. |
-> | token_type | Indicates the token type value. The only type that Azure AD supports is Bearer. For more information about bearer tokens, see [The OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). |
-
-This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response).
-
-### REST protocol examples
-
-An example request might look like the following:
+A raw HTTP GET request looks like the following example:
```http GET /MSI/token?resource=https://vault.azure.net&api-version=2019-08-01 HTTP/1.1
Content-Type: application/json
} ```
-### Code examples
+This response is the same as the [response for the Azure AD service-to-service access token request](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#service-to-service-access-token-response). To access Key Vault, you then pass the value of `access_token` as a bearer token in your client connection to the vault.
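+As a rough illustration, the same token request can be issued from app code with any HTTP client. The following C# sketch (adapted for this article; `GetTokenAsync` is an illustrative name) reads the two environment variables and performs the GET:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+public static class ManagedIdentityToken
+{
+    private static readonly HttpClient _client = new HttpClient();
+
+    // Requests a token for the given resource URI from the local token service.
+    public static async Task<HttpResponseMessage> GetTokenAsync(string resource)
+    {
+        var url = string.Format("{0}?resource={1}&api-version=2019-08-01",
+            Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT"), resource);
+        var request = new HttpRequestMessage(HttpMethod.Get, url);
+        // The header value is rotated by the platform to help mitigate SSRF attacks.
+        request.Headers.Add("X-IDENTITY-HEADER", Environment.GetEnvironmentVariable("IDENTITY_HEADER"));
+        return await _client.SendAsync(request);
+    }
+}
+```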
# [.NET](#tab/dotnet)
-> [!TIP]
-> For .NET languages, you can also use [Microsoft.Azure.Services.AppAuthentication](#asal) instead of crafting this request yourself.
-
-```csharp
-private readonly HttpClient _client;
-// ...
-public async Task<HttpResponseMessage> GetToken(string resource) {
- var request = new HttpRequestMessage(HttpMethod.Get,
- String.Format("{0}/?resource={1}&api-version=2019-08-01", Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT"), resource));
- request.Headers.Add("X-IDENTITY-HEADER", Environment.GetEnvironmentVariable("IDENTITY_HEADER"));
- return await _client.SendAsync(request);
-}
-```
+> [!NOTE]
+> When connecting to Azure SQL data sources with [Entity Framework Core](/ef/core/), consider [using Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication), which provides special connection strings for managed identity connectivity. For an example, see [Tutorial: Secure Azure SQL Database connection from App Service using a managed identity](tutorial-connect-msi-sql-database.md).
+
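+For illustration, a minimal sketch of such a managed-identity connection string with `Microsoft.Data.SqlClient` (server and database names are placeholders; requires version 2.1 or later):
+
+```csharp
+using Microsoft.Data.SqlClient;
+
+// "Active Directory Managed Identity" authentication uses the app's
+// system-assigned managed identity; no password or secret is stored.
+// <server-name> and <database-name> are placeholders.
+var connectionString =
+    "Server=tcp:<server-name>.database.windows.net,1433;"
+    + "Database=<database-name>;"
+    + "Authentication=Active Directory Managed Identity;";
+
+using var connection = new SqlConnection(connectionString);
+connection.Open();
+```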
+For .NET apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme?). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/dotnet/api/overview/azure/identity-readme#getting-started)
+- [Access Azure service with a system-assigned identity](/dotnet/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/dotnet/api/overview/azure/identity-readme#specifying-a-user-assigned-managed-identity-with-the-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). It's useful for the majority of the scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
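+As a minimal sketch (assuming the `Azure.Identity` and `Azure.Security.KeyVault.Secrets` NuGet packages, with placeholder vault and secret names), reading a Key Vault secret with `DefaultAzureCredential` might look like this:
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+// In App Service or Functions, DefaultAzureCredential resolves to the managed
+// identity; on a local machine it falls back to developer credentials.
+var credential = new DefaultAzureCredential();
+var client = new SecretClient(new Uri("https://<vault-name>.vault.azure.net"), credential);
+
+KeyVaultSecret secret = client.GetSecret("<secret-name>");
+Console.WriteLine(secret.Value);
+```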
# [JavaScript](#tab/javascript)
-```javascript
-const rp = require('request-promise');
-const getToken = function(resource, cb) {
- let options = {
- uri: `${process.env["IDENTITY_ENDPOINT"]}/?resource=${resource}&api-version=2019-08-01`,
- headers: {
- 'X-IDENTITY-HEADER': process.env["IDENTITY_HEADER"]
- }
- };
- rp(options)
- .then(cb);
-}
-```
+For Node.js apps and JavaScript functions, the simplest way to work with a managed identity is through the [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme?). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/javascript/api/overview/azure/identity-readme#install-the-package)
+- [Access Azure service with a system-assigned identity](/javascript/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/javascript/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/javascript/api/overview/azure/identity-readme#defaultazurecredential). It's useful for the majority of the scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
+
+For more code examples of the Azure Identity client library for JavaScript, see [Azure Identity examples](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/identity_2.0.1/sdk/identity/identity/samples/AzureIdentityExamples.md).
# [Python](#tab/python)
-```python
-import os
-import requests
+For Python apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Python](/python/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
-identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
-identity_header = os.environ["IDENTITY_HEADER"]
+- [Add Azure Identity client library to your project](/python/api/overview/azure/identity-readme#getting-started)
+- [Access Azure service with a system-assigned identity](/python/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/python/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
-def get_bearer_token(resource_uri):
- token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01"
- head_msi = {'X-IDENTITY-HEADER':identity_header}
+The linked examples use [`DefaultAzureCredential`](/python/api/overview/azure/identity-readme#defaultazurecredential). It's useful for the majority of the scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
- resp = requests.get(token_auth_uri, headers=head_msi)
- access_token = resp.json()['access_token']
+# [Java](#tab/java)
- return access_token
-```
+For Java apps and functions, the simplest way to work with a managed identity is through the [Azure Identity client library for Java](/java/api/overview/azure/identity-readme). See the respective documentation headings of the client library for information:
+
+- [Add Azure Identity client library to your project](/java/api/overview/azure/identity-readme#include-the-package)
+- [Access Azure service with a system-assigned identity](/java/api/overview/azure/identity-readme#authenticating-with-defaultazurecredential)
+- [Access Azure service with a user-assigned identity](/java/api/overview/azure/identity-readme#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+
+The linked examples use [`DefaultAzureCredential`](/azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential). It's useful for the majority of the scenarios because the same pattern works in Azure (with managed identities) and on your local machine (without managed identities).
+
+For more code examples of the Azure Identity client library for Java, see [Azure Identity Examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples).
# [PowerShell](#tab/powershell)
+Use the following script to retrieve a token from the local endpoint by specifying a resource URI of an Azure service:
+ ```powershell $resourceURI = "https://<AAD-resource-URI-for-resource-to-obtain-token>" $tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=$resourceURI&api-version=2019-08-01"
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$
$accessToken = $tokenResponse.access_token ``` -
+--
-### <a name="asal"></a>Using the Microsoft.Azure.Services.AppAuthentication library for .NET
+For more information on the REST endpoint, see [REST endpoint reference](#rest-endpoint-reference).
+## <a name="remove"></a>Remove an identity
-For .NET applications and functions, the simplest way to work with a managed identity is through the Microsoft.Azure.Services.AppAuthentication package. This library will also allow you to test your code locally on your development machine, using your user account from Visual Studio, the [Azure CLI](/cli/azure), or Active Directory Integrated Authentication. When hosted in the cloud, it will default to using a system-assigned identity, but you can customize this behavior using a connection string environment variable which references the client ID of a user-assigned identity. For more on development options with this library, see the [Microsoft.Azure.Services.AppAuthentication reference]. This section shows you how to get started with the library in your code.
+When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned identities are also automatically removed from Azure Active Directory when you delete the app resource itself.
-1. Add references to the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) and any other necessary NuGet packages to your application. The below example also uses [Microsoft.Azure.KeyVault](https://www.nuget.org/packages/Microsoft.Azure.KeyVault).
+# [Azure portal](#tab/portal)
-2. Add the following code to your application, modifying to target the correct resource. This example shows two ways to work with Azure Key Vault:
+1. In the left navigation of your app's page, scroll down to the **Settings** group.
- ```csharp
- using Microsoft.Azure.Services.AppAuthentication;
- using Microsoft.Azure.KeyVault;
- // ...
- var azureServiceTokenProvider = new AzureServiceTokenProvider();
- string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://vault.azure.net");
- // OR
- var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
- ```
+1. Select **Identity**. Then follow the steps based on the identity type:
-If you want to use a user-assigned managed identity, you can set the `AzureServicesAuthConnectionString` application setting to `RunAs=App;AppId=<clientId-guid>`. Replace `<clientId-guid>` with the client ID of the identity you want to use. You can define multiple such connection strings by using custom application settings and passing their values into the AzureServiceTokenProvider constructor.
+ - **System-assigned identity**: Within the **System assigned** tab, switch **Status** to **Off**. Click **Save**.
+ - **User-assigned identity**: Click the **User assigned** tab, select the checkbox for the identity, and click **Remove**. Click **Yes** to confirm.
-```csharp
- var identityConnectionString1 = Environment.GetEnvironmentVariable("UA1_ConnectionString");
- var azureServiceTokenProvider1 = new AzureServiceTokenProvider(identityConnectionString1);
-
- var identityConnectionString2 = Environment.GetEnvironmentVariable("UA2_ConnectionString");
- var azureServiceTokenProvider2 = new AzureServiceTokenProvider(identityConnectionString2);
-```
+# [Azure CLI](#tab/cli)
-To learn more about configuring AzureServiceTokenProvider and the operations it exposes, see the [Microsoft.Azure.Services.AppAuthentication reference] and the [App Service and KeyVault with MSI .NET sample](https://github.com/Azure-Samples/app-service-msi-keyvault-dotnet).
+To remove the system-assigned identity:
-### Using the Azure SDK for Java
+```azurecli-interactive
+az webapp identity remove --name <app-name> --resource-group <group-name>
+```
-For Java applications and functions, the simplest way to work with a managed identity is through the [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-java). This section shows you how to get started with the library in your code.
+To remove one or more user-assigned identities:
-1. Add a reference to the [Azure SDK library](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager). For Maven projects, you might add this snippet to the `dependencies` section of the project's POM file:
+```azurecli-interactive
+az webapp identity remove --name <app-name> --resource-group <group-name> --identities <identity-name1>,<identity-name2>,...
+```
- ```xml
- <dependency>
- <groupId>com.azure.resourcemanager</groupId>
- <artifactId>azure-resourcemanager</artifactId>
- <version>2.10.0</version>
- </dependency>
- ```
+You can also remove the system-assigned identity by specifying `[system]` in `--identities`.
-2. Use the `ManagedIdentityCredential` object for authentication. This example shows how this mechanism may be used for working with Azure Key Vault:
+# [Azure PowerShell](#tab/ps)
- ```java
- import com.azure.core.management.AzureEnvironment;
- import com.azure.core.management.profile.AzureProfile;
- import com.azure.identity.ManagedIdentityCredential;
- import com.azure.identity.ManagedIdentityCredentialBuilder;
- import com.azure.resourcemanager.AzureResourceManager;
- import com.azure.resourcemanager.keyvault.models.Vault;
- //...
- AzureProfile azureProfile = new AzureProfile(AzureEnvironment.AZURE);
- ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredentialBuilder().build();
- AzureResourceManager azure = AzureResourceManager.authenticate(managedIdentityCredential, azureProfile).withSubscription("subscription");
+#### For App Service
- Vault vault = azure.vaults().getByResourceGroup("resourceGroup", "keyVaultName");
+Run the `Set-AzWebApp -AssignIdentity` command to remove a system-assigned identity for App Service:
- ```
-For more information on how to use the Azure SDK for Java, please refer to this [quickstart guide](https://aka.ms/azsdk/java/mgmt). To learn more about Azure Identiy and authentication and Managed Identity in general, please visit [this guide](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples#authenticating-a-user-assigned-managed-identity-with-defaultazurecredential)
+```azurepowershell-interactive
+Set-AzWebApp -AssignIdentity $false -Name <app-name> -ResourceGroupName <group-name>
+```
-## <a name="remove"></a>Remove an identity
+#### For Functions
-A system-assigned identity can be removed by disabling the feature using the portal, PowerShell, or CLI in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to "None".
+To remove all identities in Azure PowerShell (Azure Functions only):
-Removing a system-assigned identity in this way will also delete it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the app resource is deleted.
+```azurepowershell-interactive
+# Update an existing function app to have IdentityType "None".
+Update-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupName -IdentityType None
+```
-To remove all identities in an [ARM template](#using-an-azure-resource-manager-template):
+# [ARM template](#tab/arm)
+
+To remove all identities in an ARM template:
```json "identity": {
To remove all identities in an [ARM template](#using-an-azure-resource-manager-t
} ```
-To remove all identities in Azure PowerShell (Azure Functions only):
-
-```azurepowershell-interactive
-# Update an existing function app to have IdentityType "None".
-Update-AzFunctionApp -Name $functionAppName -ResourceGroupName $resourceGroupName -IdentityType None
-```
+--
> [!NOTE] > There is also an application setting that can be set, WEBSITE_DISABLE_MSI, which just disables the local token service. However, it leaves the identity in place, and tooling will still show the managed identity as "on" or "enabled." As a result, use of this setting is not recommended.
+## REST endpoint reference
+
+> [!NOTE]
+> An older version of this endpoint, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. `MSI_ENDPOINT` can be used as an alias for `IDENTITY_ENDPOINT`, and `MSI_SECRET` can be used as an alias for `IDENTITY_HEADER`. This version of the protocol is currently required for Linux Consumption hosting plans.
+
+An app with a managed identity makes this endpoint available by defining two environment variables:
+
+- IDENTITY_ENDPOINT - the URL to the local token service.
+- IDENTITY_HEADER - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.
+
+The **IDENTITY_ENDPOINT** is a local URL from which your app can request tokens. To get a token for a resource, make an HTTP GET request to this endpoint, including the following parameters:
+
+> | Parameter name | In | Description |
+> |-|--|--|
+> | resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
+> | api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
+> | X-IDENTITY-HEADER | Header | The value of the IDENTITY_HEADER environment variable. This header is used to help mitigate server-side request forgery (SSRF) attacks. |
+> | client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+> | principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Cannot be used on a request that includes `client_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+> | mi_res_id | Query | (Optional) The Azure resource ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `client_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
+
+> [!IMPORTANT]
+> If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties. Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
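+For example, a request that targets a specific user-assigned identity differs only in the query string. A hedged C# sketch of building that URL (the client ID is a placeholder; omit `client_id` to fall back to the system-assigned identity):
+
+```csharp
+// Builds the token request URL for a specific user-assigned identity (sketch only).
+var url = Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")
+    + "?resource=https://vault.azure.net"
+    + "&api-version=2019-08-01"
+    + "&client_id=<user-assigned-client-id>"; // placeholder client ID
+```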
+ ## Next steps -- [Access SQL Database securely using a managed identity](tutorial-connect-msi-sql-database.md)
+- [Tutorial: Connect to SQL Database from App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md)
- [Access Azure Storage securely using a managed identity](scenario-secure-app-access-storage.md) - [Call Microsoft Graph securely using a managed identity](scenario-secure-app-access-microsoft-graph-as-app.md) - [Connect securely to services with Key Vault secrets](tutorial-connect-msi-key-vault.md)-
-[Microsoft.Azure.Services.AppAuthentication reference]: /dotnet/api/overview/azure/service-to-service-authentication
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-python.md
Having issues? [Let us know](https://aka.ms/FlaskCLIQuickstartHelp).
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Python (Django) web app with PostgreSQL](/azure/app-service/tutorial-python-postgresql-app.md)
+> [Tutorial: Python (Django) web app with PostgreSQL](/azure/app-service/tutorial-python-postgresql-app)
> [!div class="nextstepaction"]
-> [Configure Python app](/azure/app-service/configure-language-python.md)
+> [Configure Python app](/azure/app-service/configure-language-python)
> [!div class="nextstepaction"]
-> [Add user sign-in to a Python web app](/azure/active-directory/develop/quickstart-v2-python-webapp.md)
+> [Add user sign-in to a Python web app](/azure/active-directory/develop/quickstart-v2-python-webapp)
> [!div class="nextstepaction"]
-> [Tutorial: Run Python app in custom container](/azure/app-service/tutorial-custom-container.md)
+> [Tutorial: Run Python app in custom container](/azure/app-service/tutorial-custom-container)
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-connect-msi-sql-database.md
description: Learn how to make database connectivity more secure by using a mana
ms.devlang: csharp Previously updated : 04/27/2021 Last updated : 01/27/2022
-# Tutorial: Secure Azure SQL Database connection from App Service using a managed identity
+# Tutorial: Connect to SQL Database from App Service without secrets using a managed identity
-[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you will add managed identity to the sample web app you built in one of the following tutorials:
+[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you'll add a managed identity to the sample web app you built in one of the following tutorials:
- [Tutorial: Build an ASP.NET app in Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) - [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)
When you're finished, your sample app will connect to SQL Database securely with
> [!NOTE] > The steps covered in this tutorial support the following versions: >
-> - .NET Framework 4.7.2 and above
-> - .NET Core 2.2 and above
+> - .NET Framework 4.8 and above
+> - .NET 6.0 and above
> What you will learn:
What you will learn:
## Prerequisites
-This article continues where you left off in [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) or [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md). If you haven't already, follow one of the two tutorials first. Alternatively, you can adapt the steps for your own .NET app with SQL Database.
+This article continues where you left off in either one of the following tutorials:
+
+- [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)
+- [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)
+
+If you haven't already, follow one of the two tutorials first. Alternatively, you can adapt the steps for your own .NET app with SQL Database.
To debug your app using SQL Database as the back end, make sure that you've allowed client connection from your computer. If not, add the client IP by following the steps at [Manage server-level IP firewall rules using the Azure portal](../azure-sql/database/firewall-configure.md#use-the-azure-portal-to-manage-server-level-ip-firewall-rules).
Prepare your environment for the Azure CLI.
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-## Grant database access to Azure AD user
+## 1. Grant database access to Azure AD user
-First enable Azure AD authentication to SQL Database by assigning an Azure AD user as the Active Directory admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
+First, enable Azure Active Directory authentication to SQL Database by assigning an Azure AD user as the admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
First enable Azure AD authentication to SQL Database by assigning an Azure AD us
For more information on adding an Active Directory admin, see [Provision an Azure Active Directory administrator for your server](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance)
-## Set up Visual Studio
+## 2. Set up your dev environment
-# [Windows client](#tab/windowsclient)
+# [Visual Studio Windows](#tab/windowsclient)
-1. Visual Studio for Windows is integrated with Azure AD authentication. To enable development and debugging in Visual Studio, add your Azure AD user in Visual Studio by selecting **File** > **Account Settings** from the menu, and click **Add an account**.
+1. Visual Studio for Windows is integrated with Azure AD authentication. To enable development and debugging in Visual Studio, add your Azure AD user in Visual Studio by selecting **File** > **Account Settings** from the menu, and select **Sign in** or **Add**.
-1. To set the Azure AD user for Azure service authentication, select **Tools** > **Options** from the menu, then select **Azure Service Authentication** > **Account Selection**. Select the Azure AD user you added and click **OK**.
+1. To set the Azure AD user for Azure service authentication, select **Tools** > **Options** from the menu, then select **Azure Service Authentication** > **Account Selection**. Select the Azure AD user you added and select **OK**.
-# [macOS client](#tab/macosclient)
+# [Visual Studio for macOS](#tab/macosclient)
-1. Visual Studio for Mac is not integrated with Azure AD authentication. However, the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) library that you will use later can use tokens from Azure CLI. To enable development and debugging in Visual Studio, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+1. Visual Studio for Mac is *not* integrated with Azure AD authentication. However, the Azure Identity client library that you'll use later can use tokens from Azure CLI. To enable development and debugging in Visual Studio, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
1. Sign in to Azure CLI with the following command using your Azure AD user:
For more information on adding an Active Directory admin, see [Provision an Azur
az login --allow-no-subscriptions ```
+# [Visual Studio Code](#tab/vscode)
-You're now ready to develop and debug your app with the SQL Database as the back end, using Azure AD authentication.
+1. Visual Studio Code is integrated with Azure AD authentication through the Azure extension. Install the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack" target="_blank">Azure Tools</a> extension in Visual Studio Code.
-## Modify your project
+1. In Visual Studio Code, in the [Activity Bar](https://code.visualstudio.com/docs/getstarted/userinterface), select the **Azure** logo.
-The steps you follow for your project depends on whether it's an ASP.NET project or an ASP.NET Core project.
+1. In the **App Service** explorer, select **Sign in to Azure...** and follow the instructions.
-# [ASP.NET](#tab/dotnet)
+# [Azure CLI](#tab/cli)
-1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication):
+1. The Azure Identity client library that you'll use later can use tokens from Azure CLI. To enable command-line based development, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
- ```powershell
- Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.4.0
+1. Sign in to Azure with the following command using your Azure AD user:
+
+ ```azurecli
+ az login --allow-no-subscriptions
```
-1. In *Web.config*, working from the top of the file and make the following changes:
+# [Azure PowerShell](#tab/ps)
- - In `<configSections>`, add the following section declaration in it:
-
- ```xml
- <section name="SqlAuthenticationProviders" type="System.Data.SqlClient.SqlAuthenticationProviderConfigurationSection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
- ```
-
- - below the closing `</configSections>` tag, add the following XML code for `<SqlAuthenticationProviders>`.
-
- ```xml
- <SqlAuthenticationProviders>
- <providers>
- <add name="Active Directory Interactive" type="Microsoft.Azure.Services.AppAuthentication.SqlAppAuthenticationProvider, Microsoft.Azure.Services.AppAuthentication" />
- </providers>
- </SqlAuthenticationProviders>
- ```
-
- - Find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;UID=AnyString;Authentication=Active Directory Interactive"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name.
-
- > [!NOTE]
- > The SqlAuthenticationProvider you just registered is based on top of the AppAuthentication library you installed earlier. By default, it uses a system-assigned identity. To leverage a user-assigned identity, you will need to provide an additional configuration. Please see [connection string support](/dotnet/api/overview/azure/service-to-service-authentication#connection-string-support) for the AppAuthentication library.
+1. The Azure Identity client library that you'll use later can use tokens from Azure PowerShell. To enable command-line based development, [install Azure PowerShell](/powershell/azure/install-az-ps) on your local machine.
- That's every thing you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
+1. Sign in to Azure with the following cmdlet using your Azure AD user:
-1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
+ ```powershell-interactive
+ Connect-AzAccount
+ ```
-# [ASP.NET Core](#tab/dotnetcore)
+--
+
+For more information about setting up your dev environment for Azure Active Directory authentication, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/Identity-readme).
+
+You're now ready to develop and debug your app with the SQL Database as the back end, using Azure AD authentication.
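To sanity-check the setup, a minimal console sketch like the following (hypothetical, not part of the tutorial sample) confirms that `DefaultAzureCredential` can pick up your local sign-in and return a SQL Database token:

```csharp
// Hypothetical console sketch: verifies that DefaultAzureCredential can find a
// local identity (Visual Studio, VS Code, Azure CLI, or Azure PowerShell) and
// obtain a token for the SQL Database resource. Requires the Azure.Identity package.
using Azure.Core;
using Azure.Identity;

var credential = new DefaultAzureCredential();
AccessToken token = credential.GetToken(
    new TokenRequestContext(new[] { "https://database.windows.net/.default" }));
Console.WriteLine($"Token acquired; expires {token.ExpiresOn:u}");
```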
+
+## 3. Modify your project
> [!NOTE] > **Microsoft.Azure.Services.AppAuthentication** is no longer recommended for use with the new Azure SDK. > It's replaced with the new **Azure Identity client library**, available for .NET, Java, TypeScript, and Python, and should be used for all new development. > Information about how to migrate to `Azure Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration).
-1. In Visual Studio, open the Package Manager Console and add the NuGet package [Azure.Identity](https://www.nuget.org/packages/Azure.Identity):
+The steps you follow for your project depend on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core).
- ```powershell
- Install-Package Microsoft.Data.SqlClient -Version 2.1.2
- Install-Package Azure.Identity -Version 1.4.0
- ```
+# [Entity Framework](#tab/ef)
-1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string isn't used at all because the local development environment uses a Sqlite database file, and the Azure production environment uses a connection string from App Service. With Active Directory authentication, you want both environments to use the same connection string. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) and update Entity Framework:
- ```json
- "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Device Code Flow; Database=<database-name>;"
+ ```powershell
+ Install-Package Azure.Identity -Version 1.5.0
+ Update-Package EntityFramework
```
- > [!NOTE]
- > We use the `Active Directory Device Code Flow` authentication type because this is the closest we can get to a custom option. Ideally, a `Custom Authentication` type would be available. Without a better term to use at this time, we're using `Device Code Flow`.
- >
-
-1. Next, you need to create a custom authentication provider class to acquire and supply the Entity Framework database context with the access token for the SQL Database. In the *Data\\* directory, add a new class `CustomAzureSQLAuthProvider.cs` with the following code inside:
+1. In your DbContext object (in *Models/MyDbContext.cs*), add the following code to the default constructor.
```csharp
- public class CustomAzureSQLAuthProvider : SqlAuthenticationProvider
- {
- private static readonly string[] _azureSqlScopes = new[]
- {
- "https://database.windows.net//.default"
- };
-
- private static readonly TokenCredential _credential = new DefaultAzureCredential();
+ var conn = (System.Data.SqlClient.SqlConnection)Database.Connection;
+ var credential = new Azure.Identity.DefaultAzureCredential();
+ var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" }));
+ conn.AccessToken = token.Token;
+ ```
- public override async Task<SqlAuthenticationToken> AcquireTokenAsync(SqlAuthenticationParameters parameters)
- {
- var tokenRequestContext = new TokenRequestContext(_azureSqlScopes);
- var tokenResult = await _credential.GetTokenAsync(tokenRequestContext, default);
- return new SqlAuthenticationToken(tokenResult.Token, tokenResult.ExpiresOn);
- }
+ This code uses [Azure.Identity.DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) to get a usable token for SQL Database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, by default it's already very versatile. When running in App Service, it uses the app's system-assigned managed identity. When running locally, it can get a token using the signed-in identity of Visual Studio, Visual Studio Code, Azure CLI, or Azure PowerShell (see the sketch after these steps).
+
+1. In *Web.config*, find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name. This connection string is used by the default constructor in *Models/MyDbContext.cs*.
- public override bool IsSupported(SqlAuthenticationMethod authenticationMethod) => authenticationMethod.Equals(SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow);
- }
+ That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
+
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
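As a hedged sketch of where the constructor snippet above lands, the context class might look as follows. The class and connection-string names come from the tutorial sample; entity sets are omitted.

```csharp
// Hedged sketch of Models/MyDbContext.cs after the change: the Azure AD token
// is attached to the underlying SqlConnection before Entity Framework 6 opens it.
public class MyDbContext : System.Data.Entity.DbContext
{
    public MyDbContext() : base("name=MyDbConnection")  // resolved from Web.config
    {
        var conn = (System.Data.SqlClient.SqlConnection)Database.Connection;
        var credential = new Azure.Identity.DefaultAzureCredential();
        var token = credential.GetToken(
            new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" }));
        conn.AccessToken = token.Token;  // no credentials left in the connection string
    }

    // DbSet<TEntity> properties from the sample project go here.
}
```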
+
+# [Entity Framework Core](#tab/efcore)
+
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
+
+ ```powershell
+ Install-Package Microsoft.Data.SqlClient -Version 4.0.1
```
-1. In *Startup.cs*, update the `ConfigureServices()` method with the following code:
+1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, the connection string no longer needs any secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
- ```csharp
- services.AddControllersWithViews();
- services.AddDbContext<MyDatabaseContext>(options =>
- {
- SqlAuthenticationProvider.SetProvider(
- SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow,
- new CustomAzureSQLAuthProvider());
- var sqlConnection = new SqlConnection(Configuration.GetConnectionString("MyDbConnection"));
- options.UseSqlServer(sqlConnection);
- });
+ ```json
+ "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
``` > [!NOTE]
- > This demonstration code is synchronous for clarity and simplicity.
-
- The preceding code uses the `Azure.Identity` library so that it can authenticate and retrieve an access token for the database, no matter where the code is running. If you're running on your local machine, `DefaultAzureCredential()` loops through a number of options to find a valid account that is logged in. You can read more about the [DefaultAzureCredential class](/dotnet/api/azure.identity.defaultazurecredential).
+ > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
+ >
- That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+ That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. --
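For the Entity Framework Core path, a standalone sketch (placeholders as in the connection string above) shows that no token code is needed at all; `Microsoft.Data.SqlClient` performs the acquisition itself:

```csharp
// Hedged sketch: with Authentication=Active Directory Default, the driver
// acquires the token itself (managed identity in App Service; Visual Studio,
// VS Code, or Azure CLI sign-in locally). Requires Microsoft.Data.SqlClient.
using Microsoft.Data.SqlClient;

var connectionString =
    "Server=tcp:<server-name>.database.windows.net;" +
    "Authentication=Active Directory Default; Database=<database-name>;";

using var conn = new SqlConnection(connectionString);
conn.Open();  // no password and no credential code in the app
Console.WriteLine($"Connected to {conn.Database}");
```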
-## Use managed identity connectivity
+## 4. Use managed identity connectivity
Next, you configure your App Service app to connect to SQL Database with a system-assigned managed identity.
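The tutorial runs this grant from a sqlcmd session signed in as the Azure AD admin. As a hedged illustration only, the same kind of T-SQL could be executed from code; the exact statements below are typical contained-user grants and are assumptions, with `<app-name>`, `<server-name>`, and `<db-name>` as placeholders:

```csharp
// Illustrative sketch (assumed statements, mirroring the sqlcmd session):
// create a contained user for the app's managed identity and grant data access.
using Microsoft.Data.SqlClient;

using var conn = new SqlConnection(
    "Server=tcp:<server-name>.database.windows.net;" +
    "Authentication=Active Directory Default; Database=<db-name>;");
conn.Open();

const string grantSql = @"
    CREATE USER [<app-name>] FROM EXTERNAL PROVIDER;
    ALTER ROLE db_datareader ADD MEMBER [<app-name>];
    ALTER ROLE db_datawriter ADD MEMBER [<app-name>];
    ALTER ROLE db_ddladmin ADD MEMBER [<app-name>];";

using var cmd = new SqlCommand(grantSql, conn);
cmd.ExecuteNonQuery();  // the contained user maps to the app's identity
```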
Here's an example of the output:
1. Type `EXIT` to return to the Cloud Shell prompt. > [!NOTE]
- > The back-end services of managed identities also [maintains a token cache](overview-managed-identity.md#obtain-tokens-for-azure-resources) that updates the token for a target resource only when it expires. If you make a mistake configuring your SQL Database permissions and try to modify the permissions *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires.
+ > The back-end services of managed identities also [maintain a token cache](overview-managed-identity.md#configure-target-resource) that updates the token for a target resource only when it expires. If you make a mistake configuring your SQL Database permissions and try to modify the permissions *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires.
> [!NOTE] > Azure Active Directory and managed identities are not supported for on-premises SQL Server.
Remember that the same changes you made in *Web.config* or *appsettings.json* wo
az webapp config connection-string delete --resource-group myResourceGroup --name <app-name> --setting-names MyDbConnection ```
-## Publish your changes
+## 5. Publish your changes
All that's left now is to publish your changes to Azure.
All that's left now is to publish your changes to Azure.
![Publish from Solution Explorer](./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png)
-1. In the publish page, click **Publish**.
+1. In the publish page, select **Publish**.
> [!IMPORTANT] > Ensure that your app service name doesn't match any existing [App Registrations](../active-directory/manage-apps/add-application-portal.md), as a match will lead to Principal ID conflicts.
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-dotnetcore-sqldb-app.md
description: Learn how to get a .NET Core app working in Azure App Service, with
ms.devlang: csharp Previously updated : 10/06/2021 Last updated : 01/27/2022 zone_pivot_groups: app-service-platform-windows-linux
In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a SQL Database in Azure
-> * Connect an ASP.NET Core app to SQL Database
+> * Connect an ASP.NET Core app to SQL Database and run [database migrations](/ef/core/managing-schemas/migrations)
> * Deploy the app to Azure > * Update the data model and redeploy the app > * Stream diagnostic logs from Azure
-> * Manage the app in the Azure portal
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
In this tutorial, you learn how to:
To complete this tutorial: - <a href="https://git-scm.com/" target="_blank">Install Git</a>-- <a href="https://dotnet.microsoft.com/download/dotnet/5.0" target="_blank">Install the latest .NET 5.0 SDK</a>
+- <a href="https://dotnet.microsoft.com/download/dotnet/6.0" target="_blank">Install the latest .NET 6.0 SDK</a>
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
In this step, you set up the local ASP.NET Core project.
cd dotnetcore-sqldb-tutorial ```
- The sample project contains a basic CRUD (create-read-update-delete) app using [Entity Framework Core](/ef/core/).
+ The sample project contains a basic CRUD (create-read-update-delete) app using ASP.NET Core 6.0 and [Entity Framework Core](/ef/core/).
1. Make sure the default branch is `main`.
In this step, you set up the local ASP.NET Core project.
### Run the application
-1. Run the following commands to install the required packages, run database migrations, and start the application.
+1. Run the following commands to install the [EF Core tools](/ef/core/cli/), run [database migrations](/ef/core/managing-schemas/migrations), and start the application.
```bash dotnet tool install -g dotnet-ef
This is the connection string for your ASP.NET Core app. Copy it for use later.
In your local repository, open Startup.cs and find the following code: ```csharp
-services.AddDbContext<MyDatabaseContext>(options =>
- options.UseSqlite("Data Source=localdatabase.db"));
+builder.Services.AddDbContext<MyDatabaseContext>(options =>
+ options.UseSqlite("Data Source=localdatabase.db"));
``` Replace it with the following code. ```csharp
-services.AddDbContext<MyDatabaseContext>(options =>
- options.UseSqlServer(Configuration.GetConnectionString("MyDbConnection")));
+builder.Services.AddDbContext<MyDatabaseContext>(options =>
+ options.UseSqlServer(builder.Configuration.GetConnectionString("MyDbConnection")));
``` > [!IMPORTANT]
While the ASP.NET Core app runs in Azure App Service, you can get the console lo
The sample project already follows the guidance for the [Azure App Service logging provider](/dotnet/core/extensions/logging-providers#azure-app-service) with two configuration changes: - Includes a reference to `Microsoft.Extensions.Logging.AzureAppServices` in *DotNetCoreSqlDb.csproj*.-- Calls `loggerFactory.AddAzureWebAppDiagnostics()` in *Program.cs*.
+- Calls `builder.Logging.AddAzureWebAppDiagnostics()` in *Program.cs*.
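A minimal sketch of what those two changes amount to in a .NET 6 minimal-hosting app (assumed shape; the sample project wires this up in its own *Program.cs*):

```csharp
// Hedged sketch (assumes the web SDK's implicit usings and a package
// reference to Microsoft.Extensions.Logging.AzureAppServices).
var builder = WebApplication.CreateBuilder(args);
builder.Logging.AddAzureWebAppDiagnostics();  // routes ILogger output to App Service logs

var app = builder.Build();
app.MapGet("/", (ILogger<Program> logger) =>
{
    logger.LogInformation("Home page requested");  // visible once the level is Information
    return "Hello from App Service";
});
app.Run();
```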
1. To set the ASP.NET Core [log level](/dotnet/core/extensions/logging#log-level) in App Service to `Information` from the default level `Error`, use the [`az webapp log config`](/cli/azure/webapp/log#az_webapp_log_config) command in the Cloud Shell.
What you learned:
> [!div class="checklist"] > * Create a SQL Database in Azure
-> * Connect a ASP.NET Core app to SQL Database
+> * Connect an ASP.NET Core app to SQL Database and run [database migrations](/ef/core/managing-schemas/migrations)
> * Deploy the app to Azure > * Update the data model and redeploy the app
-> * Stream logs from Azure to your terminal
-> * Manage the app in the Azure portal
+> * Stream diagnostic logs from Azure
Advance to the next tutorial to learn how to map a custom DNS name to your app.
Advance to the next tutorial to learn how to map a custom DNS name to your app.
Or, check out other resources:
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to SQL Database from App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md)
+ > [!div class="nextstepaction"] > [Configure ASP.NET Core app](configure-language-dotnetcore.md)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
Previously updated : 11/30/2021 Last updated : 01/31/2022
Application Gateway integration with Key Vault offers many benefits, including:
- Stronger security, because TLS/SSL certificates aren't directly handled by the application development team. Integration allows a separate security team to: * Set up application gateways. * Control application gateway lifecycles.
- * Grant permissions to selected application gateways to access certificates that are stored in your key vault.
-- Support for importing existing certificates into your key vault. Or use Key Vault APIs to create and manage new certificates with any of the trusted Key Vault partners.-- Support for automatic renewal of certificates that are stored in your key vault.
+ * Grant permissions to selected application gateways to access certificates that are stored in your Key Vault.
+- Support for importing existing certificates into your Key Vault. Or use Key Vault APIs to create and manage new certificates with any of the trusted Key Vault partners.
+- Support for automatic renewal of certificates that are stored in your Key Vault.
## Supported certificates
-Application Gateway currently supports software-validated certificates only. Hardware security module (HSM)-validated certificates are not supported.
+Application Gateway currently supports software-validated certificates only. Hardware security module (HSM)-validated certificates aren't supported.
After Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install it locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if it exists. If an updated certificate is found, the TLS/SSL certificate that's currently associated with the HTTPS listener is automatically rotated. > [!TIP] > Any change to Application Gateway will force a check against Key Vault to see if any new versions of certificates are available. This includes, but isn't limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate will immediately be presented.
-Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your key vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`.
+Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your Key Vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`.
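As a hedged illustration with the Key Vault Secrets SDK (the vault and secret names are the placeholders from the example above), you can see both forms of the identifier:

```csharp
// Sketch: secret.Id carries the version; trimming the trailing segment yields
// the versionless URI that lets Application Gateway pick up rotated certificates.
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://myvault.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = client.GetSecret("mysecret");
Console.WriteLine(secret.Id);                                   // .../secrets/mysecret/<version>
Console.WriteLine($"{client.VaultUri}secrets/{secret.Name}/");  // versionless form
```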
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates). > [!WARNING]
-> Azure Application Gateway currently supports only Key Vault accounts in the same subscription as the Application Gateway resource. Choosing a key vault under a different subscription than your Application Gateway will result in a failure.
+> Azure Application Gateway currently supports only Key Vault accounts in the same subscription as the Application Gateway resource. Choosing a Key Vault under a different subscription than your Application Gateway will result in a failure.
## Certificate settings in Key Vault
-For TLS termination, Application Gateway only supports certificates in Personal Information Exchange (PFX) format. You can either import an existing certificate or create a new one in your key vault. To avoid any failures, ensure that the certificate's status is set to **Enabled** in Key Vault.
+For TLS termination, Application Gateway only supports certificates in Personal Information Exchange (PFX) format. You can either import an existing certificate or create a new one in your Key Vault. To avoid any failures, ensure that the certificate's status is set to **Enabled** in Key Vault.
## How integration works
You can either create a new user-assigned managed identity or reuse an existing
### Delegate user-assigned managed identity to Key Vault
-Define access policies to use the user-assigned managed identity with your key vault:
+Define access policies to use the user-assigned managed identity with your Key Vault:
1. In the Azure portal, go to **Key Vault**.
-1. Select the key vault that contains your certificate.
+1. Select the Key Vault that contains your certificate.
1. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
- If you're using the permission model **Azure role-based access control**: Select **Access control (IAM)** and [Add a role assignment](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#assign-a-role-to-a-user-assigned-managed-identity) for the user-assigned managed identity to the Azure key vault for the role **Key Vault Secrets User**.
+ If you're using the permission model **Azure role-based access control**: Select **Access control (IAM)** and [Add a role assignment](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#assign-a-role-to-a-user-assigned-managed-identity) for the user-assigned managed identity to the Azure Key Vault for the role **Key Vault Secrets User**.
### Verify Firewall Permissions to Key Vault
-As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted service by leveraging User Managed Identities for authentication to Azure Key Vault. With the use of service endpoints and enabling the trusted services option for key vault's firewall, you can build a secure network boundary in Azure. You can deny access to traffic from all networks (including internet traffic) to Key Vault but still make Key Vault accessible for an Application Gateway resource under your subscription.
+As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted service by leveraging User Managed Identities for authentication to Azure Key Vault. By using service endpoints and enabling the trusted services option for Key Vault's firewall, you can build a secure network boundary in Azure. You can deny access to traffic from all networks (including internet traffic) to Key Vault but still make Key Vault accessible for an Application Gateway resource under your subscription.
-When you're using a restricted key vault, use the following steps to configure Application Gateway to use firewalls and virtual networks:
+When you're using a restricted Key Vault, use the following steps to configure Application Gateway to use firewalls and virtual networks:
-1. In the Azure portal, in your key vault, select **Networking**.
-1. On the **Firewalls and virtual networks** tab, select **Private endpoint and selected networks**.
+> [!TIP]
+> The following steps are not required if your Key Vault has a Private Endpoint enabled. The application gateway can access the Key Vault using the private IP address.
+
+1. In the Azure portal, in your Key Vault, select **Networking**.
+1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. During the process, also configure the `Microsoft.KeyVault` service endpoint by selecting its checkbox.
-1. Select **Yes** to allow trusted services to bypass the key vault's firewall.
-
-![Screenshot that shows selections for configuring Application Gateway to use firewalls and virtual networks.](media/key-vault-certs/key-vault-firewall.png)
+1. Select **Yes** to allow trusted services to bypass the Key Vault's firewall.
+
+ ![Screenshot that shows selections for configuring Application Gateway to use firewalls and virtual networks.](media/key-vault-certs/key-vault-firewall.png)
> [!Note]
-> If you deploy the Application Gateway instance via an ARM template by using either the Azure CLI or PowerShell, or via an Azure application deployed from the Azure portal, the SSL certificate is stored in the key vault as a Base64-encoded PFX file. You must complete the steps in [Use Azure Key Vault to pass secure parameter value during deployment](../azure-resource-manager/templates/key-vault-parameter.md).
+> If you deploy the Application Gateway instance via an ARM template by using either the Azure CLI or PowerShell, or via an Azure application deployed from the Azure portal, the SSL certificate is stored in the Key Vault as a Base64-encoded PFX file. You must complete the steps in [Use Azure Key Vault to pass secure parameter value during deployment](../azure-resource-manager/templates/key-vault-parameter.md).
> > It's particularly important to set `enabledForTemplateDeployment` to `true`. The certificate might or might not have a password. In the case of a certificate with a password, the following example shows a possible configuration for the `sslCertificates` entry in `properties` for the ARM template configuration for Application Gateway. >
When you're using a restricted key vault, use the following steps to configure A
> ] > ``` >
-> The values of `appGatewaySSLCertificateData` and `appGatewaySSLCertificatePassword` are looked up from the key vault, as described in [Reference secrets with dynamic ID](../azure-resource-manager/templates/key-vault-parameter.md#reference-secrets-with-dynamic-id). Follow the references backward from `parameters('secretName')` to see how the lookup happens. If the certificate is passwordless, omit the `password` entry.
+> The values of `appGatewaySSLCertificateData` and `appGatewaySSLCertificatePassword` are looked up from the Key Vault, as described in [Reference secrets with dynamic ID](../azure-resource-manager/templates/key-vault-parameter.md#reference-secrets-with-dynamic-id). Follow the references backward from `parameters('secretName')` to see how the lookup happens. If the certificate is passwordless, omit the `password` entry.
### Configure Application Gateway Listener
Navigate to your Application Gateway in the Azure portal and select the **Listen
Under **Choose a certificate**, select **Create new** and then select **Choose a certificate from Key Vault** under **Https settings**.
-For Cert name, type a friendly name for the certificate to be referenced in Key Vault. Choose your Managed identity, Key vault, and Certificate.
+For **Cert name**, type a friendly name for the certificate to be referenced in Key Vault. Choose your **Managed identity**, **Key Vault**, and **Certificate**.
Once selected, select **Add** (if creating) or **Save** (if editing) to apply the referenced Key Vault certificate to the listener.

#### Key Vault Azure role-based access control permission model
-Application Gateway supports certificates referenced in Key Vault via the Role-based access control permission model. The first few steps to reference the key vault must be completed via ARM, Bicep, CLI, or PowerShell.
+Application Gateway supports certificates referenced in Key Vault via the Role-based access control permission model. The first few steps to reference the Key Vault must be completed via ARM template, Bicep, CLI, or PowerShell.
> [!Note] > Specifying Azure Key Vault certificates that are subject to the role-based access control permission model is not supported via the portal.
-In this example, we will use PowerShell to reference a new Key Vault certificate.
+In this example, we'll use PowerShell to reference a new Key Vault certificate.
``` # Get the Application Gateway we want to modify $appgw = Get-AzApplicationGateway -Name MyApplicationGateway -ResourceGroupName MyResourceGroup # Specify the resource id to the user assigned managed identity - This can be found by going to the properties of the managed identity Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw -UserAssignedIdentityId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyManagedIdentity"
-# Get the secret ID from key vault
+# Get the secret ID from Key Vault
$secret = Get-AzKeyVaultSecret -VaultName "MyKeyVault" -Name "CertificateName" $secretId = $secret.Id # https://<keyvaultname>.vault.azure.net/secrets/<hash>
-# Specify the secret ID from key vault
+# Specify the secret ID from Key Vault
Add-AzApplicationGatewaySslCertificate -KeyVaultSecretId $secretId -ApplicationGateway $appgw -Name $secret.Name # Commit the changes to the Application Gateway Set-AzApplicationGateway -ApplicationGateway $appgw
Set-AzApplicationGateway -ApplicationGateway $appgw
Once the commands have been executed, you can navigate to your Application Gateway in the Azure portal and select the **Listeners** tab. Click **Add Listener** (or select an existing one) and set the **Protocol** to HTTPS.
-Under *Choose a certificate* select the certificate named in the previous steps. Once selected, select *Add* (if creating) or *Save* (if editing) to apply the referenced Key Vault certificate to the listener.
+Under **Choose a certificate**, select the certificate named in the previous steps. Once selected, select *Add* (if creating) or *Save* (if editing) to apply the referenced Key Vault certificate to the listener.
## Investigating and resolving Key Vault errors
automanage Arm Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/arm-deploy.md
Last updated 12/10/2021
## Overview
-Follow the steps below to onboard a machine to Automanage Best Practices using an ARM template.
+Follow the steps to onboard a machine to Automanage Best Practices using an ARM template.
## Prerequisites
-* You must have necessary [RBAC permissions](./automanage-virtual-machines.md#required-rbac-permissions)
+* You must have the necessary [Role-based access control permissions](./automanage-virtual-machines.md#required-rbac-permissions)
* You must be in a supported region and supported VM image highlighted in these [prerequisites](./automanage-virtual-machines.md#prerequisites)

## ARM template overview
-The following ARM template will onboard your specified machine onto Azure Automanage Best Practices. Details on the ARM template and steps on how to deploy are located in the ARM template deployment section [below](#arm-template-deployment).
+The following ARM template will onboard your specified machine onto Azure Automanage Best Practices. Details on the ARM template and steps on how to deploy are located in the ARM template deployment [section](#arm-template-deployment).
```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
The following ARM template will onboard your specified machine onto Azure Automa
"apiVersion": "2021-04-30-preview", "name": "[concat(parameters('machineName'), '/Microsoft.Automanage/default')]", "properties": {
- "configurationProfile": "[parameters('configurationProfile')]",
+ "configurationProfile": "[parameters('configurationProfile')]"
} } ]
The following ARM template will onboard your specified machine onto Azure Automa
``` ## ARM template deployment
-The ARM template above will create a configuration profile assignment for your specified machine.
+This ARM template will create a configuration profile assignment for your specified machine.
The `configurationProfile` value can be one of the following values: * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction" * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesDevTest" Follow these steps to deploy the ARM template:
-1. Save the ARM template above as `azuredeploy.json`
-1. Run the ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`
-1. Provide the values for machineName, automanageAccountName, and configurationProfileAssignment when prompted
-1. You are done!
+1. Save this ARM template as `azuredeploy.json`
+1. Run this ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`
+1. Provide the values for machineName and configurationProfileAssignment when prompted
+1. You're ready to deploy
-As with any ARM template, it is possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
+As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
## Next steps Learn more about Automanage for [Linux](./automanage-linux.md) and [Windows](./automanage-windows-server.md)
automanage Virtual Machines Policy Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/virtual-machines-policy-enable.md
If you don't have an Azure subscription, [create an account](https://azure.micro
> The following Azure RBAC permission is needed to enable Automanage: **Owner** role or **Contributor** along with **User Access Administrator** roles.

## Direct link to Policy
-The Automanage policy definition can be found in the Azure portal by the name of [Configure virtual machines to be onboarded to Azure Automanage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F270610db-8c04-438a-a739-e8e6745b22d3). If you click on this link, skip directly to step 8 in [Locate and assign the policy](#locate-and-assign-the-policy) below.
+The Automanage policy definition can be found in the Azure portal by the name of [Configure virtual machines to be onboarded to Azure Automanage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c550). If you click on this link, skip directly to step 8 in [Locate and assign the policy](#locate-and-assign-the-policy) below.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to **Policy** in the Azure portal 1. Go to the **Definitions** pane 1. Click the **Categories** dropdown to see the available options
-1. Select the **Enable Automanage ΓÇô Azure virtual machine best practices** option
-1. Now the list will update to show a built-in policy with a name that starts with *Enable Automanage…*
-1. Click on the *Enable Automanage - Azure virtual machine best practices* built-in policy name
+1. Select the **Automanage** option
+1. Now the list will update to show a built-in policy with a name that starts with *Configure virtual machines to be onboarded to Azure Automanage*
+1. Click on the *Configure virtual machines to be onboarded to Azure Automanage* built-in policy name
1. After clicking on the policy, you can now see the **Definition** tab > [!NOTE]
- > The Azure Policy definition is used to set Automanage parameters like the configuration profile or the account. It also sets filters that ensure the policy applies only to the correct VMs.
+ > The Azure Policy definition is used to set Automanage parameters like the configuration profile. It also sets filters that ensure the policy applies only to the correct VMs.
1. Click the **Assign** button to create an Assignment 1. Under the **Basics** tab, fill out **Scope** by setting the *Subscription* and *Resource Group*
Sign in to the [Azure portal](https://portal.azure.com/).
> [!NOTE] > The Scope lets you define which VMs this policy applies to. You can apply it at the subscription level or the resource group level. If you set a resource group, all VMs currently in that resource group, and any future VMs added to it, will have Automanage automatically enabled.
-1. Click on the **Parameters** tab and set the **Automanage Account** and the desired **Configuration Profile**
+1. Click on the **Parameters** tab and set the **Configuration Profile** and the desired **Effect**
1. Under the **Review + create** tab, review the settings 1. Apply the Assignment by clicking **Create** 1. View your assignments in the **Assignments** tab next to **Definition**
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/add-user-assigned-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- Windows Hybrid Runbook Worker: version 7.3.1125.0 - Linux Hybrid Runbook Worker: version 1.7.4.0
+- To assign an Azure role, you must have `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner).
+
## Add user-assigned managed identity for Azure Automation account

You can add a user-assigned managed identity for an Azure Automation account using the Azure portal, PowerShell, the Azure REST API, or an ARM template. For the examples involving PowerShell, first sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
Perform the following steps.
The output will look similar to the output shown for the REST API example, above.
-## Give identity access to Azure resources by obtaining a token
+## Assign a role to a user-assigned managed identity
An Automation account can use its user-assigned managed identity to obtain tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens don't represent any specific user of the application. Instead, they represent the application that is accessing the resource. In this case, for example, the token represents an Automation account.
New-AzRoleAssignment `
-RoleDefinitionName "Contributor" ```
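Once the role is assigned, here's a hedged sketch of the token flow that this section opened with. The client ID is a placeholder, and the code only runs where the identity is available, such as an Azure sandbox or a Hybrid Runbook Worker:

```csharp
// Sketch: a user-assigned managed identity requests a Key Vault token.
// In a PowerShell runbook you'd typically use
// Connect-AzAccount -Identity -AccountId <client-id> instead.
using Azure.Core;
using Azure.Identity;

var credential = new ManagedIdentityCredential(clientId: "<client-id>");
AccessToken token = credential.GetToken(
    new TokenRequestContext(new[] { "https://vault.azure.net/.default" }));
Console.WriteLine($"Key Vault token expires {token.ExpiresOn:u}");
```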
+## Verify role assignment to a user-assigned managed identity
+
+To verify a role assignment to the user-assigned managed identity of the Automation account, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Automation account.
+1. Under **Account Settings**, select **Identity**, **User assigned**.
+1. Click **User assigned identity name**.
+
+ :::image type="content" source="media/add-user-assigned-identity/user-assigned-main-screen-inline.png" alt-text="Assigning role in user-assigned identity in Azure portal." lightbox="media/add-user-assigned-identity/user-assigned-main-screen-expanded.png":::
+
+ If the roles are already assigned to the selected user-assigned managed identity, you can see a list of role assignments. This list includes all the role assignments you have permission to read.
+
+ :::image type="content" source="media/add-user-assigned-identity/user-assigned-role-assignments-inline.png" alt-text="View role assignments that you have permission to read in the Azure portal." lightbox="media/add-user-assigned-identity/user-assigned-role-assignments-expanded.png":::
+
+1. To change the subscription, click the **Subscription** drop-down list and select the appropriate subscription.
+1. Click **Add role assignment (Preview)**.
+1. In the drop-down list, select the set of resources that the role assignment applies to: **Subscription**, **Resource group**, **Role**, and **Scope**. <br> If you don't have the role assignment, you can view the write permissions for the selected scope as an inline message.
+1. In the **Role** drop-down list, select a role such as *Virtual Machine Contributor*.
+1. Click **Save**.
+
+ :::image type="content" source="media/managed-identity/add-role-assignment-inline.png" alt-text="Add a role assignment in Azure portal." lightbox="media/managed-identity/add-role-assignment-expanded.png":::
+
+After a few minutes, the managed identity is assigned the role at the selected scope.
+
## Authenticate access with user-assigned managed identity

After you enable the user-assigned managed identity for your Automation account and give the identity access to the target resource, you can specify that identity in runbooks against resources that support managed identity. For identity support, use the Az cmdlet [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount).
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-standalone-account.md
When the Automation account is successfully created, several resources are autom
|AzureAutomationTutorialPython2Runbook |An example Python runbook that demonstrates how to authenticate by using a Run As account. The runbook lists all resource groups present in the subscription.| > [!NOTE]
-> The tutorial runbooks have not been updated to authenticate using a managed identity. Review the [Using system-assigned identity](enable-managed-identity-for-automation.md#give-access-to-azure-resources-by-obtaining-a-token) or [Using user-assigned identity](add-user-assigned-identity.md#give-identity-access-to-azure-resources-by-obtaining-a-token) to learn how to grant the managed identity access to resources and configure your runbooks to authenticate using either type of managed identity.
+> The tutorial runbooks have not been updated to authenticate using a managed identity. Review the [Using system-assigned identity](enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) or [Using user-assigned identity](add-user-assigned-identity.md#assign-a-role-to-a-user-assigned-managed-identity) to learn how to grant the managed identity access to resources and configure your runbooks to authenticate using either type of managed identity.
## Next steps
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- Windows Hybrid Runbook Worker: version 7.3.1125.0 - Linux Hybrid Runbook Worker: version 1.7.4.0
+- To assign an Azure role, you must have `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner).
+
+
## Enable a system-assigned managed identity for an Azure Automation account

Once enabled, the following properties will be assigned to the system-assigned managed identity.
Perform the following steps.
The output will look similar to the output shown for the REST API example, above.
-## Give access to Azure resources by obtaining a token
+## Assign role to a system-assigned managed identity
An Automation account can use its system-assigned managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens don't represent any specific user of the application. Instead, they represent the application that's accessing the resource. In this case, for example, the token represents an Automation account.
New-AzRoleAssignment `
-RoleDefinitionName "Contributor" ```
+## Verify role assignment to a system-assigned managed identity
+
+To verify a role assignment to the system-assigned managed identity of the Automation account, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Automation account.
+1. Under **Account Settings**, select **Identity**.
+
+ :::image type="content" source="media/managed-identity/system-assigned-main-screen-inline.png" alt-text="Assigning role in system-assigned identity in Azure portal." lightbox="media/managed-identity/system-assigned-main-screen-expanded.png":::
+
+1. Under **Permissions**, click **Azure role assignments**.
+
+ If the roles are already assigned to the selected system-assigned managed identity, you can see a list of role assignments. This list includes all the role assignments you have permission to read.
+
+ :::image type="content" source="media/managed-identity/role-assignments-view-inline.png" alt-text="View role assignments that you have permission to read in the Azure portal." lightbox="media/managed-identity/role-assignments-view-expanded.png":::
+
+1. To change the subscription, click the **Subscription** drop-down list and select the appropriate subscription.
+1. Click **Add role assignment (Preview)**.
+1. In the drop-down list, select the set of resources that the role assignment applies to: **Subscription**, **Resource group**, **Role**, and **Scope**. <br> If you don't have the role assignment, you can view the write permissions for the selected scope as an inline message.
+1. In the **Role** drop-down list, select a role such as *Virtual Machine Contributor*.
+1. Click **Save**.
+
+ :::image type="content" source="media/managed-identity/add-role-assignment-inline.png" alt-text="Add a role assignment in Azure portal." lightbox="media/managed-identity/add-role-assignment-expanded.png":::
+
+After a few minutes, the managed identity is assigned the role at the selected scope.
+
## Authenticate access with system-assigned managed identity

After you enable the managed identity for your Automation account and give the identity access to the target resource, you can specify that identity in runbooks against resources that support managed identity. For identity support, use the Az cmdlet `Connect-AzAccount`. See [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) in the PowerShell reference.
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/extension-based-hybrid-runbook-worker-install.md
If you use a firewall to restrict access to the Internet, you must configure the
|Global URL |*.azure-automation.net| |Global URL of US Gov Virginia |*.azure-automation.us|
+### CPU quota limit
+There's a CPU quota limit of 5% when configuring the extension-based Linux Hybrid Runbook Worker. There's no such limit for the Windows Hybrid Runbook Worker.
+ ## Create hybrid worker group
+You can create a hybrid worker group via the Azure portal. Currently, creating one through an ARM template isn't supported.
+ Perform the following steps to create a hybrid worker group in the Azure portal. 1. Sign in to the [Azure portal](https://portal.azure.com).
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/privacy-data-collection-and-reporting.md
Operational data is collected for all database instances and for the Azure Arc-e
The operational data stored locally requires built-in administrative privileges to view in Grafana/Kibana.
-The operational data does not leave yous environment unless you chooses to export/upload (indirect connected mode) or automatically send (directly connected mode) the data to Azure Monitor/Log Analytics. The data goes into a Log Analytics workspace, which you control.
+The operational data doesn't leave your environment unless you choose to export/upload it (indirect connected mode) or automatically send it (directly connected mode) to Azure Monitor/Log Analytics. The data goes into a Log Analytics workspace, which you control.
If the data is sent to Azure Monitor or Log Analytics, you can choose which Azure region or datacenter the Log Analytics workspace resides in. After that, access to view or copy it from other locations can be controlled by you.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
-| [Multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
+| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (formerly Azure Sentinel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Multi-factor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
Learn more about Azure Government:
- [DoD Impact Level 4](/azure/compliance/offerings/offering-dod-il4) - [DoD Impact Level 5](/azure/compliance/offerings/offering-dod-il5) - [DoD Impact Level 6](/azure/compliance/offerings/offering-dod-il6)-- [Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md)
- [Azure guidance for secure isolation](../azure-secure-isolation-guidance.md)
azure-government Documentation Government Concept Naming Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-concept-naming-resources.md
Title: Considerations for naming Azure resources | Microsoft Docs
-description: Guidance on how customers should consider naming their Azure resources to prevent accidental spillage of sensitive data
-
-cloud: gov
--
+ Title: Considerations for naming Azure resources
+description: Guidance on naming Azure resources to prevent accidental spillage of sensitive data.
- Previously updated : 04/14/2021-+++
+recommendations: false
Last updated : 01/28/2022 + # Considerations for naming Azure resources
-Customers should not include sensitive or restricted information in Azure resource names because it may be stored or accessed outside the compliance boundary to facilitate support and troubleshooting. Examples of sensitive information include data subject to:
+You shouldn't include sensitive or restricted information in Azure resource names because such information may be stored or accessed outside the compliance boundary to facilitate support and troubleshooting. Examples of sensitive information include data subject to:
- [Export control laws](./documentation-government-overview-itar.md) - [DoD Impact Level 5 isolation requirements](./documentation-government-impact-level-5.md) - [Controlled Unclassified Information](/azure/compliance/offerings/offering-nist-800-171) (CUI) that warrants extra protection or is subject to NOFORN marking - And others
-Data stored or processed in customer VMs, storage accounts, databases, Azure Import/Export, Azure Cache for Redis, ExpressRoute, Azure Cognitive Search, App Service, API Management, and other Azure services suitable for holding, processing, or transmitting customer data can contain sensitive data. However, metadata for these Azure services is not permitted to contain sensitive or restricted data. This metadata includes all configuration data entered when creating and maintaining an Azure service, including:
+Data stored or processed in customer VMs, storage accounts, databases, Azure Import/Export, Azure Cache for Redis, ExpressRoute, Azure Cognitive Search, App Service, API Management, and other Azure services suitable for holding, processing, or transmitting customer data can contain sensitive data. However, metadata for these Azure services isn't permitted to contain sensitive or restricted data. This metadata includes all configuration data entered when creating and maintaining an Azure service, including:
- Subscription names, service names, server names, database names, tenant role names, resource groups, deployment names, resource names, resource tags, circuit name, and so on. - All shipping information that is used to transport media for Azure Import/Export, such as carrier name, tracking number, description, return information, drive list, package list, storage account name, container name, and so on. - Data in HTTP headers sent to the REST API in search/query strings as part of the API. - Device/policy/application and [other metadata](/mem/intune/protect/privacy-data-collect) sent to Intune.
-Azure resource names include information provided by you, or on your behalf, that is used to identify or configure cloud service resources, such as software, systems, or containers. However, it does **not** include customer-created content or metadata inside the resource (for example, database column/table names). Azure resource names include the names a customer assigns to Azure Resource Manager level objects and resources deployed in Azure. Examples include the names of resources such as virtual networks, virtual hard disks, database servers and databases, virtual network interface, network security groups, key vaults, and others.
+Azure resource names include information provided by you, or on your behalf, that is used to identify or configure cloud service resources, such as software, systems, or containers. However, it does **not** include customer-created content or metadata inside the resource (for example, database column/table names). Azure resource names include the names you assign to Azure Resource Manager level objects and resources deployed in Azure. Examples include the names of resources such as virtual networks, virtual hard disks, database servers and databases, virtual network interface, network security groups, key vaults, and others.
->[!NOTE]
->The above examples are but a subset of the types of resources customers can name. This list is not meant to be fully exhaustive and the types of resources could change in the future as new cloud services are added.
+> [!NOTE]
+> The above examples are but a subset of the types of resources you can name. This list is not meant to be fully exhaustive and the types of resources could change in the future as new cloud services are added.
> ## Naming convention
An example of a virtual machine resource ID is:
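```
/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>
```

The angle-bracketed segments are placeholders; note that the resource group and virtual machine names you assign appear verbatim in the full resource ID.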
## Naming considerations
-Customers should avoid names that are sensitive to business or mission functions. This guidance applies to all names that meet the criteria above, from the name of the larger resource group to the name of the end resources within it. Customers should also avoid names that indicate customer regulatory requirements, for example:
+You should avoid names that are sensitive to business or mission functions. This guidance applies to all names that meet the criteria mentioned previously, from the name of the larger resource group to the name of the end resources within it. You should also avoid names that indicate your regulatory requirements, for example:
-- [EAR](/azure/compliance/offerings/offering-ear)-- [ITAR](/azure/compliance/offerings/offering-itar)-- [CNSSI 1253](/azure/compliance/offerings/offering-cnssi-1253)-- [CJIS](/azure/compliance/offerings/offering-cjis)-- [IRS 1075](/azure/compliance/offerings/offering-irs-1075)
+- [Criminal Justice Information Services (CJIS)](/azure/compliance/offerings/offering-cjis)
+- [Committee on National Security Systems Instruction No. 1253 (CNSSI 1253)](/azure/compliance/offerings/offering-cnssi-1253)
+- [Internal Revenue Service (IRS) Publication 1075](/azure/compliance/offerings/offering-irs-1075)
+- [Export Administration Regulations (EAR)](/azure/compliance/offerings/offering-ear)
+- [International Traffic in Arms Regulations (ITAR)](/azure/compliance/offerings/offering-itar)
- And others as applicable
->[!NOTE]
->Also consider naming of resource tags when reviewing the **[Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).**
+> [!NOTE]
+> Also consider naming of resource tags when reviewing the **[Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).**
-Customers should understand and take into account the resource naming convention to help ensure operational security, as Microsoft personnel could use the full resource ID in the following example scenarios:
+You should understand and take into account the resource naming convention to help ensure operational security, as Microsoft personnel could use the full resource ID in the following example scenarios:
- Microsoft support personnel may use the full resource ID of resources during support events to ensure we're identifying the right resource within a customer's subscription. - Microsoft product engineering personnel could use full resource IDs during routine monitoring of telemetry data to identify deviations from baseline or average system performance. - Proactive communication to customers about impacted resources during internally discovered incidents.+
+## Next steps
+
+- [Develop your naming and tagging strategy for Azure resources](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging)
+- [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure compliance](../compliance/index.yml)
+- [Azure and other Microsoft services compliance offerings](/azure/compliance/offerings/)
azure-government Documentation Government Plan Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-compliance.md
recommendations: false Previously updated : 01/26/2022 Last updated : 01/28/2022 # Azure Government compliance
For current Azure Government regions and available services, see [Products avail
> - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).** > - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](./documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+## Services in audit scope
+
+For a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform services in FedRAMP and DoD compliance audit scope, see:
+
+- [Azure public services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-public-services-by-audit-scope)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+ ## Audit documentation You can access Azure and Azure Government audit reports and related documentation via the [Service Trust Portal](https://servicetrust.microsoft.com) (STP) in the following sections:
You must have an existing subscription or free trial account in [Azure](https://
## Azure Policy regulatory compliance built-in initiatives
-For additional customer assistance, Microsoft provides the Azure Policy regulatory compliance built-in initiatives, which map to **compliance domains** and **controls** in key US government standards:
+For additional customer assistance, Microsoft provides **Azure Policy regulatory compliance built-in initiatives**, which map to **compliance domains** and **controls** in key US government standards, including:
- [FedRAMP High](../governance/policy/samples/gov-fedramp-high.md) - [DoD IL4](../governance/policy/samples/gov-dod-impact-level-4.md) - [DoD IL5](../governance/policy/samples/gov-dod-impact-level-5.md)
-For additional regulatory compliance built-in initiatives that pertain to Azure Government, see [Azure Policy samples](../governance/policy/samples/index.md).
+For additional regulatory compliance built-in initiatives that pertain to Azure Government, see [Azure Policy samples](../governance/policy/samples/index.md#regulatory-compliance).
-Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility - customer, Microsoft, or shared. For Microsoft-responsible controls, we provide additional audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](/azure/governance/policy/how-to/get-compliance-data) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
+Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility - customer, Microsoft, or shared. For Microsoft-responsible controls, we provide additional audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](../governance/policy/how-to/get-compliance-data.md) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
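As an illustrative sketch of retrieving compliance data programmatically, the `Az.PolicyInsights` module can list non-compliant policy states; the filter shown is one common pattern, and the selected property names assume the standard `PolicyState` output:

```powershell
# Illustrative: list policy states for non-compliant resources in the current subscription
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object ResourceId, PolicyDefinitionName, ComplianceState
```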
## Next steps - [Azure compliance](../compliance/index.yml)
+- [Azure and other Microsoft services compliance offerings](/azure/compliance/offerings/)
- [Azure Policy overview](../governance/policy/overview.md)
+- [Azure Policy regulatory compliance built-in initiatives](../governance/policy/samples/index.md#regulatory-compliance)
- [Azure Government overview](./documentation-government-welcome.md)-- [Connect with Azure Government portal](./documentation-government-get-started-connect-with-portal.md) - [Azure Government security](./documentation-government-plan-security.md) - [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) - [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)-- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
- [Azure Government DoD overview](./documentation-government-overview-dod.md)
azure-government Documentation Government Plan Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-identity.md
Title: Azure Government Identity | Microsoft Docs
-description: Microsoft Azure Government provides the same ways to build applications and manage identities as Azure Public. This article provides Planning Guidance for Identity in Azure Government.
-
-cloud: gov
-
+ Title: Azure Government Identity
+description: Microsoft Azure Government provides the same ways to build applications and manage identities as Azure Public. This article provides planning guidance for identity management in Azure Government.
- Previously updated : 10/20/2017-
+recommendations: false
Last updated : 01/28/2022 + # Planning identity for Azure Government applications Microsoft Azure Government provides the same ways to build applications and manage identities as Azure Public. Azure Government customers may already have an Azure Active Directory (Azure AD) Public tenant or may create a tenant in Azure AD Government. This article provides guidance on identity decisions based on the application and location of your identity. ## Identity models
-Before determining the identity approach for your application, you need to know what identity types are available to you. There are three types: On-Premises Identity, Cloud Identity, and Hybrid Identity.
+Before determining the identity approach for your application, you need to know what identity types are available to you. There are three types: On-premises identity, Cloud identity, and Hybrid identity.
-|On-Premises Identity|Cloud Identity|Hybrid Identity
+|On-premises identity|Cloud identity|Hybrid identity|
||||
-|On-Premises Identities belong to on-premises Active Directory environments that most customers use today.|Cloud identities originate, only exist, and are managed in Azure AD.|Hybrid identities originate as on-premises identities, but become hybrid through directory synchronization to Azure AD. After directory synchronization they exist both on-premises and in the cloud, hence hybrid.|
+|On-premises identities belong to on-premises Active Directory environments that most customers use today.|Cloud identities originate, exist only, and are managed in Azure AD.|Hybrid identities originate as on-premises identities, but become hybrid through directory synchronization to Azure AD. After directory synchronization, they exist both on-premises and in the cloud, hence hybrid.|
->[!NOTE]
->Hybrid comes with deployment options (Synchronized Identity, Federated Identity, etc.) that all rely on directory synchronization and mostly define how identities are authenticated as discussed in [Choose a Hybrid Identity Solution](../active-directory/hybrid/whatis-hybrid-identity.md).
+> [!NOTE]
+> Hybrid comes with deployment options (synchronized identity, federated identity, and so on) that all rely on directory synchronization and mostly define how identities are authenticated as discussed in [What is hybrid identity with Azure Active Directory?](../active-directory/hybrid/whatis-hybrid-identity.md).
> ## Selecting identity for an Azure Government application
-When building any Azure application, a developer must first decide on the authentication technology:
-- **Applications using modern authentication** ΓÇô Applications using OAuth, OpenID Connect, and/or other modern authentication protocols supported by Azure Active Directory. An example is a newly developed application built using PaaS technologies (**for example**, Web Sites, Cloud Database as a Service, etc.)-- **Apps using legacy authentication protocols (Kerberos/NTLM)** ΓÇô Applications typically migrated from on-premises (**for example**, Lift-n-Shift).
+When building any Azure application, you must first decide on the authentication technology:
-Based on this decision there are different considerations when building in Azure Government.
+- **Applications using modern authentication** - Applications using OAuth, OpenID Connect, and/or other modern authentication protocols supported by Azure AD, such as newly developed applications built using PaaS technologies (for example, Web Apps, Azure SQL Database, and so on).
+- **Applications using legacy authentication protocols (Kerberos/NTLM)** - Applications typically migrated from on-premises (for example, lift-and-shift applications).
+
+Based on this decision, there are different considerations when building and deploying on Azure Government.
### Applications using modern authentication in Azure Government
-[Integrating Applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md) shows how you can use Azure AD to provide secure sign-in and authorization to your applications. This process is the same for Azure Public and Azure Government once you choose your identity authority.
+
+[Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md) shows how you can use Azure AD to provide secure sign-in and authorization to your applications. This process is the same for Azure Public and Azure Government once you choose your identity authority.
#### Choosing your identity authority
-Azure Government applications can use Azure AD Government identities, but can you use Azure AD Public identities to authenticate to an application hosted in Azure Government? Yes! Since you can use either identity authority, you need to choose which to use:
+
+Azure Government applications can use Azure AD Government identities, but can you use Azure AD Public identities to authenticate to an application hosted in Azure Government? Yes! Since you can use either identity authority, you need to choose which to use:
- **Azure AD Public** - Commonly used if your organization already has an Azure AD Public tenant to support Office 365 (Public or GCC) or another application. - **Azure AD Government** - Commonly used if your organization already has an Azure AD Government tenant to support Office 365 (GCC High or DoD) or is creating a new tenant in Azure AD Government.
-Once decided, the special consideration is where you perform your app registration. If you choose Azure AD Public identities for your Azure Government application, you must register the application in your Azure AD Public tenant. Otherwise, if you perform the app registration in the directory the subscription trusts (Azure Government) the intended set of users cannot authenticate.
+Once you've decided, the key consideration is where you perform your app registration. If you choose Azure AD Public identities for your Azure Government application, you must register the application in your Azure AD Public tenant. Otherwise, if you perform the app registration in the directory the subscription trusts (Azure Government), the intended set of users can't authenticate.
->[!NOTE]
+> [!NOTE]
> Applications registered with Azure AD only allow sign-in from users in the Azure AD tenant the application was registered in. If you have multiple Azure AD Public tenants, it's important to know which one is intended to allow sign-ins. If you intend to allow users to authenticate to the application from multiple Azure AD tenants, the application must be registered in each tenant. >
-The other consideration is the identity authority URL. You need the correct URL based on your chosen authority:
+The other consideration is the identity authority URL. You need the correct URL based on your chosen authority:
-- **Azure AD Public** = login.microsoftonline.com-- **Azure AD Government** = login.microsoftonline.us
+|Identity authority|URL|
+|||
+|Azure AD Public|login.microsoftonline.com|
+|Azure AD Government|login.microsoftonline.us|
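As a quick illustration, Azure PowerShell targets the Azure AD Government authority when you select the corresponding cloud environment; this sketch assumes an account in an Azure AD Government tenant:

```powershell
# Sign in against the Azure AD Government authority (login.microsoftonline.us)
Connect-AzAccount -Environment AzureUSGovernment
```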
### Applications using legacy authentication protocols (Kerberos/NTLM)
-Supporting IaaS cloud-based applications dependent on NTLM/Kerberos authentication requires On-Premises Identity. The aim is to support logins for line-of-business application and other apps that require Windows Integrated authentication. Adding Active Directory domain controllers as virtual machines in Azure IaaS is the typical method to support these types of apps, shown in the following figure:
-
-<div id="imagecontainer">
-<div></div>
-<div align="center">
-![Diagram shows a site-to-site VPN connectivity example for Azure IaaS.](./media/documentation-government-plan-identity-extending-ad-to-azure-iaas.png "Extending On-Premises Active Directory Footprint to Azure IaaS")
+Supporting Infrastructure-as-a-Service (IaaS) cloud-based applications dependent on NTLM/Kerberos authentication requires on-premises identity. The aim is to support logins for line-of-business applications and other apps that require Windows integrated authentication. Adding Active Directory domain controllers as virtual machines in Azure IaaS is the typical method to support these types of apps, shown in the following figure:
-</div>
-<div></div>
-</div>
->[!NOTE]
->The preceding figure is a simple connectivity example, using site-to-site VPN. Azure ExpressRoute is another and more preferred connectivity option.
+> [!NOTE]
+> The preceding figure is a simple connectivity example, using site-to-site VPN. Azure ExpressRoute is another, preferred connectivity option.
>
-The type of domain controller to place in Azure is also a consideration based on application requirements for directory access. If applications require directory write access, deploy a standard domain controller with a writable copy of the Active Directory database. If applications only require directory read access, we recommend deploying a RODC (Read-Only Domain Controller) to Azure instead. Specifically, for RODCs we recommend following the guidance available at [Deployment Decisions and Factors for Read-Only DCs](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100).
+The type of domain controller to place in Azure is also a consideration based on application requirements for directory access. If applications require directory write access, deploy a standard domain controller with a writable copy of the Active Directory database. If applications only require directory read access, we recommend deploying a Read-Only Domain Controller (RODC) to Azure instead. Specifically, for RODCs we recommend following the guidance available at [Planning domain controller placement](/windows-server/identity/ad-ds/plan/planning-domain-controller-placement).
-We have documentation covering the guidelines for deploying AD Domain Controllers and ADFS (AD Federation Services) at these links:
+Documentation covering the guidelines for deploying Active Directory Domain Controllers and Active Directory Federation Services (ADFS) is available from:
-- [Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100)
- - Answers questions such as:
- - Is it safe to virtualize Windows Server Active Directory Domain Controllers?
- - Why deploy AD to Azure Virtual Machines?
- - Can you deploy ADFS to Azure Virtual Machines?
-- [Deploying Active Directory Federation Services in Azure](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs)
- - Provides guidance on how to deploy ADFS in Azure.
+- [Safely virtualizing Active Directory Domain Services](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100) answers questions such as:
+ - Is it safe to virtualize Windows Server Active Directory Domain Controllers?
+ - Why deploy Active Directory to Azure Virtual Machines?
+ - Can you deploy ADFS to Azure Virtual Machines?
+- [Deploying Active Directory Federation Services in Azure](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs) provides guidance on how to deploy ADFS in Azure.
## Identity scenarios for subscription administration in Azure Government
-First, see [Managing and connecting to your subscription in Azure Government](./compare-azure-government-global-azure.md), for instructions on accessing Azure Government management portals.
+
+First, see [Connect to Azure Government using portal](./documentation-government-get-started-connect-with-portal.md) for instructions on accessing the Azure Government management portal.
There are a few important points that set the foundation of this section:
+- Azure subscriptions only trust one directory, therefore subscription administration must be performed by an identity from that directory.
+- Azure Public subscriptions trust directories in Azure AD Public whereas Azure Government subscriptions trust directories in Azure AD Government.
+- If you have both Azure Public and Azure Government subscriptions, separate identities for both are required.
The currently supported identity scenarios to simultaneously manage Azure Public and Azure Government subscriptions are: -- Cloud identities - Cloud identities are used to manage both subscriptions-- Hybrid and cloud identities - Hybrid identity for one subscription, cloud identity for the other
+- Cloud identities - Cloud identities are used to manage both subscriptions.
+- Hybrid and cloud identities - Hybrid identity for one subscription, cloud identity for the other.
- Hybrid identities - Hybrid identities are used to manage both subscriptions.
-A common scenario, having both Office 365 and Azure subscriptions, is conveyed in each of the following scenarios.
+A common scenario, having both Office 365 and Azure subscriptions, is covered in each of the following sections. ### Using cloud identities for multi-cloud subscription administration The following diagram is the simplest of the scenarios to implement.
### Using cloud identities for multi-cloud subscription administration The following diagram is the simplest of the scenarios to implement.
-<div id="imagecontainer">
-<div></div>
-<div align="center">
-
-![Diagram shows a multi-cloud subscription administration option using cloud identities for Office 365 and Microsoft Azure Government.](./media/documentation-government-plan-identity-cloud-identities-for-subscription-administration.png "Using Cloud Identities for Multi-Cloud Subscription Administration")
-
-</div>
-<div></div>
-</div>
While using cloud identities is the simplest approach, it is also the least secure because passwords are used as an authentication factor. We recommend [Azure AD Multi-Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md), Microsoft's two-step verification solution, to add a critical second layer of security to secure access to Azure subscriptions when using cloud identities.
-See [How Azure AD Multi-Factor Authentication works](../active-directory/authentication/concept-mfa-howitworks.md) to learn more about the available methods for two-step verification.
- ### Using hybrid and cloud identities for multi-cloud subscription administration
-In this scenario, we include administrator identities through directory synchronization to the Public tenant while cloud identities are still used in the government tenant:
-
-<div id="imagecontainer">
-<div></div>
-<div align="center">
-
-![Diagram shows a scenario for hybrid and cloud identities for multi-cloud subscription administration using smartcards for access.](./media/documentation-government-plan-identity-hybrid-and-cloud-identities-for-subscription-administration.png "Using Hybrid and Cloud Identities for Multi-Cloud Subscription Administration")
+In this scenario, we include administrator identities through directory synchronization to the Public tenant while cloud identities are still used in the government tenant.
-</div>
-<div></div>
-</div>
-Using hybrid identities for administrative accounts allows the use of smartcards (physical or virtual). Government agencies using Common Access Cards (CACs) or Personal Identity Verification (PIV) cards benefit from this approach. In this scenario ADFS serves as the identity provider and implements the two-step verification (**for example**, smart card + PIN).
+Using hybrid identities for administrative accounts allows the use of smartcards (physical or virtual). Government agencies using Common Access Cards (CACs) or Personal Identity Verification (PIV) cards benefit from this approach. In this scenario, ADFS serves as the identity provider and implements two-step verification (for example, smart card + PIN).
### Using hybrid identities for multi-cloud subscription administration
-In this scenario, hybrid identities are used to administrator subscriptions in both clouds:
-
-<div id="imagecontainer">
-<div></div>
-<div align="center">
-
-![Diagram shows a scenario with hybrid identities for multi-cloud subscription administration, requiring different credentials for each cloud service.](./media/documentation-government-plan-identity-hybrid-identities-for-subscription-administration.png "Using Hybrid Identities for Multi-Cloud Subscription Administration")
+In this scenario, hybrid identities are used to administer subscriptions in both clouds.
-</div>
-<div></div>
-</div>
## Frequently asked questions
-**Why does Office 365 GCC use Azure AD Public?**
+**Why does Office 365 GCC use Azure AD Public?** </br>
+The first Office 365 US Government environment, Government Community Cloud (GCC), was created when Microsoft had a single cloud directory. The Office 365 GCC environment was designed to use Azure AD Public while still adhering to controls and requirements outlined in FedRAMP Moderate, Criminal Justice Information Services (CJIS), Internal Revenue Service (IRS) 1075, and National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171. Azure Government, with its Azure AD infrastructure, was created later. By that time, GCC had already secured the necessary compliance authorizations (for example, FedRAMP Moderate and CJIS) to meet Federal, State, and Local government requirements while serving hundreds of thousands of customers. Now, many Office 365 GCC customers have two Azure AD tenants: one from the Azure AD subscription that supports Office 365 GCC and the other from their Azure Government subscription, with identities in both.
-The first Office 365 US Government environment, Government Community Cloud (GCC), was created when Microsoft had a single cloud directory. The Office 365 GCC environment was designed to use Azure AD Public while still adhering to controls and requirements outlined in FedRAMP Moderate, CJIS (Criminal Justice Information Services), IRS 1075, and National Institute of Standards and Technology (NIST) publication 800-171. Azure Government, with its Azure AD infrastructure was created later. By that time, GCC had already secured the necessary compliance certifications (for example, FedRAMP Moderate and CJIS) to meet Federal, State, and Local government requirements while serving hundreds of thousands of customers. Now, many Office 365 GCC customers have two Azure AD tenants: one from the Azure AD subscription that supports Office 365 GCC and the other from their Azure Government subscription with identities in both.
--
-**How do I identify an Azure Government tenant?**
+**How do I identify an Azure Government tenant?** </br>
Here's a way to find out using your browser of choice:
- - Obtain your tenant name (**for example**, contoso.onmicrosoft.com) or a domain name registered to your Azure AD tenant (**for example**, contoso.gov).
- - Navigate to https:\//login.microsoftonline.com/\<domainname\>/.well-known/openid-configuration
- - \<domainname\> can either be the tenant name or domain name you gathered in step 1.
- - **An example URL**: https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration
+ - Obtain your tenant name (for example, contoso.onmicrosoft.com) or a domain name registered to your Azure AD tenant (for example, contoso.gov).
+ - Navigate to `https://login.microsoftonline.com/<domainname>/.well-known/openid-configuration`
+ - \<domainname\> can either be the tenant name or domain name you gathered in the previous step.
+ - **An example URL**: `https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration`
- The result posts back to the page in attribute/value pairs using JavaScript Object Notation (JSON) format that resembles: ```json
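{
  "token_endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
  "issuer": "https://sts.windows.net/<tenant-id>/",
  "tenant_region_scope": "NA",
  "cloud_instance_name": "microsoftonline.com"
}
```

The snippet above is abridged and illustrative; the actual metadata document contains many more properties, and `<tenant-id>` stands in for the tenant's GUID.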
Here's a way to find out using your browser of choice:
- The result is a JSON file that's natively rendered by more modern browsers such as Microsoft Edge, Mozilla Firefox, and Google Chrome. Internet Explorer doesn't natively render the JSON format and instead prompts you to open or save the file. If you must use Internet Explorer, choose the save option and open the file with another browser or a plain text reader. - The `tenant_region_scope` property is exactly what it sounds like: regional. If you have a tenant in Azure Public in North America, the value would be **NA**.
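If you prefer a scriptable check over a browser, the same metadata can be queried with PowerShell; the tenant name below is illustrative:

```powershell
# Illustrative: fetch the OpenID Connect metadata and inspect the region scope
$config = Invoke-RestMethod -Uri "https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration"
$config.tenant_region_scope
```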
-**If IΓÇÖm an Office 365 GCC customer and want to build solutions in Azure Government do I need to have two tenants?**
-Yes, the Azure AD Government tenant is required for your Azure Government Subscription administration.
+**If I'm an Office 365 GCC customer and want to build solutions in Azure Government, do I need to have two tenants?** </br>
+Yes, the Azure AD Government tenant is required for your Azure Government subscription administration.
-**If IΓÇÖm an Office 365 GCC customer that has built workloads in Azure Government, where should I authenticate from, Public or Government?**
-See ΓÇ£Choosing your Identity AuthorityΓÇ¥ earlier in this article.
-
-**IΓÇÖm an Office 365 customer and have chosen hybrid identity as my identity model. I also have several Azure subscriptions. Is it possible to use the same Azure AD tenant to handle sign-in for Office 365, applications built in my Azure subscriptions, and/or applications reconfigured to use Azure AD for sign-in?**
+**If I'm an Office 365 GCC customer who has built workloads in Azure Government, where should I authenticate from: Public or Government?** </br>
+See [Choosing your identity authority](#choosing-your-identity-authority) earlier in this article.
+**I'm an Office 365 customer and have chosen hybrid identity as my identity model. I also have several Azure subscriptions. Is it possible to use the same Azure AD tenant to handle sign-in for Office 365, applications built in my Azure subscriptions, and/or applications reconfigured to use Azure AD for sign-in?** </br>
Yes, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) to learn more about the relationship between Azure subscriptions and Azure AD. It also contains instructions on how to associate subscriptions to the common directory of your choosing.
-**Can an Azure Government subscription be associated with a directory in Azure AD Public?**
-
+**Can an Azure Government subscription be associated with a directory in Azure AD Public?** </br>
No, the ability to manage Azure Government subscriptions requires identities sourced from a directory in Azure AD Government. ## Next steps -- Check out the [Azure Government developer guide](../azure-government/documentation-government-developer-guide.md) and build your first application!-- For supplemental information and updates, subscribe to the [Microsoft Azure Government blog.](https://blogs.msdn.microsoft.com/azuregov/)
+- [Azure Government developer guide](./documentation-government-developer-guide.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Multi-tenant user management](../active-directory/fundamentals/multi-tenant-user-management-introduction.md)
+- [Azure Active Directory fundamentals documentation](../active-directory/fundamentals/index.yml)
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-security.md
Previously updated : 11/17/2021
+recommendations: false
Last updated : 01/28/2022 # Azure Government security
These principles are applicable to both Azure and Azure Government. As described
Mitigating risk and meeting regulatory obligations are driving the increasing focus and importance of data encryption. Use an effective encryption implementation to enhance current network and application security measures and decrease the overall risk of your cloud environment. Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models: - Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.-- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location. Client-side encryption is built into the Java and .NET storage client libraries, which can utilize Azure Key Vault APIs, making the implementation straightforward. Use Azure Key Vault to obtain access to the secrets in Azure Key Vault for specific individuals using Azure Active Directory.
+- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location. Client-side encryption is built into the Java and .NET storage client libraries, which can use Azure Key Vault APIs, making the implementation straightforward. You can use Azure Active Directory to provide specific individuals with access to Azure Key Vault secrets.
Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
Azure provides extensive options for [encrypting data at rest](../security/funda
Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
-The basic encryption available for connectivity to Azure Government supports Transport Layer Security (TLS) 1.2 protocol and X.509 certificates. Federal Information Processing Standard (FIPS) 140 validated cryptographic algorithms are also used for infrastructure network connections between Azure Government datacenters. Windows, Windows Server, and Azure File shares can use SMB 3.0 for encryption between the virtual machine (VM) and the file share. Use client-side encryption to encrypt the data before it is transferred into storage in a client application, and to decrypt the data after it is transferred out of storage.
+The basic encryption available for connectivity to Azure Government supports Transport Layer Security (TLS) 1.2 protocol and X.509 certificates. Federal Information Processing Standard (FIPS) 140 validated cryptographic algorithms are also used for infrastructure network connections between Azure Government datacenters. Windows, Windows Server, and Azure File shares can use SMB 3.0 for encryption between the virtual machine (VM) and the file share. Use client-side encryption to encrypt the data before it's transferred into storage in a client application, and to decrypt the data after it's transferred out of storage.
### Best practices for encryption -- **IaaS VMs:** Use Azure disk encryption. Turn on Storage service encryption to encrypt the VHD files that are used to back up those disks in Azure Storage. This approach only encrypts newly written data, which means that, if you create a VM and then enable Storage service encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.-- **Client-side encryption:** Represents the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPS for your data in transit, and Storage service encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you are encrypting and transferring much data.
+- **IaaS VMs:** Use Azure disk encryption. Turn on Storage service encryption to encrypt the VHD files that are used to back up those disks in Azure Storage. This approach only encrypts newly written data. If you create a VM and then enable Storage service encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.
+- **Client-side encryption:** Represents the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPS for your data in transit, and Storage service encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you're encrypting and transferring much data.
## Managing secrets
Proper protection and management of encryption keys is essential for data securi
### Best practices for managing secrets - Use Key Vault to minimize the risks of secrets being exposed through hard-coded configuration files, scripts, or in source code. For added assurance, you can import or generate keys in Azure Key Vault HSMs.-- Application code and templates should only contain URI references to the secrets, meaning the actual secrets are not in code, configuration, or source code repositories. This approach prevents key phishing attacks on internal or external repositories, such as harvest-bots in GitHub.
+- Application code and templates should only contain URI references to the secrets, meaning the actual secrets aren't in code, configuration, or source code repositories. This approach prevents key phishing attacks on internal or external repositories, such as harvest-bots on GitHub.
- Use strong Azure role-based access control (RBAC) within Key Vault. A trusted operator who leaves the company, or transfers to a new group within it, should be prevented from accessing the secrets. ## Understanding isolation
Isolation in Azure Government is achieved through the implementation of trust bo
### Environment isolation
-The Azure Government multi-tenant cloud platform environment is an Internet standards-based Autonomous System (AS) that is physically isolated and separately administered from the rest of Azure public cloud. The AS as defined by [IETF RFC 4271](https://datatracker.ietf.org/doc/rfc4271/) is composed of a set of switches and routers under a single technical administration, using an interior gateway protocol and common metrics to route packets within the AS, and using an exterior gateway protocol to route packets to other ASs though a single and clearly defined routing policy. In addition, Azure Government for DoD regions within Azure Government are geographically separated physical instances of compute, storage, SQL, and supporting services that store and/or process customer content in accordance with DoD Impact Level 5 (IL5) tenant separation requirements, as stated in the DoD Cloud Computing Security Requirements Guide (SRG) [Section 5.2.2.3](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.2LegalConsiderations).
+The Azure Government multi-tenant cloud platform environment is an Internet standards-based Autonomous System (AS) that is physically isolated and separately administered from the rest of Azure public cloud. As defined by [IETF RFC 4271](https://datatracker.ietf.org/doc/rfc4271/), the AS is composed of a set of switches and routers under a single technical administration, using an interior gateway protocol and common metrics to route packets within the AS. An exterior gateway protocol is used to route packets to other ASs through a single and clearly defined routing policy.
-The isolation of the Azure Government environment is achieved through a series of physical and logical controls, and associated capabilities that include:
+The isolation of the Azure Government environment is achieved through a series of physical and logical controls that include:
- Physically isolated hardware - Physical barriers to the hardware using biometric devices and cameras
The isolation of the Azure Government environment is achieved through a series o
- Specific credentials and multifactor authentication for logical access - Infrastructure for Azure Government is located within the United States
-Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government does not directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customerΓÇÖs networks over non-Internet private circuits to route ER customer ΓÇ£DMZΓÇ¥ networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
+Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government doesn't directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customers' networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
Azure Government maintains the following authorizations:
You can manage your isolation posture to meet individual requirements through ne
## Screening
-All Azure and Azure Government employees in the United States are subject to Microsoft background checks, as outlined in the table below. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of U.S. citizenship and extra screening requirements where appropriate.
+All Azure and Azure Government employees in the United States are subject to Microsoft background checks. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of US citizenship and extra screening requirements where appropriate.
-We are now screening all our operators at a Tier 3 Investigation (formerly National Agency Check with Law and Credit, NACLC) as defined in the DoD SRG [Section 5.6.2.2](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/https://docsupdatetracker.net/index.html#5.6PhysicalFacilitiesandPersonnelRequirements):
+We're now screening all our operators at a Tier 3 Investigation (formerly National Agency Check with Law and Credit, NACLC) as defined in Section 5.6.2.2 (Page 77) of the DoD [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/):
> [!NOTE] > The minimum background investigation required for CSP personnel having access to Level 4 and 5 information based on a "noncritical-sensitive" (e.g., DoD's ADP-2) is a Tier 3 Investigation (for "noncritical-sensitive" contractors), or a Moderate Risk Background Investigation (MBI) for a "moderate risk" position designation.
We are now screening all our operators at a Tier 3 Investigation (formerly Natio
|Applicable screening and background check|Environment|Frequency|Description| ||||| |Background check </br> Cloud screen|Azure </br>Azure Gov|Upon employment|- Education history (highest degree) </br>- Employment history (7-yr history)|
-|||Every 2 years|- Social Security Number search </br>- Criminal history check (7-yr history) </br>- Office of Foreign Assets Control (OFAC) list </br>- Bureau of Industry and Security (BIS) list </br>- Office of Defense Trade Controls (DDTC) debarred list|
-|U.S. citizenship|Azure Gov|Upon employment|- Verification of U.S. citizenship|
+|||Every two years|- Social Security Number search </br>- Criminal history check (7-yr history) </br>- Office of Foreign Assets Control (OFAC) list </br>- Bureau of Industry and Security (BIS) list </br>- Office of Defense Trade Controls (DDTC) debarred list|
+|US citizenship|Azure Gov|Upon employment|- Verification of US citizenship|
|Criminal Justice Information Services (CJIS)|Azure Gov|Upon signed CJIS agreement with State|- Adds fingerprint background check against FBI database </br>- Criminal records check and credit check| |Tier 3 Investigation|Azure Gov|Upon signed contract with sponsoring agency|- Detailed background and criminal history investigation (Form SF 86 required)|
Screening standards include the validation of US citizenship of all Microsoft su
**Controls for restricting insider access to customer data are the same for both Azure and Azure Government. As described in the previous section, Azure Government imposes extra personnel background screening requirements, including verification of US citizenship.** > [!NOTE]
-> Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to customerΓÇÖs systems and data. Microsoft provides strong **[customer commitments](https://www.microsoft.com/trust-center/privacy/data-access)** regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is **denied by default**. Access to customer data is not needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data is not needed.
+> Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to a customer's systems and data. Microsoft provides strong **[customer commitments](https://www.microsoft.com/trust-center/privacy/data-access)** regarding who can access customer data and on what terms. Access to customer data by Microsoft operations and support personnel is **denied by default**. Access to customer data isn't needed to operate Azure. Moreover, for most support scenarios involving customer troubleshooting tickets, access to customer data isn't needed.
No default access rights and Just-in-Time (JIT) access provisions greatly reduce the risks associated with traditional on-premises administrator elevated access rights that typically persist throughout the duration of employment. Microsoft makes it considerably more difficult for malicious insiders to tamper with your applications and data. The same access control restrictions and processes are imposed on all Microsoft engineers, including both full-time employees and subprocessors/vendors. The following controls are in place to restrict insider access to your data: -- Internal Microsoft controls that prevent access to production systems unless it is authorized through **Just-in-Time (JIT)** privileged access management system, as described in this section.-- Enforcement of **Customer Lockbox** that puts you in charge of approving insider access in support and troubleshooting scenarios, as described in this section. For most support scenarios, access to your data is not required.
+- Internal Microsoft controls that prevent access to production systems unless it's authorized through **Just-in-Time (JIT)** privileged access management system, as described in this section.
+- Enforcement of **Customer Lockbox** that puts you in charge of approving insider access in support and troubleshooting scenarios, as described in this section. For most support scenarios, access to your data isn't required.
- **Data encryption** with option for customer-managed encryption keys: encrypted data is accessible only by entities who are in possession of the key, as described previously.
- **Customer monitoring** of external access to provisioned Azure resources, which includes security alerts as described in the next section.

### Access control requirements
-Microsoft takes strong measures to protect your data from inappropriate access or use by unauthorized persons. Microsoft engineers (including full-time employees and subprocessors/vendors) [do not have default access](https://www.microsoft.com/trust-center/privacy/data-access) to your data in the cloud. Instead, they are granted access, under management oversight, only when necessary. Using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m), access to your data is carefully controlled, logged, and revoked when it is no longer needed. For example, access to your data may be required to resolve troubleshooting requests that you initiated. The access control requirements are [established by the following policy](../security/fundamentals/protection-customer-data.md):
+Microsoft takes strong measures to protect your data from inappropriate access or use by unauthorized persons. Microsoft engineers (including full-time employees and subprocessors/vendors) [don't have default access](https://www.microsoft.com/trust-center/privacy/data-access) to your data in the cloud. Instead, they're granted access, under management oversight, only when necessary. Using the [restricted access workflow](https://www.youtube.com/watch?v=lwjPGtGGe84&feature=youtu.be&t=25m), access to your data is carefully controlled, logged, and revoked when it's no longer needed. For example, access to your data may be required to resolve troubleshooting requests that you initiated. The access control requirements are [established by the following policy](../security/fundamentals/protection-customer-data.md):
- No access to customer data, by default.
- No user or administrator accounts on customer virtual machines (VMs).
- Grant the least privilege that is required to complete the task; audit and log access requests.
-Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there is appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](/azure/compliance/offerings/offering-soc-2) produced by an independent third-party auditing firm.
+Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there's appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](/azure/compliance/offerings/offering-soc-2) produced by an independent third-party auditing firm.
-JIT access works with multifactor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed ΓÇô only select activities are allowed and users cannot accidentally circumvent the SAW design since they do not have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to specific set of users.
+JIT access works with multifactor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed: only select activities are allowed and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to a specific set of users.
### Customer Lockbox
-[Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) is a service that provides you with the capability to control how a Microsoft engineer accesses your data. As part of the support workflow, a Microsoft engineer may require elevated access to your data. Customer Lockbox puts you in charge of that decision by enabling you to approve / deny such elevated requests. Customer Lockbox is an extension of the JIT workflow and comes with full audit logging enabled. Customer Lockbox capability is not required for support cases that do not involve access to customer data. For most support scenarios, access to customer data is not needed and the workflow should not require Customer Lockbox. Microsoft engineers rely heavily on logs to maintain Azure services and provide customer support.
+[Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) is a service that provides you with the capability to control how a Microsoft engineer accesses your data. As part of the support workflow, a Microsoft engineer may require elevated access to your data. Customer Lockbox puts you in charge of that decision by enabling you to approve/deny such elevated requests. Customer Lockbox is an extension of the JIT workflow and comes with full audit logging enabled. Customer Lockbox capability isn't required for support cases that don't involve access to customer data. For most support scenarios, access to customer data isn't needed and the workflow shouldn't require Customer Lockbox. Microsoft engineers rely heavily on logs to maintain Azure services and provide customer support.
Customer Lockbox is available to all customers who have an Azure support plan with a minimum level of Developer. You can enable Customer Lockbox from the [Administration module](https://aka.ms/customerlockbox/administration) in the Customer Lockbox blade. A Microsoft engineer will initiate a Customer Lockbox request if this action is needed to progress a customer-initiated support ticket. Customer Lockbox is available to customers from all Azure public regions.

### Guest VM memory crash dumps
-On each Azure node, there is a Hypervisor that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as described in [Compute isolation](./azure-secure-isolation-guidance.md#compute-isolation). Each node also has one special Root VM, which runs the Host OS.
+On each Azure node, there's a Hypervisor that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as described in [Compute isolation](./azure-secure-isolation-guidance.md#compute-isolation). Each node also has one special Root VM, which runs the Host OS.
-When a Guest VM (also known as customer VM) crashes, customer data may be contained inside a memory dump file on the Guest VM. **By default, Microsoft engineers do not have access to Guest VMs and cannot review crash dumps on Guest VMs without customer's approval.** The same process involving explicit customer authorization is used to control access to Guest VM crash dumps should you request an investigation of your VM crash. As described previously, access is gated by the JIT privileged access management system and Customer Lockbox so that all actions are logged and audited. The primary forcing function for deleting the memory dumps from Guest VMs is the routine process of VM re-imaging that typically occurs at least every two months.
+When a Guest VM (also known as customer VM) crashes, customer data may be contained inside a memory dump file on the Guest VM. **By default, Microsoft engineers don't have access to Guest VMs and can't review crash dumps on Guest VMs without the customer's approval.** The same process involving explicit customer authorization is used to control access to Guest VM crash dumps should you request an investigation of your VM crash. As described previously, access is gated by the JIT privileged access management system and Customer Lockbox so that all actions are logged and audited. The primary forcing function for deleting the memory dumps from Guest VMs is the routine process of VM reimaging that typically occurs at least every two months.
### Data deletion, retention, and destruction
-As a customer, you are [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own you customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. For more information on how data deletion, retention, and destruction are implemented in Azure, see our online documentation:
+As a customer, you're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. For more information on how data deletion, retention, and destruction are implemented in Azure, see our online documentation:
- [Data deletion](./azure-secure-isolation-guidance.md#data-deletion)
- [Data retention](./azure-secure-isolation-guidance.md#data-retention)
As a customer, you are [always in control of your customer data](https://www.mic
## Customer monitoring of Azure resources
-Listed below are essential Azure services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. For a complete list, see the Azure service directory sections for [Management + Governance](https://azure.microsoft.com/services/#management-tools), [Networking](https://azure.microsoft.com/services/#networking), and [Security](https://azure.microsoft.com/services/#security). Moreover, the [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help you improve your security posture with respect to Azure resources.
+This section covers essential Azure services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. For a complete list, see the Azure service directory sections for [Management + Governance](https://azure.microsoft.com/services/#management-tools), [Networking](https://azure.microsoft.com/services/#networking), and [Security](https://azure.microsoft.com/services/#security). Moreover, the [Azure Security Benchmark](../security/benchmarks/index.yml) provides security recommendations and implementation details to help you improve your security posture with respect to Azure resources.
-**[Microsoft Defender for Cloud](../defender-for-cloud/index.yml)** (formerly Azure Security Center) provides unified security management and advanced threat protection across hybrid cloud workloads. It is an essential service for you to limit your exposure to threats, protect cloud resources, [respond to incidents](../defender-for-cloud/alerts-overview.md), and improve your regulatory compliance posture.
+**[Microsoft Defender for Cloud](../defender-for-cloud/index.yml)** (formerly Azure Security Center) provides unified security management and advanced threat protection across hybrid cloud workloads. It's an essential service for you to limit your exposure to threats, protect cloud resources, [respond to incidents](../defender-for-cloud/alerts-overview.md), and improve your regulatory compliance posture.
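As a hedged illustration, the enhanced Defender protections are enabled per resource type within a subscription; a minimal CLI sketch, where the plan name shown is one of the built-in resource-type plan names and the subscription context is assumed to be already set:

```azurecli
# Enable the enhanced Defender plan for virtual machines in the current subscription.
# "VirtualMachines" is one built-in plan name; swap in the resource type you need.
az security pricing create --name VirtualMachines --tier Standard
```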
With Microsoft Defender for Cloud, you can:
To assist you with Microsoft Defender for Cloud usage, Microsoft has published e
Azure Monitor collects data from each of the following tiers:
-- **Application monitoring data:** Data about the performance and functionality of the code you have written, regardless of its platform.
+- **Application monitoring data:** Data about the performance and functionality of the code you've written, regardless of its platform.
- **Guest OS monitoring data:** Data about the operating system on which your application is running. The application could be running in Azure, another cloud, or on-premises.
- **Azure resource monitoring data:** Data about the operation of an Azure resource.
- **Azure subscription monitoring data:** Data about the operation and management of an Azure subscription and data about the health and operation of Azure itself.
Azure Monitor collects data from each of the following tiers:
With Azure Monitor, you can get a 360-degree view of your applications, infrastructure, and network with advanced analytics, dashboards, and visualization maps. Azure Monitor provides intelligent insights and enables better decisions with AI. You can analyze, correlate, and monitor data from various sources using a powerful query language and built-in machine learning constructs. Moreover, Azure Monitor provides out-of-the-box integration with popular DevOps, IT Service Management (ITSM), and Security Information and Event Management (SIEM) tools.
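As one hedged example of that query language in action, assuming the Log Analytics CLI extension is installed and using a placeholder workspace GUID, you could summarize recent subscription activity from the command line:

```azurecli
# Run a Kusto query against a Log Analytics workspace over the last day.
# The workspace GUID below is a placeholder; replace it with your workspace's customer ID.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "AzureActivity | summarize Count = count() by OperationNameValue | top 10 by Count" \
  --timespan "P1D"
```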
-**[Azure Policy](../governance/policy/overview.md)** enables effective governance of Azure resources by creating, assigning, and managing policies. These policies enforce various rules over provisioned Azure resources to keep them compliant with your specific corporate security and privacy standards. For example, one of the built-in policies for Allowed Locations can be used to restrict available locations for new resources to enforce your geo-compliance requirements. Azure Policy provides a comprehensive compliance view of all provisioned resources and enables cloud policy management and security at scale.
+**[Azure Policy](../governance/policy/overview.md)** enables effective governance of Azure resources by creating, assigning, and managing policies. These policies enforce various rules over provisioned Azure resources to keep them compliant with your specific corporate security and privacy standards. For example, one of the built-in policies for Allowed Locations can be used to restrict available locations for new resources to enforce your geo-compliance requirements. For additional customer assistance, Microsoft provides **Azure Policy regulatory compliance built-in initiatives**, which map to **compliance domains** and **controls** in many US government, global, regional, and industry standards. For more information, see [Azure Policy samples](../governance/policy/samples/index.md#regulatory-compliance). Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility ΓÇô customer, Microsoft, or shared. For Microsoft-responsible controls, we provide additional audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](../governance/policy/how-to/get-compliance-data.md) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
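As an illustrative sketch, the built-in *Allowed Locations* policy mentioned above can be assigned from the CLI; the scope and location values here are placeholders, and the GUID is, to the best of our knowledge, the built-in definition ID for *Allowed locations*:

```azurecli
# Assign the built-in "Allowed locations" policy at subscription scope,
# restricting new resources to two example regions.
az policy assignment create \
  --name "allowed-locations" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{ "listOfAllowedLocations": { "value": ["eastus", "eastus2"] } }'
```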
-**[Azure Firewall](../firewall/overview.md)** provides a managed, cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability that integrates with Azure Monitor for logging and analytics.
+**[Azure Firewall](../firewall/overview.md)** provides a managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability that integrates with Azure Monitor for logging and analytics.
-**[Network Watcher](../network-watcher/network-watcher-monitoring-overview.md)** allows you to monitor, diagnose, and gain insights into your Azure virtual network performance and health. With network security group flow logs, you can gain deeper understanding of your network traffic patterns and collect data for compliance, auditing, and monitoring of your network security profile. Packet capture allows you to capture traffic to and from your virtual machines to diagnose network anomalies and gather network statistics, including information on network intrusions.
+**[Network Watcher](../network-watcher/network-watcher-monitoring-overview.md)** allows you to monitor, diagnose, and gain insights into your Azure Virtual Network performance and health. With network security group flow logs, you can gain deeper understanding of your network traffic patterns and collect data for compliance, auditing, and monitoring of your network security profile. Packet capture allows you to capture traffic to and from your virtual machines to diagnose network anomalies and gather network statistics, including information on network intrusions.
-**[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md)** provides extensive Distributed Denial of Service (DDoS) mitigation capability to help you protect your Azure resources from attacks. Always-on traffic monitoring provides near real-time detection of a DDoS attack, with automatic mitigation of the attack as soon as it is detected. In combination with Web Application Firewall, DDoS Protection defends against a comprehensive set of network layer attacks, including SQL injection, cross-site scripting attacks, and session hijacks. Azure DDoS Protection is integrated with Azure Monitor for analytics and insight.
+**[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md)** provides extensive Distributed Denial of Service (DDoS) mitigation capability to help you protect your Azure resources from attacks. Always-on traffic monitoring provides near real-time detection of a DDoS attack, with automatic mitigation of the attack as soon as it's detected. In combination with Web Application Firewall, DDoS Protection defends against a comprehensive set of network layer attacks, including SQL injection, cross-site scripting attacks, and session hijacks. Azure DDoS Protection is integrated with Azure Monitor for analytics and insight.
**[Microsoft Sentinel](../sentinel/overview.md)** (formerly Azure Sentinel) is a cloud-native SIEM platform that uses built-in AI to help you quickly analyze large volumes of data across an enterprise. Microsoft Sentinel aggregates data from various sources, including users, applications, servers, and devices running on-premises or in any cloud, letting you reason over millions of records in a few seconds. With Microsoft Sentinel, you can:
With Azure Monitor, you can get a 360-degree view of your applications, infrastr
**[Azure Advisor](../advisor/advisor-overview.md)** helps you follow best practices to optimize your Azure deployments. It analyzes resource configurations and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of Azure resources.
-**[Azure Blueprints](../governance/blueprints/overview.md)** is a service that helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies that adhere to your organizationΓÇÖs standards, patterns, and requirements. You can use pre-defined standard blueprints and customize these solutions to meet specific requirements, including data encryption, host and service configuration, network and connectivity configuration, identity, and other security aspects of deployed resources. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-171, PCI DSS, HIPA).
## Next steps
-For supplemental information and updates, subscribe to the [Microsoft Azure Government Blog](https://devblogs.microsoft.com/azuregov/).
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure and other Microsoft services compliance offerings](/azure/compliance/offerings/)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md)
+- [Azure Government isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
+- [Azure security fundamentals documentation](../security/fundamentals/index.yml)
+- [Azure Policy regulatory compliance built-in initiatives](../governance/policy/samples/index.md#regulatory-compliance)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
# Azure Monitor customer-managed key
-Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. When you specify a customer-managed key, that key is used to protect and control access to your data and once configured, any data sent to your workspaces is encrypted with your Azure Key Vault key. Customer-managed keys offer greater flexibility to manage access controls.
+Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. Customer-managed keys in Azure Monitor give you greater flexibility to manage access controls to logs. Once configured, new data for linked workspaces is encrypted with your key stored in [Azure Key Vault](../../key-vault/general/overview.md), or [Azure Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md).
We recommend you review [Limitations and constraints](#limitationsandconstraints) below before configuration.
We recommend you review [Limitations and constraints](#limitationsandconstraints
[Encryption at Rest](../../security/fundamentals/encryption-atrest.md) is a common privacy and security requirement in organizations. You can let Azure completely manage encryption at rest, while you have various options to closely manage encryption and encryption keys.
-Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). Azure Monitor also provides an option for encryption using your own key that is stored in your [Azure Key Vault](../../key-vault/general/overview.md), which gives you the control to revoke the access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-encryption) operates.
+Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You also have the option to encrypt data with your own key in [Azure Key Vault](../../key-vault/general/overview.md), with control over key lifecycle and ability to revoke access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-encryption) operates.
-Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data ingested to dedicated clusters is being encrypted twice ΓÇö once at the service level using Microsoft-managed keys or customer-managed keys, and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
+Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data ingested to dedicated clusters is encrypted twice: once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
-Data ingested in the last 14 days and data recently used in queries is also kept in hot-cache (SSD-backed) for query efficiency and encrypted with Microsoft keys regardless customer-managed key configuration. Your control SSD data access applies and adheres to [key revocation](#key-revocation)
+Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless of Customer-managed key configuration, but your control over SSD data access adheres to [key revocation](#key-revocation).
-Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires commitment Tier starting at 500 GB/day and can have values of 500, 1000, 2000 or 5000 GB/day.
+Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires a commitment tier starting at 500 GB per day, with values of 500, 1000, 2000, or 5000 GB per day.
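The commitment tier is set when the cluster is created and can be adjusted later; a hedged sketch with placeholder resource names:

```azurecli
# Raise an existing dedicated cluster's commitment tier to 1000 GB per day.
az monitor log-analytics cluster update \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --sku-capacity 1000
```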
## How Customer-managed key works in Azure Monitor
-Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key protection on multiple workspaces, a new Log Analytics *Cluster* resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that\'s associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
+Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key on multiple workspaces, a new Log Analytics cluster resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster storage uses the managed identity that's associated with the *Cluster* resource to authenticate to your Azure Key Vault via Azure Active Directory.
-After the Customer-managed key configuration, new ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to Log Analytics storage and encrypted with Microsoft key, while you can query your new and old data seamlessly.
+After the Customer-managed key configuration, new ingested data to workspaces linked to your dedicated cluster gets encrypted with your key. You can unlink workspaces from the cluster at any time. New data then gets ingested to cluster storage and encrypted with Microsoft key, while you can query your new and old data seamlessly.
> [!IMPORTANT]
-> Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked Log Analytics workspaces must be in the same region, but they can be in different subscriptions.
+> Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked workspaces must be in the same region, but they can be in different subscriptions.
![Customer-managed key overview](media/customer-managed-keys/cmk-overview.png)

1. Key Vault
-2. Log Analytics *Cluster* resource having managed identity with permissions to Key Vault -- The identity is propagated to the underlay dedicated Log Analytics cluster storage
-3. Dedicated Log Analytics cluster
-4. Workspaces linked to *Cluster* resource
+2. Log Analytics cluster resource having managed identity with permissions to Key Vault; the identity is propagated to the underlying dedicated cluster storage
+3. Dedicated cluster
+4. Workspaces linked to dedicated cluster
### Encryption keys operation
-There are 3 types of keys involved in Storage data encryption:
+There are three types of keys involved in Storage data encryption:
-- **KEK** - Key Encryption Key (your Customer-managed key)
-- **AEK** - Account Encryption Key
-- **DEK** - Data Encryption Key
+- "**KEK**" - Key Encryption Key (your Customer-managed key)
+- "**AEK**" - Account Encryption Key
+- "**DEK**" - Data Encryption Key
The following rules apply:
-- The Log Analytics cluster storage accounts generate unique encryption key for every storage account, which is known as the AEK.
-- The AEK is used to derive DEKs, which are the keys that are used to encrypt each block of data written to disk.
-- When you configure your key in Key Vault and reference it in the cluster, Azure Storage sends requests to your Azure Key Vault to wrap and unwrap the AEK to perform data encryption and decryption operations.
-- Your KEK never leaves your Key Vault.
-- Azure Storage uses the managed identity that's associated with the *Cluster* resource to authenticate and access to Azure Key Vault via Azure Active Directory.
+- The cluster storage has unique encryption key for every Storage Account, which is known as the "AEK".
+- The "AEK" is used to derive "DEKs, which are the keys that are used to encrypt each block of data written to disk.
+- When you configure a key in your Key Vault, and updated the key details in the cluster, the cluster storage performs requests to 'wrap' and 'unwrap' "AEK" for encryption and decryption.
+- Your "KEK" never leaves your Key Vault, and in the case of Managed "HSM", it never leaves the hardware.
+- Azure Storage uses the managed identity that's associated with the *Cluster* resource for authentication. It accesses Azure Key Vault via Azure Active Directory.
### Customer-Managed key provisioning steps
The following rules apply:
1. Creating cluster
1. Granting permissions to your Key Vault
1. Updating cluster with key identifier details
-1. Linking Log Analytics workspaces
+1. Linking workspaces
-Customer-managed key configuration isn't supported in Azure portal currently and provisioning can be performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics) or [REST](/rest/api/loganalytics/) requests.
+Customer-managed key configuration isn't supported in Azure portal currently and provisioning can be performed via [PowerShell](/powershell/module/az.operationalinsights/), [CLI](/cli/azure/monitor/log-analytics), or [REST](/rest/api/loganalytics/) requests.
-## Storing encryption key (KEK)
+## Storing encryption key ("KEK")
-Create or use existing Azure Key Vault in the region that the cluster is planed, then generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault, both *Soft delete* and *Purge protection* should be enabled.
+Create or use an existing Azure Key Vault in the region where the cluster is planned, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault; both *Soft delete* and *Purge protection* should be enabled.
![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png)

These settings can be updated in Key Vault via CLI and PowerShell:
- [Soft Delete](../../key-vault/general/soft-delete-overview.md)
-- [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) guards against force deletion of the secret / vault even after soft delete
+- [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) guards against force deletion of the secret or vault even after soft delete
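As a hedged CLI sketch with a placeholder vault name (soft delete is enabled by default on newer vaults):

```azurecli
# Turn on purge protection; once enabled, it cannot be disabled.
az keyvault update --name "key-vault-name" --enable-purge-protection true
```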
## Create cluster
-Clusters uses managed identity for data encryption with your Key Vault. Configure identity `type` property to `SystemAssigned` when creating your cluster to allow access to your Key Vault for wrap and unwrap operations.
+Clusters use managed identity for data encryption with your Key Vault. Configure the identity `type` property to `SystemAssigned` when creating your cluster to allow access to your Key Vault for "wrap" and "unwrap" operations.
Identity settings in cluster for System-assigned managed identity

```json
{
  "identity": {
    "type": "SystemAssigned"
  }
}
```
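Equivalently, the cluster can be created from the CLI; a minimal sketch with placeholder names, under the assumption that the CLI creates the cluster with a system-assigned managed identity by default:

```azurecli
# Create a dedicated cluster at the 500 GB per day commitment tier.
# Cluster creation is a long-running operation and can take a while to complete.
az monitor log-analytics cluster create \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --sku-capacity 500
```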
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicate
## Grant Key Vault permissions
-Create access policy in Key Vault to grants permissions to your cluster. These permissions are used by the underlay Azure Monitor storage. Open your Key Vault in Azure portal and click *"Access Policies"* then *"+ Add Access Policy"* to create a policy with these settings:
+Create an Access Policy in Key Vault to grant permissions to your cluster. These permissions are used by the underlay cluster storage. Open your Key Vault in Azure portal and click *Access Policies* then *+ Add Access Policy* to create a policy with these settings:
-- Key permissions: select *'Get'*, *'Wrap Key'* and *'Unwrap Key'*.-- Select principal: depending on the identity type used in the cluster (system or user assigned managed identity) enter either cluster name or cluster principal ID for system assigned managed identity or the user assigned managed identity name.
+- Key permissions: select *Get*, *Wrap Key* and *Unwrap Key*.
+- Select principal: depending on the identity type used in the cluster (system or user assigned managed identity)
+ - System assigned managed identity - enter the cluster name or cluster principal ID
+ - User assigned managed identity - enter the identity name
![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png)
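The same grant can be scripted; a hedged sketch with placeholder names for a cluster using a system-assigned identity:

```azurecli
# Look up the cluster's system-assigned identity, then grant it
# Get, Wrap Key, and Unwrap Key permissions on the vault.
clusterPrincipalId=$(az monitor log-analytics cluster show \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --query identity.principalId --output tsv)

az keyvault set-policy \
  --name "key-vault-name" \
  --object-id $clusterPrincipalId \
  --key-permissions get wrapKey unwrapKey
```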
The *Get* permission is required to verify that your Key Vault is configured as
All operations on the cluster require the `Microsoft.OperationalInsights/clusters/write` action permission. This permission could be granted via the Owner or Contributor roles that contain the `*/write` action, or via the Log Analytics Contributor role that contains the `Microsoft.OperationalInsights/*` action.
-This step updates Azure Monitor Storage with the key and version to be used for data encryption. When updated, your new key is being used to wrap and unwrap the Storage key (AEK).
+This step updates dedicated cluster storage with the key and version to use for "AEK" wrap and unwrap.
>[!IMPORTANT]
>- Key rotation can be automatic or require explicit key update. See [Key rotation](#key-rotation) to determine the approach that is suitable for you before updating the key identifier details in the cluster.
N/A
# [Azure CLI](#tab/azure-cli)

```azurecli
-az account set --subscription "cluster-subscription-id"
+az account set ΓÇösubscription "cluster-subscription-id"
-az monitor log-analytics cluster update --no-wait --name "cluster-name" --resource-group "resource-group-name" --key-name "key-name" --key-vault-uri "key-uri" --key-version "key-version"
+az monitor log-analytics cluster update --no-wait --name "cluster-name" --resource-group "resource-group-name" --key-name "key-name" --key-vault-uri "key-uri" --key-version "key-version"
-# Wait for job completion when `--no-wait` was used
-$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, "cluster-name")].[id]" --output tsv
-az resource wait --created --ids $clusterResourceId --include-response-body true
+# Wait for job completion when `--no-wait` was used
+$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, 'cluster-name')].[id]" --output tsv
+az resource wait --created --ids $clusterResourceId --include-response-body true
```

# [PowerShell](#tab/powershell)
Content-type: application/json
Key propagation can take a while to complete. You can check the update state by sending a GET request to the cluster and looking at the *KeyVaultProperties* properties; your recently updated key should be returned in the response.
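From the CLI, a hedged equivalent with placeholder names:

```azurecli
# Show the key details currently configured on the cluster.
az monitor log-analytics cluster show \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --query keyVaultProperties
```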
-A response to GET request should look like this when the key update is complete:
+Response to GET request when key update is completed:
202 (Accepted) and header

```json
{
A response to GET request should look like this when the key update is complete:
## Link workspace to cluster

> [!IMPORTANT]
-> This step should be performed only after the completion of the Log Analytics cluster provisioning. If you link workspaces and ingest data prior to the provisioning, ingested data will be dropped and won't be recoverable.
+> This step should be performed only after the cluster provisioning. If you link workspaces and ingest data prior to the provisioning, ingested data will be dropped and won't be recoverable.
-You need to have 'write' permissions to both your workspace and cluster to perform this operation, which include `Microsoft.OperationalInsights/workspaces/write` and `Microsoft.OperationalInsights/clusters/write`.
+You need to have "write" permissions on your workspace and cluster to perform this operation. It include `Microsoft.OperationalInsights/workspaces/write` and `Microsoft.OperationalInsights/clusters/write`.
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).

## Key revocation

> [!IMPORTANT]
-> - The recommended way to revoke access to your data is by disabling your key, or deleting access policy in your Key Vault.
+> - The recommended way to revoke access to your data is by disabling your key, or deleting the Access Policy in your Key Vault.
> - Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support.
-The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. Any new data ingested to workspaces linked with your cluster gets dropped and won't be recoverable, data becomes inaccessible on these workspaces and queries fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Data ingested in the last 14 days and data recently used in queries is also kept in hot-cache (SSD-backed) for query efficiency. The data on SSD gets deleted on key revocation operation and becomes inaccessible. The cluster's storage attempts to unwrap encryption periodically with your Key Vault and once you have reverted revocation, the unwrap succeeds, SSD data is reloaded from storage, data ingestion and query are resumed within 30 minutes.
+The cluster storage will always respect changes in key permissions within an hour or sooner, and storage will become unavailable. New data ingested to linked workspaces is dropped and non-recoverable, data is inaccessible on these workspaces, and queries fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Data ingested in the last 14 days and data recently used in queries is also kept in hot-cache (SSD-backed) for query efficiency. The data on SSD gets deleted on key revocation operation and becomes inaccessible. The cluster storage periodically attempts to reach your Key Vault to unwrap the encryption key; once the key is enabled again, unwrap succeeds, SSD data is reloaded from storage, and data ingestion and query resume within 30 minutes.
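Disabling the key can be done directly in Key Vault; a hedged sketch with placeholder names:

```azurecli
# Revoke access to cluster data by disabling the encryption key.
# Re-enabling the key (--enabled true) reverts the revocation.
az keyvault key set-attributes \
  --vault-name "key-vault-name" \
  --name "key-name" \
  --enabled false
```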
## Key rotation

Key rotation has two modes:
-- Auto-rotation - when you update your cluster with ```"keyVaultProperties"``` but omit ```"keyVersion"``` property, or set it to ```""```, storage will automatically use the latest versions.
-- Explicit key version update - when you update your cluster and provide key version in ```"keyVersion"``` property, any new key versions require an explicit ```"keyVaultProperties"``` update in cluster, see [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you generate new key version in Key Vault but don't update it in the cluster, the Log Analytics cluster storage will keep using your previous key. If you disable or delete your old key before updating the new key in the cluster, you will get into [key revocation](#key-revocation) state.
+- Autorotation: update your cluster with ```"keyVaultProperties"``` but omit the ```"keyVersion"``` property, or set it to ```""```. Storage will automatically use the latest key version.
+- Explicit key version update: update your cluster with the key version in the ```"keyVersion"``` property. Rotation of keys requires an explicit ```"keyVaultProperties"``` update in the cluster, see [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you generate a new key version in Key Vault but don't update it in the cluster, the cluster storage will keep using your previous key. If you disable or delete the old key before updating a new one in the cluster, you will get into [key revocation](#key-revocation) state.
-All your data remains accessible after the key rotation operation, since data always encrypted with Account Encryption Key (AEK) while AEK is now being encrypted with your new Key Encryption Key (KEK) version in Key Vault.
+All your data remains accessible after the key rotation operation. Data is always encrypted with the Account Encryption Key ("AEK"), which is in turn encrypted with your new Key Encryption Key ("KEK") version in Key Vault.
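For example, opting into autorotation is the same key update call with an empty version; a hedged sketch with placeholder names:

```azurecli
# Pass an empty --key-version so storage follows the latest key version automatically.
az monitor log-analytics cluster update \
  --resource-group "resource-group-name" \
  --name "cluster-name" \
  --key-name "key-name" \
  --key-vault-uri "key-uri" \
  --key-version ""
```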
## Customer-managed key for saved queries and log alerts
-The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log alerts* queries encrypted with your key in your own storage account when connected to your workspace.
+The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy, and you need to save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log alerts* queries encrypted with your key in your own Storage Account when connected to your workspace.
> [!NOTE]
-> Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key (MMK) in the following scenarios regardless Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
+> Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key ("MMK") in the following scenarios regardless of Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
-When you Bring Your Own Storage (BYOS) and link it to your workspace, the service uploads *saved-searches* and *log alerts* queries to your storage account. That means that you control the storage account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md) either using the same key that you use to encrypt data in Log Analytics cluster, or a different key. You will, however, be responsible for the costs associated with that storage account.
+When you link your own storage (BYOS) to your workspace, the service stores *saved-searches* and *log alerts* queries in your Storage Account. With the control on the Storage Account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect *saved-searches* and *log alerts* with Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account.
**Considerations before setting Customer-managed key for queries**
-* You need to have 'write' permissions to both your workspace and Storage Account
-* Make sure to create your Storage Account in the same region as your Log Analytics workspace is located
-* The *saves searches* in storage is considered as service artifacts and their format may change
-* Existing *saves searches* are removed from your workspace. Copy and any *saves searches* that you need before the configuration. You can view your *saved-searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch)
-* Query history isn't supported and you won't be able to see queries that you ran
-* You can link a single storage account to workspace for the purpose of saving queries, but is can be used fro both *saved-searches* and *log alerts* queries
-* Pin to dashboard isn't supported
+* You need to have "write" permissions on your workspace and Storage Account.
+* Make sure to create your Storage Account in the same region as your Log Analytics workspace is located.
+* The *saved searches* in storage are considered service artifacts and their format may change.
+* Existing *saved searches* are removed from your workspace. Copy any *saved searches* that you need before the configuration. You can view your *saved-searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
+* Query history isn't supported and you won't be able to see queries that you ran.
+* You can link a single Storage Account to a workspace, which can be used for both *saved-searches* and *log alerts* queries.
+* Pin to dashboard isn't supported.
* Fired log alerts will not contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.

**Configure BYOS for saved-searches queries**
-Link a storage account for *Query* to your workspace -- *saved-searches* queries are saved in your storage account.
+Link a Storage Account for *Query* to keep *saved-searches* queries in your Storage Account.
# [Azure portal](#tab/portal)
N/A
```azurecli
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'
-az account set --subscription "workspace-subscription-id"
+az account set ΓÇösubscription "workspace-subscription-id"
-az monitor log-analytics workspace linked-storage create --type Query --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
+az monitor log-analytics workspace linked-storage create --type Query --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
```

# [PowerShell](#tab/powershell)
After the configuration, any new *saved search* query will be saved in your stor
**Configure BYOS for log alerts queries**
-Link a storage account for *Alerts* to your workspace -- *log alerts* queries are saved in your storage account.
+Link a Storage Account for *Alerts* to keep *log alerts* queries in your Storage Account.
# [Azure portal](#tab/portal)
N/A
```azurecli
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'
-az account set --subscription "workspace-subscription-id"
+az account set ΓÇösubscription "workspace-subscription-id"
-az monitor log-analytics workspace linked-storage create --type ALerts --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
+az monitor log-analytics workspace linked-storage create --type Alerts --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
```

# [PowerShell](#tab/powershell)
Content-type: application/json
  }
}
```
After the configuration, any new alert query will be saved in your storage.
Lockbox gives you the control to approve or reject a Microsoft engineer's request to access your data during a support request.
-In Azure Monitor, you have this control on data in workspaces linked to your Log Analytics dedicated cluster. The Lockbox control applies to data stored in a Log Analytics dedicated cluster where itΓÇÖs kept isolated in the clusterΓÇÖs storage accounts under your Lockbox protected subscription.
+In Azure Monitor, you have this control on data in workspaces linked to your dedicated cluster. The Lockbox control applies to data stored in a dedicated cluster where itΓÇÖs kept isolated in the cluster storage under your Lockbox protected subscription.
Learn more about [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md)
Customer-Managed key is provided on dedicated cluster and these operations are r
## Limitations and constraints

-- The max number of cluster per region and subscription is 2
+- The maximum number of clusters per region and subscription is two.
-- The maximum number of workspaces that can be linked to a cluster is 1000
+- The maximum number of workspaces that can be linked to a cluster is 1000.
-- You can link a workspace to your cluster and then unlink it. The number of workspace link operations on particular workspace is limited to 2 in a period of 30 days.
+- You can link a workspace to your cluster and then unlink it. The number of workspace link operations on a particular workspace is limited to two in a period of 30 days.
- Customer-managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration, remains encrypted with Microsoft key. You can query data ingested before and after the Customer-managed key configuration seamlessly.
- The Azure Key Vault must be configured as recoverable. These properties aren't enabled by default and should be configured using CLI or PowerShell:<br>
- - [Soft Delete](../../key-vault/general/soft-delete-overview.md)
- - [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) should be turned on to guard against force deletion of the secret / vault even after soft delete.
+ - [Soft Delete](../../key-vault/general/soft-delete-overview.md).
+ - [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) should be turned on to guard against force deletion of the secret or vault even after soft delete.
- Cluster move to another resource group or subscription isn't supported currently.
Customer-Managed key is provided on dedicated cluster and these operations are r
- Lockbox isn't available in China currently.
- [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled.
- - If you create a cluster and get an error "region-name doesnΓÇÖt support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
+ - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- Double encryption setting cannot be changed after the cluster has been created.
- Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support. The recommended way to revoke access to your data is [key revocation](#key-revocation).
Customer-Managed key is provided on dedicated cluster and these operations are r
## Troubleshooting

-- Behavior with Key Vault availability
- - In normal operation -- Storage caches AEK for short periods of time and goes back to Key Vault to unwrap periodically.
+- Behavior per Key Vault availability:
+ - Normal operation: storage caches "AEK" for short periods of time and goes back to Key Vault to unwrap periodically.
- - Key Vault connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for the duration of the availability issue and this overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
+ - Key Vault connection errors: storage handles transient errors (timeouts, connection failures, "DNS" issues) by allowing keys to stay in cache for the duration of the availability issue, and it overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
-- Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 to 60 seconds.
+- Key Vault access rate: the frequency at which the cluster storage accesses Key Vault for wrap and unwrap operations is between 6 and 60 seconds.
-- If you update your cluster while the cluster is at provisioning or updating state, the update will fail.
+- If you update your cluster while it's in a provisioning or updating state, the update will fail.
-- If you get conflict error when creating a cluster ΓÇô It may be that you have deleted your cluster in the last 14 days and itΓÇÖs in a soft-delete period. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+- If you get a conflict error when creating a cluster, you may have deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
- Workspace link to cluster will fail if it is linked to another cluster. -- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail since the
- access policy can't be defined until system identity is assigned to the cluster.
+- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail since the Access Policy can't be defined until system identity is assigned to the cluster.
- If you update existing cluster with KeyVaultProperties and 'Get' key Access Policy is missing in Key Vault, the operation will fail. -- If you fail to deploy your cluster, verify that your Azure Key Vault, cluster and linked Log Analytics workspaces are in the same region. The can be in different subscriptions.
+- If you fail to deploy your cluster, verify that your Azure Key Vault, cluster, and linked workspaces are in the same region. They can be in different subscriptions.
-- If you update your key version in Key Vault and don't update the new key identifier details in the cluster, the Log Analytics cluster will keep using your previous key and your data will become inaccessible. Update new key identifier details in the cluster to resume data ingestion and ability to query data.
+- If you update your key version in Key Vault and don't update the new key identifier details in the cluster, the cluster will keep using your previous key and your data will become inaccessible. Update the new key identifier details in the cluster to resume data ingestion and the ability to query data.
-- Some operations are long and can take a while to complete -- these are cluster create, cluster key update and cluster delete. You can check the operation status by sending GET request to cluster or workspace and observe the response. For example, unlinked workspace won't have the *clusterResourceId* under *features*.
+- Some operations are long-running and can take a while to complete: cluster create, cluster key update, and cluster delete. You can check the operation status by sending a GET request to the cluster or workspace and observing the response. For example, an unlinked workspace won't have the *clusterResourceId* under *features*.
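For example, a hedged sketch of the workspace check described above (names and `api-version` are placeholders or assumptions):

```azurecli-interactive
# Hypothetical example: an unlinked workspace returns no clusterResourceId under features.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>?api-version=2021-06-01" \
  --query "properties.features.clusterResourceId"
```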
- Error messages **Cluster Create**
- - 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
- - 400 -- The body of the request is null or in bad format.
- - 400 -- SKU name is invalid. Set SKU name to capacityReservation.
- - 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
- - 400 -- Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
- - 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
- - 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
- - 400 -- Identity is null or empty. Set Identity with systemAssigned type.
- - 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
- - 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
+ - 400 - Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
+ - 400 - The body of the request is null or in bad format.
+ - 400 - SKU name is invalid. Set SKU name to capacityReservation.
+ - 400 - Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
+ - 400 - Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
+ - 400 - Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
+ - 400 - No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
+ - 400 - Identity is null or empty. Set Identity with systemAssigned type.
+ - 400 - KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
+ - 400 - Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
**Cluster Update**
- - 400 -- Cluster is in deleting state. Async operation is in progress . Cluster must complete its operation before any update operation is performed.
- - 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](#update-cluster-with-key-identifier-details).
- - 400 -- Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesnΓÇÖt exist. Verify that you [set key and access policy](#grant-key-vault-permissions) in Key Vault.
- - 400 -- Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
- - 400 -- Operation cannot be executed now. Wait for the Async operation to complete and try again.
- - 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.
+ - 400 - Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
+ - 400 - KeyVaultProperties is not empty but has a bad format. See [key identifier update](#update-cluster-with-key-identifier-details).
+ - 400 - Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and Access Policy](#grant-key-vault-permissions) in Key Vault.
+ - 400 - Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md).
+ - 400 - Operation cannot be executed now. Wait for the Async operation to complete and try again.
+ - 400 - Cluster is in deleting state. Wait for the Async operation to complete and try again.
**Cluster Get**
- - 404 -- Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
+ - 404 - Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
**Cluster Delete**
- - 409 -- Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
+ - 409 - Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
**Workspace link**
- - 404 -- Workspace not found. The workspace you specified doesnΓÇÖt exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
- - 400 -- Cluster not found, the cluster you specified doesnΓÇÖt exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+ - 404 - Workspace not found. The workspace you specified doesn't exist or was deleted.
+ - 409 - Workspace link or unlink operation in process.
+ - 400 - Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
**Workspace unlink**
- - 404 -- Workspace not found. The workspace you specified doesnΓÇÖt exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
+ - 404 - Workspace not found. The workspace you specified doesn't exist or was deleted.
+ - 409 - Workspace link or unlink operation in process.
## Next steps - Learn about [Log Analytics dedicated cluster billing](./manage-cost-storage.md#log-analytics-dedicated-clusters)
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-installation.md
This article provides a guide for installation of the Azure Application Consiste
## Introduction The downloadable self-installer is designed to make the snapshot tools easy to set up and run with non-root user privileges (for example, azacsnap). The installer will set up the user and put the snapshot tools into the user's `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`).
-The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the pre-requisite steps (enable communication with storage and SAP HANA) were run as root, then the installation will copy the private key and `hdbuserstore` to the backup userΓÇÖs location. However, it is possible for the steps that enable communication with the storage back-end and SAP HANA to be manually done by a knowledgeable administrator after the installation.
+The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the pre-requisite steps (enable communication with storage and SAP HANA) were run as root, then the installation will copy the private key and `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end and SAP HANA can be manually done by a knowledgeable administrator after the installation.
## Prerequisites for installation
tools.
1. **Time Synchronization is set up**. The customer will need to provide an NTP-compatible time server and configure the OS accordingly. 1. **HANA is installed**: See HANA installation instructions in [SAP NetWeaver Installation on HANA database](/archive/blogs/saponsqlserver/sap-netweaver-installation-on-hana-database).
-1. **[Enable communication with storage](#enable-communication-with-storage)** (refer separate section for more details): Select the storage back-end you are using for your deployment.
+1. **[Enable communication with storage](#enable-communication-with-storage)** (for more information, see separate section): Select the storage back-end you're using for your deployment.
# [Azure NetApp Files](#tab/azure-netapp-files)
- 1. **For Azure NetApp Files (refer separate section for details)**: Customer must generate the service principal authentication file.
+ 1. **For Azure NetApp Files (for more information, see separate section)**: Customer must generate the service principal authentication file.
> [!IMPORTANT] > When validating communication with Azure NetApp Files, communication might fail or time-out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
tools.
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
- 1. **For Azure Large Instance (refer separate section for details)**: Customer must set up SSH with a
- private/public key pair, and provide the public key for each node where the snapshot tools are
- planned to be executed to Microsoft Operations for setup on the storage back-end.
+ 1. **For Azure Large Instance (for more information, see separate section)**: Set up SSH with a
+ private/public key pair. Provide the public key for each node, where the snapshot tools are
+ planned to be executed, to Microsoft Operations for setup on the storage back-end.
Test this by using SSH to connect to one of the nodes (for example, `ssh -l <Storage UserName> <Storage IP Address>`). Type `exit` to log out of the storage prompt.
tools.
-1. **[Enable communication with storage](#enable-communication-with-storage)** (refer separate section for more details): Select the storage back-end you are using for your deployment.
+1. **[Enable communication with storage](#enable-communication-with-storage)** (for more information, see separate section): Select the storage back-end you're using for your deployment.
-1. **[Enable communication with database](#enable-communication-with-database)** (refer separate section for more details):
+1. **[Enable communication with database](#enable-communication-with-database)** (for more information, see separate section):
# [SAP HANA](#tab/sap-hana)
- Customer must set up an appropriate SAP HANA user with the required privileges to perform the snapshot.
+ Set up an appropriate SAP HANA user with the required privileges to perform the snapshot.
+
+ 1. This setting can be tested from the command line as follows using these examples:
- 1. This setting can be tested from the command line as follows using the text in `grey`
1. HANAv1 `hdbsql -n <HANA IP address> -i <HANA instance> -U <HANA user> "\s"`
tools.
`hdbsql -n <HANA IP address> -i <HANA instance> -d SYSTEMDB -U <HANA user> "\s"`
- - The examples above are for non-SSL communication to SAP HANA.
+ > [!NOTE]
+ > These examples are for non-SSL communication to SAP HANA.
## Enable communication with storage
-This section explains how to enable communication with storage. Ensure the storage back-end you are using is correctly selected.
+This section explains how to enable communication with storage. Ensure the storage back-end you're using is correctly selected.
# [Azure NetApp Files (with Virtual Machine)](#tab/azure-netapp-files)
Create RBAC Service Principal
az account show ```
-1. If the subscription is not correct, use
+1. If the subscription isn't correct, use the following command:
```azurecli-interactive az account set -s <subscription name or id> ```
-1. Create a service principal using Azure CLI per the following example
+1. Create a service principal using Azure CLI per the following example:
```azurecli-interactive az ad sp create-for-rbac --role Contributor --sdk-auth
example steps are to provide guidance on setup of SSH for this communication.
1. Send the public key to Microsoft Operations
- Send the output of the `cat /root/.ssh/id_rsa.pub` command (example below) to Microsoft Operations
+ Send the output of the `cat /root/.ssh/id_rsa.pub` command to Microsoft Operations
to enable the snapshot tools to communicate with the storage subsystem. ```bash
example steps are to provide guidance on setup of SSH for this communication.
## Enable communication with database
-This section explains how to enable communication with storage. Ensure the storage back-end you are using is correctly selected.
+This section explains how to enable communication with the database. Ensure the database you're using is correctly selected.
# [SAP HANA](#tab/sap-hana)
HANA v2 user and the `hdbuserstore` for communication to the SAP HANA database.
The following example commands set up a user (AZACSNAP) in the SYSTEMDB of an SAP HANA 2.0 database. Change the IP address, usernames, and passwords as appropriate:
-1. Connect to the SYSTEMDB to create the user
+1. Connect to the SYSTEMDB to create the user.
```bash hdbsql -n <IP_address_of_host>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD>
database, change the IP address, usernames, and passwords as appropriate:
hdbsql SYSTEMDB=> ```
-1. Create the user
+1. Create the user.
This example creates the AZACSNAP user in the SYSTEMDB.
database, change the IP address, usernames, and passwords as appropriate:
hdbsql SYSTEMDB=> CREATE USER AZACSNAP PASSWORD <AZACSNAP_PASSWORD_CHANGE_ME> NO FORCE_FIRST_PASSWORD_CHANGE; ```
-1. Grant the user permissions
+1. Grant the user permissions.
This example sets the permission for the AZACSNAP user to allow for performing a database consistent storage snapshot.
+
+ 1. For SAP HANA releases up to version 2.0 SPS 03:
- ```sql
- hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, CATALOG READ, MONITORING TO AZACSNAP;
- ```
+ ```sql
+ hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, CATALOG READ TO AZACSNAP;
+ ```
-1. *OPTIONAL* - Prevent user's password from expiring
+ 1. For SAP HANA releases from version 2.0 SPS 04, SAP added new fine-grained privileges:
+
+ ```sql
+ hdbsql SYSTEMDB=> GRANT BACKUP ADMIN, DATABASE BACKUP ADMIN, CATALOG READ TO AZACSNAP;
+ ```
+
+1. *OPTIONAL* - Prevent user's password from expiring.
> [!NOTE] > Check with corporate policy before making this change.
database, change the IP address, usernames, and passwords as appropriate:
hdbsql SYSTEMDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME; ```
-1. Set up the SAP HANA Secure User Store (change the password)
- This example uses the `hdbuserstore` command from the Linux shell to set up the SAP HANA Secure User store.
+1. Set up the SAP HANA Secure User Store (change the password).
+ This example uses the `hdbuserstore` command from the Linux shell to set up the SAP HANA Secure User Store.
```bash hdbuserstore Set AZACSNAP <IP_address_of_host>:30013 AZACSNAP <AZACSNAP_PASSWORD_CHANGE_ME> ```
-1. Check the SAP HANA Secure User Store
+1. Check the SAP HANA Secure User Store.
To check if the secure user store is set up correctly, use the `hdbuserstore` command to list the output similar to the following example. More details on using `hdbuserstore` are available on the SAP website.
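The documented example output isn't reproduced here; as a minimal sketch (assuming the AZACSNAP key created in the previous step), the check can be run from the Linux shell as the backup user:

```bash
# List the stored connection details for the AZACSNAP key; the password itself is never displayed.
hdbuserstore List AZACSNAP
```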
The following are always used when using the `azacsnap --ssl` option: - `-e` - Enables TLS/SSL encryption. The server chooses the highest available. - `-ssltrustcert` - Specifies whether to validate the server's certificate. - `-sslhostnameincert "*"` - Specifies the host name used to verify server's identity. By
- `-e` - Enables TLS encryptionTLS/SSL encryption. The server chooses the highest available. - `-ssltrustcert` - Specifies whether to validate the server's certificate. - `-sslhostnameincert "*"` - Specifies the host name used to verify serverΓÇÖs identity. By
- specifying `"*"` as the host name, then the server's host name is not validated.
+ specifying `"*"` as the host name, the server's host name isn't validated.
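A hedged example of a backup run over SSL, assuming openssl-based key material is already in place in the `securityPath` location described next:

```bash
# Hypothetical run using the openssl provider for encrypted communication to SAP HANA.
azacsnap -c backup --volume data --prefix=hana_ssl_test --retention=1 --ssl=openssl
```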
-SSL communication also requires Key Store and Trust Store files. While it is possible for
+SSL communication also requires Key Store and Trust Store files. While it's possible for
these files to be stored in default locations on a Linux installation, to ensure the
-correct key material is being used for the various SAP HANA systems (that is, in the cases where
+correct key material is being used for the various SAP HANA systems (for the cases where
different key-store and trust-store files are used for each SAP HANA system) `azacsnap` expects the key-store and trust-store files to be stored in the `securityPath` location as specified in the `azacsnap` configuration file.
to the command line.
#### Trust Store files -- If using multiple SIDs with the same key material create hard-links into the securityPath
+- If using multiple SIDs with the same key material, create hard-links into the securityPath
location as defined in the `azacsnap` config file. Ensure these values exist for every SID using SSL. - For openssl:
into the users `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`).
The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the previous setup steps (Enable communication with storage and SAP HANA) were run as root, then the installation will copy the
-private key and the `hdbuserstore` to the backup user's location. However, it is possible for the steps
-which enable communication with the storage back-end and SAP HANA to be manually done by a
-knowledgeable administrator after the installation.
+private key and the `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end
+and SAP HANA can be manually done by a knowledgeable administrator after the installation.
> [!NOTE] > For earlier SAP HANA on Azure Large Instance installations, the directory of pre-installed
installer is run with only the -I option, it will do the following steps:
1. Search filesystem for directories to add to azacsnap's `$LD_LIBRARY_PATH`. Many commands require a library path to be set in order to execute correctly; this configures it for the installed user.
-1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running
- the install). This assumes the "root" user has already configured connectivity to the storage
- - see section "[Enable communication with storage](#enable-communication-with-storage)".
-1. Copy the SAP HANA connection secure user store for the target user, azacsnap. This
- assumes the "root" user has already configured the secure user store ΓÇô see section "Enable
- communication with SAP HANA".
+1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running the install). This assumes the "root" user has
+ already configured connectivity to the storage (for more information, see section [Enable communication with storage](#enable-communication-with-storage)).
+1. Copy the SAP HANA connection secure user store for the target user, azacsnap. This
+   assumes the "root" user has already configured the secure user store (for more information, see section "Enable communication with SAP HANA").
1. The snapshot tools are extracted into `/home/azacsnap/bin/`. 1. The commands in `/home/azacsnap/bin/` have their permissions set (ownership and executable bit, etc.).
userdel -f -r azacsnap
### Manual installation of the snapshot tools
-In some cases, it is necessary to install the tools manually, but the recommendation is to use the
+In some cases, it's necessary to install the tools manually, but the recommendation is to use the
installer's default option to ease this process. Each line starting with a `#` character demonstrates the example commands following the character
The following output shows the steps to complete after running the installer wit
1. Run your first snapshot backup 1. `azacsnap -c backup --volume data --prefix=hana_test --retention=1`
-Step 2 will be necessary if "[Enable communication with database](#enable-communication-with-database)" was not done before the
+Step 2 will be necessary if "[Enable communication with database](#enable-communication-with-database)" wasn't done before the
installation. > [!NOTE]
This section explains how to configure the data base.
### SAP HANA Configuration
-There are some recommended changes to be applied to SAP HANA to ensure protection of the log backups and catalog. By default, the `basepath_logbackup` and `basepath_catalogbackup` will output their files to the `$(DIR_INSTANCE)/backup/log` directory, and it is unlikely this path is on a volume which `azacsnap` is configured to snapshot these files will not be protected with storage snapshots.
+There are some recommended changes to be applied to SAP HANA to ensure protection of the log backups and catalog. By default, the `basepath_logbackup` and `basepath_catalogbackup` will output their files to the `$(DIR_INSTANCE)/backup/log` directory, and it's unlikely this path is on a volume that `azacsnap` is configured to snapshot, so these files won't be protected with storage snapshots.
-The following `hdbsql` command examples are intended to demonstrate setting the log and catalog paths to locations which are on storage volumes that can be snapshot by `azacsnap`. Be sure to check the values on the command line match the local SAP HANA configuration.
+The following `hdbsql` command examples demonstrate setting the log and catalog paths to locations that are on storage volumes that can be snapshot by `azacsnap`. Be sure to check the values on the command line match the local SAP HANA configuration.
### Configure log backup location
hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD
### Check log and catalog backup locations
-After making the changes above, confirm that the settings are correct with the following command.
-In this example, the settings that have been set following the guidance above will display as
-SYSTEM settings.
+After making the changes to the log and catalog backup locations, confirm the settings are correct with the following command.
+In this example, the settings that have been set following the example will display as SYSTEM settings.
> This query also returns the DEFAULT settings for comparison.
global.ini,SYSTEM,,,persistence,basepath_logvolumes,/hana/log/H80
### Configure log backup timeout The default setting for SAP HANA to perform a log backup is 900 seconds (15 minutes). It's
-recommended to reduce this value to 300 seconds (that is, 5 minutes). Then it is possible to run regular
-backups (for example, every 10 minutes) by adding the log_backups volume into the OTHER volume section of the
+recommended to reduce this value to 300 seconds (that is, 5 minutes). Then it's possible to run regular
+backups of these files (for example, every 10 minutes) by adding the log_backups volumes to the OTHER volume section of the
configuration file. ```bash
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-introduction.md
Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases by handling all the orchestration required to put them into an application consistent state before taking a storage snapshot, after which it returns them to an operational state.
-## Supported Platforms and OS
+## Supported Databases, OS, and Azure Platforms
- **Databases** - SAP HANA (refer to [support matrix](azacsnap-get-started.md#snapshot-support-matrix-from-sap) for details)
Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool tha
- SUSE Linux Enterprise Server 12+ - Red Hat Enterprise Linux 7+
+- **Azure Platforms**
+ - Azure Virtual Machine with Azure NetApp Files storage
+ - Azure Large Instance (on BareMetal Infrastructure)
+
+> [!TIP]
+> If you're looking for new features or support for other databases, operating systems, and platforms, check out the [Preview](azacsnap-preview.md) page. You can also provide [feedback or suggestions](https://aka.ms/azacsnap-feedback).
+ ## Benefits of using AzAcSnap AzAcSnap leverages the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits:
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-preview.md
cat snapshot-to-blob.sh
``` ```output
-#!/bin/sh
+#!/bin/bash
+# Utility to upload-to/list Azure Blob store.
+# If run as snapshot-to-blob.sh will upload a gzipped tarball of the snapshot.
+# If run as list-blobs.sh will list uploaded blobs.
+# e.g. `ln -s snapshot-to-blob.sh list-blobs.sh`
++ # _START_ Change these
-saskeyFile="$HOME/bin/blob-credentials.saskey"
+SAS_KEY_FILE="${HOME}/bin/blob-credentials.saskey"
# the snapshots need to be mounted locally for copying, put source directory here
-sourceDir=/mnt/saphana1/hana_data_PR1/.snapshot
+SOURCE_DIR="/mnt/saphana1/hana_data_PR1/.snapshot"
# _END_ Change these +
+# _START_ AzCopy Settings
+#Overrides where the job plan files (used for progress tracking and resuming) are stored, to avoid filling up a disk.
+export AZCOPY_JOB_PLAN_LOCATION="${HOME}/.azcopy/plans/"
+#Overrides where the log files are stored, to avoid filling up a disk.
+export AZCOPY_LOG_LOCATION="${HOME}/.azcopy/logs/"
+#If set, to anything, on-screen output will include counts of chunks by state
+export AZCOPY_SHOW_PERF_STATES=true
+# _END_ AzCopy Settings
++ # do not change any of the following
-#
-if [ -r $saskeyFile ]; then
- . $saskeyFile
-else
- echo "Credential file '$saskeyFile' not found, exiting!"
-fi
-# Log files
-archiveLog="logs/`basename $0`.log"
-echo "-- Started ($0 $snapshotName $prefix) @ `date "+%d-%h-%Y %H:%M"`" >> $archiveLog
-env >> $archiveLog
-#
-if [ "$1" == "" -o "$2" == "" ]; then
- echo "Usage: $0 <snapshotName> <prefix>"
+
+# Make sure we got some command line args
+if [ "$(basename "$0")" = "snapshot-to-blob.sh" ] && ([ "$1" = "" ] || [ "$2" = "" ]); then
+ echo "Usage: $0 <SNAPSHOT_NAME> <PREFIX>"
exit 1 fi
-blobStore="`echo $portalGeneratedSas | cut -f1 -d'?'`"
-blobSasKey="`echo $portalGeneratedSas | cut -f2 -d'?'`"
-snapshotName=$1
-prefix=$2
+# Make sure we can read the SAS key credential file.
+if [ -r "${SAS_KEY_FILE}" ]; then
+ source "${SAS_KEY_FILE}"
+else
+ echo "Credential file '${SAS_KEY_FILE}' not found, exiting!"
+fi
++
+# Assign the rest of the Global variables.
+SNAPSHOT_NAME=$1
+PREFIX=$2
+BLOB_STORE="$(echo "${PORTAL_GENERATED_SAS}" | cut -f1 -d'?')"
+BLOB_SAS_KEY="$(echo "${PORTAL_GENERATED_SAS}" | cut -f2 -d'?')"
+ARCHIVE_LOG="logs/$(basename "$0").log"
# Archive naming (daily.1, daily.2, etc...)
-dayOfWeek=`date "+%u"`
-monthOfYear=`date "+%m"`
-archiveBlobTgz="$prefix.$dayOfWeek.tgz"
+DAY_OF_WEEK=$(date "+%u")
+MONTH_OF_YEAR=$(date "+%m")
+ARCHIVE_BLOB_TGZ="${PREFIX}.${DAY_OF_WEEK}.tgz"
+
+#######################################
+# Write to the log.
+# Globals:
+# None
+# Arguments:
+# LOG_MSG
+#######################################
+write_log(){
+ LOG_MSG=$1
+ date=$(date "+[%d/%h/%Y:%H:%M:%S %z]")
+ echo "$date ${LOG_MSG}" >> "${ARCHIVE_LOG}"
+}
+
-runCmd(){
- echo "[RUNCMD] $1" >> $archiveLog
- bash -c "$1"
+#######################################
+# Run and Log the command.
+# Globals:
+# None
+# Arguments:
+# CMD_TO_RUN
+#######################################
+run_cmd(){
+ CMD_TO_RUN="${1}"
+ write_log "[RUNCMD] ${CMD_TO_RUN}"
+ bash -c "${CMD_TO_RUN}"
}
-main() {
- # Check sourceDir and snapshotName exist
- if [ ! -d "$sourceDir/$snapshotName" ]; then
- echo "$sourceDir/$snapshotName not found, exiting!" | tee -a $archiveLog
+
+#######################################
+# Check snapshot exists and then background the upload to Blob store.
+# Globals:
+# SOURCE_DIR
+# SNAPSHOT_NAME
+# ARCHIVE_LOG
+# Arguments:
+# None
+#######################################
+snapshot_to_blob(){
+ # Check SOURCE_DIR and SNAPSHOT_NAME exist
+ if [ ! -d "${SOURCE_DIR}/${SNAPSHOT_NAME}" ]; then
+ echo "${SOURCE_DIR}/${SNAPSHOT_NAME} not found, exiting!" | tee -a "${ARCHIVE_LOG}"
exit 1 fi
+ # background ourselves so AzAcSnap exits cleanly
+ echo "Backgrounding '$0 $@' to prevent blocking azacsnap"
+ echo "write_logging to ${ARCHIVE_LOG}"
+ {
+ trap '' HUP
+ # the script
+ upload_to_blob
+ list_blob >> "${ARCHIVE_LOG}"
+  } </dev/null >/dev/null 2>&1 &
+}
+
+#######################################
+# Upload to Blob store.
+# Globals:
+# SOURCE_DIR
+# SNAPSHOT_NAME
+# ARCHIVE_BLOB_TGZ
+# BLOB_STORE
+# BLOB_SAS_KEY
+# ARCHIVE_LOG
+# Arguments:
+# None
+#######################################
+upload_to_blob(){
# Copy snapshot to blob store
- echo " Starting copy of $snapshotName to $blobStore/$archiveBlobTgz" >> $archiveLog
- runCmd "cd $sourceDir/$snapshotName && tar zcvf - * | azcopy cp \"$blobStore/$archiveBlobTgz?$blobSasKey\" --from-to PipeBlob && cd -"
- echo " Completed copy of $snapshotName $blobStore/$archiveBlobTgz" >> $archiveLog
- echo " Current list of files stored in $blobStore" >> $archiveLog
- runCmd "azcopy list \"$blobStore?$blobSasKey\" --properties LastModifiedTime " >> $archiveLog
+ echo "Starting upload of ${SNAPSHOT_NAME} to ${BLOB_STORE}/${ARCHIVE_BLOB_TGZ}" >> "${ARCHIVE_LOG}"
+ run_cmd "azcopy env ; cd ${SOURCE_DIR}/${SNAPSHOT_NAME} && tar zcvf - * | azcopy cp \"${BLOB_STORE}/${ARCHIVE_BLOB_TGZ}?${BLOB_SAS_KEY}\" --from-to PipeBlob && cd -"
+ echo "Completed upload of ${SNAPSHOT_NAME} ${BLOB_STORE}/${ARCHIVE_BLOB_TGZ}" >> "${ARCHIVE_LOG}"
# Complete
- echo "-- Finished ($0 $snapshotName $prefix) @ `date "+%d-%h-%Y %H:%M"`" >> $archiveLog
- echo "--" >> $archiveLog
+ echo "Finished ($0 ${SNAPSHOT_NAME} ${PREFIX}) @ $(date "+%d-%h-%Y %H:%M")" >> "${ARCHIVE_LOG}"
+ echo "--" >> "${ARCHIVE_LOG}"
# col 12345678901234567890123456789012345678901234567890123456789012345678901234567890 }
-# background ourselves so AzAcSnap exits cleanly
-echo "Backgrounding '$0 $@' to prevent blocking azacsnap"
-echo "Logging to $archiveLog"
-{
- trap '' HUP
- # the script
- main
-} < > 2>&1 &
+
+#######################################
+# List contents of Blob store.
+# Globals:
+# BLOB_STORE
+# BLOB_SAS_KEY
+# Arguments:
+# None
+#######################################
+list_blob(){
+ LOG_MSG="Current list of files stored in ${BLOB_STORE}"
+ write_log "${LOG_MSG}"
+ echo "${LOG_MSG}"
+ run_cmd "azcopy list \"${BLOB_STORE}?${BLOB_SAS_KEY}\" --properties LastModifiedTime "
+}
++
+# Log when script started.
+write_log "Started ($0 ${SNAPSHOT_NAME} ${PREFIX}) @ $(date "+%d-%h-%Y %H:%M")"
++
+# Check what this was called as ($0) and run accordingly.
+case "$(basename "$0")" in
+ "snapshot-to-blob.sh" )
+ snapshot_to_blob
+ ;;
+ "list-blobs.sh" )
+ list_blob
+ ;;
+ *)
+ echo "Command '$0' not recognised!"
+ ;;
+esac
``` The saskeyFile contains the following example SAS Key (content changed for security):
cat blob-credentials.saskey
```output # we need a generated SAS key, get this from the portal with read,add,create,write,list permissions
-portalGeneratedSas="https://<targetstorageaccount>.blob.core.windows.net/<blob-store>?sp=racwl&st=2021-06-10T21:10:38Z&se=2021-06-11T05:10:38Z&spr=https&sv=2020-02-10&sr=c&sig=<key-material>"
+PORTAL_GENERATED_SAS="https://<targetstorageaccount>.blob.core.windows.net/<blob-store>?sp=racwl&st=2021-06-10T21:10:38Z&se=2021-06-11T05:10:38Z&spr=https&sv=2020-02-10&sr=c&sig=<key-material>"
``` ## Next steps
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/snapshots-introduction.md
na Previously updated : 12/20/2021 Last updated : 01/31/2022 # How Azure NetApp Files snapshots work
-This article explains how Azure NetApp Files snapshots work. Azure NetApp Files snapshot technology delivers stability, scalability, and faster recoverability, with no impact to performance. It provides the foundation for data protection solutions, including single-file restores, volume restores and clones, cross-region replication and long-term retention.
+This article explains how Azure NetApp Files snapshots work. Azure NetApp Files snapshot technology delivers stability, scalability, and faster recoverability, with no impact to performance. It provides the foundation for data protection solutions, including single-file restores, volume restores and clones, cross-region replication, and long-term retention.
For steps about using volume snapshots, see [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md). For considerations about snapshot management in cross-region replication, see [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md).
The following diagrams illustrate the concepts:
[ ![The latest changes are captured in Snapshot2 for a second point in time view of the volume (and the files within).](../media/azure-netapp-files/single-file-snapshot-restore-four.png) ](../media/azure-netapp-files/single-file-snapshot-restore-four.png#lightbox)
-When a snapshot is taken, the pointers to the data blocks are copied, and modifications are written to new data locations. The snapshot pointers continue to point to the original data blocks that the file occupied when the snapshot was taken, giving you a live and a historical view of the data. If you were to create a new snapshot, the current pointers (i.e. the ones created after the most recent additions and modifications) are copied to a new snapshot `Snapshot2`. This creates access to three generations of data (the live data, `Snapshot2`, and `Snapshot1`, in order of age) without taking up the volume space that three full copies would require.
+When a snapshot is taken, the pointers to the data blocks are copied, and modifications are written to new data locations. The snapshot pointers continue to point to the original data blocks that the file occupied when the snapshot was taken, giving you a live and a historical view of the data. If you were to create a new snapshot, the current pointers (that is, the ones created after the most recent additions and modifications) are copied to a new snapshot `Snapshot2`. This creates access to three generations of data (the live data, `Snapshot2`, and `Snapshot1`, in order of age) without taking up the volume space that three full copies would require.
A snapshot takes only a copy of the volume metadata (*inode table*). It takes just a few seconds to create, regardless of the volume size, the capacity used, or the level of activity on the volume. As such, taking a snapshot of a 100-TiB volume takes the same (next to zero) amount of time as taking a snapshot of a 100-GiB volume. After a snapshot is created, changes to data files are reflected in the active version of the files, as normal.
The Azure NetApp Files snapshot technology greatly improves the frequency and re
### Restoring (cloning) an online snapshot to a new volume
-You can restore Azure NetApp Files snapshots to separate, independent volumes (clones). This operation is near-instantaneous, regardless of the volume size and the capacity consumed. The newly created volume is almost immediately available for access, while the actual volume and snapshot data blocks are being copied over. Depending on volume size and capacity, this process can take considerable time during which the parent volume and snapshot cannot be deleted. However, the volume can already be accessed after initial creation, while the copy process is in progress in the background. This capability enables fast volume creation for data recovery or volume cloning for test and development. By nature of the data copy process, storage capacity pool consumption will double when the restore completes, and the new volume will show the full active capacity of the original snapshot. After this process is completed, the volume will be independent and disassociated from the original volume, and source volumes and snapshot can be managed or removed independently from the new volume.
+You can restore Azure NetApp Files snapshots to separate, independent volumes (clones). This operation is near-instantaneous, regardless of the volume size and the capacity consumed. The newly created volume is almost immediately available for access, while the actual volume and snapshot data blocks are being copied over. Depending on volume size and capacity, this process can take considerable time during which the parent volume and snapshot cannot be deleted. However, the volume can already be accessed after initial creation, while the copy process is in progress in the background. This capability enables fast volume creation for data recovery or volume cloning for test and development. By nature of the data copy process, storage capacity pool consumption will double when the restore completes, and the new volume will show the full active capacity of the original snapshot. The snapshot used to create the new volume will also be present on the new volume. After this process is completed, the volume will be independent and disassociated from the original volume, and source volumes and snapshot can be managed or removed independently from the new volume.
The following diagram shows a new volume created by restoring (cloning) a snapshot:
See [Restore a file from a snapshot using a client](snapshots-restore-file-clien
### Restoring files or directories from online snapshots using single-file snapshot restore
-If you do not want to restore the entire snapshot to a new volume or copy large files across the network, you can use the [single-file snapshot restore](snapshots-restore-file-single.md) feature to recover individual files directly within a volume from a snapshot, without requiring an external client data copy.
+If you don't want to restore the entire snapshot to a new volume or copy large files across the network, you can use the [single-file snapshot restore](snapshots-restore-file-single.md) feature to recover individual files directly within a volume from a snapshot, without requiring an external client data copy.
This feature does not require that you restore the entire snapshot to a new volume, revert a volume, or copy large files across the network. You can use this feature to restore individual files directly on the service from a volume snapshot without requiring data copy using an external client. This approach can drastically reduce RTO and network resource usage when restoring large files.
Snapshots consume storage capacity. As such, they are not typically kept indefin
> [!IMPORTANT] > The snapshot deletion operation cannot be undone. You should retain offline copies (vaulted snapshots) of the volume for data protection and retention purposes.
-When a snapshot is deleted, all pointers from that snapshot to existing data blocks will be removed. Only when a data block has no more pointers pointing at it (by the active volume, or other snapshots in the volume), the data block is returned to the volume free space for future use. Therefore, removing snapshots usually frees up more capacity in a volume than deleting data from the active volume, because data blocks are often captured in previously created snapshots.
+When a snapshot is deleted, all pointers from that snapshot to existing data blocks will be removed. Only when a data block has no more pointers pointing at it (from the active volume or other snapshots in the volume) is the data block returned to the volume's free space for future use. Therefore, removing snapshots usually frees up more capacity in a volume than deleting data from the active volume, because data blocks are often captured in previously created snapshots.
The following diagram shows the effect on storage consumption of Snapshot 3 deletion from a volume:
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/snapshots-restore-new-volume.md
na Previously updated : 09/16/2021 Last updated : 01/31/2022
## Steps
-1. Select **Snapshots** from the Volume blade to display the snapshot list.
+1. Select **Snapshots** from the Volume page to display the snapshot list.
2. Right-click the snapshot to restore and select **Restore to new volume** from the menu option. ![Screenshot that shows the Restore New Volume menu.](../media/azure-netapp-files/azure-netapp-files-snapshot-restore-to-new-volume.png) 3. In the Create a Volume window, provide information for the new volume: * **Name**
- Specify the name for the volume that you are creating.
+ Specify the name for the volume that you're creating.
The name must be unique within a resource group. It must be at least three characters long. It can use any alphanumeric characters.
![Screenshot that shows the Create a Volume window.](../media/azure-netapp-files/snapshot-restore-new-volume.png)
-4. Click **Review+create**. Click **Create**.
+4. Select **Review+create**. Select **Create**.
The new volume uses the same protocol that the snapshot uses.
- The new volume to which the snapshot is restored appears in the Volumes blade.
+ The new volume to which the snapshot is restored appears in the Volumes page.
+ The snapshot used to create the new volume will also be present on the new volume.
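If you script this instead of using the portal, a rough Azure CLI equivalent might look like the following sketch (every name and the snapshot resource ID are placeholders, and flags can vary by CLI version):

```azurecli-interactive
# Hypothetical example: create a new volume from an existing snapshot.
az netappfiles volume create \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --name <new-volume-name> \
  --location <region> \
  --service-level Premium \
  --usage-threshold 100 \
  --file-path <new-volume-path> \
  --vnet <vnet-name> \
  --subnet <subnet-name> \
  --snapshot-id "<source-snapshot-resource-id>"
```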
## Next steps
azure-resource-manager Async Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/async-operations.md
Title: Status of asynchronous operations description: Describes how to track asynchronous operations in Azure. It shows the values you use to get the status of a long-running operation. Previously updated : 08/21/2020 Last updated : 01/31/2022 # Track asynchronous Azure operations
An asynchronous operation initially returns an HTTP status code of either:
* 201 (Created) * 202 (Accepted)
+However, that status code doesn't necessarily mean the operation is asynchronous. An asynchronous operation also returns a value for `provisioningState` that indicates the operation hasn't finished. The value can vary by operation but won't include **Succeeded**, **Failed**, or **Canceled**. Those three values indicate the operation has finished. If no value is returned for `provisioningState`, the operation has finished and succeeded.
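As a hedged illustration, `provisioningState` can be read from the resource itself; a storage account is used here, and the path and `api-version` are assumptions:

```azurecli-interactive
# Hypothetical example: any value other than Succeeded, Failed, or Canceled means still running.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>?api-version=2021-04-01" \
  --query "properties.provisioningState"
```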
+ When the operation successfully completes, it returns either: * 200 (OK)
GET
https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Storage/operations/{operation-id}?monitor=true&api-version=2019-06-01 ```
-If the request is still running, you receive a status code 202. If the request has completed, your receive a status code 200, and the body of the response contains the properties of the storage account that has been created.
+If the request is still running, you receive a status code 202. If the request has completed, you receive a status code 200. The body of the response contains the properties of the storage account that was created.
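A simple polling sketch against the `monitor=true` URL shown above; the token handling and 10-second interval are illustrative choices, not requirements:

```bash
# Hypothetical polling loop: 202 means the operation is still running, anything else means it finished.
TOKEN=$(az account get-access-token --query accessToken --output tsv)
URL="https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Storage/operations/<operation-id>?monitor=true&api-version=2019-06-01"
while true; do
  HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer ${TOKEN}" "${URL}")
  echo "HTTP status: ${HTTP_CODE}"
  [ "${HTTP_CODE}" != "202" ] && break
  sleep 10
done
```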
## Next steps
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Some resources have a limit on the number instances per region. This limit is di
* snapshots * virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by contacting support. * virtualMachines
+* virtualMachines/extensions - Supports an unlimited number of VM extension instances.
## Microsoft.ContainerInstance
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.HybridCompute
-* machines - supports up to 5,000 instances
-* machines/extensions - supports an unlimited number of VM extension instances
+* machines - Supports up to 5,000 instances.
+* machines/extensions - Supports an unlimited number of VM extension instances.
## microsoft.insights
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-resources.md
The following example creates the number of storage accounts specified in the `s
"parameters": { "storageCount": { "type": "int",
- "defaultValue": 2
+ "defaultValue": 3
} }, "resources": [
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The following table lists the major features of SQL Server and provides informat
| [Certificates and asymmetric keys](/sql/relational-databases/security/sql-server-certificates-and-asymmetric-keys) | Yes, without access to file system for `BACKUP` and `CREATE` operations. | Yes, without access to file system for `BACKUP` and `CREATE` operations - see [certificate differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#certificates). | | [Change data capture - CDC](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) | Yes (Preview) for S3 tier and above. Basic, S0, S1, S2 are not supported. | Yes | | [Collation - server/instance](/sql/relational-databases/collations/set-or-change-the-server-collation) | No, default server collation `SQL_Latin1_General_CP1_CI_AS` is always used. | Yes, can be set when the [instance is created](../managed-instance/create-template-quickstart.md) and can't be updated later. |
-| [Columnstore indexes](/sql/relational-databases/indexes/columnstore-indexes-overview) | Yes - [Premium tier, Standard tier - S3 and above, General Purpose tier, Business Critical, and HyperScale tiers](/sql/relational-databases/indexes/columnstore-indexes-overview) |Yes |
+| [Columnstore indexes](/sql/relational-databases/indexes/columnstore-indexes-overview) | Yes - [Premium tier, Standard tier - S3 and above, General Purpose tier, Business Critical, and Hyperscale tiers](/sql/relational-databases/indexes/columnstore-indexes-overview) |Yes |
| [Common language runtime - CLR](/sql/relational-databases/clr-integration/common-language-runtime-clr-integration-programming-concepts) | No | Yes, but without access to file system in `CREATE ASSEMBLY` statement - see [CLR differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#clr) | | [Credentials](/sql/relational-databases/security/authentication-access/credentials-database-engine) | Yes, but only [database scoped credentials](/sql/t-sql/statements/create-database-scoped-credential-transact-sql). | Yes, but only **Azure Key Vault** and `SHARED ACCESS SIGNATURE` are supported - see [details](../managed-instance/transact-sql-tsql-differences-sql-server.md#credential) | | [Cross-database/three-part name queries](/sql/relational-databases/linked-servers/linked-servers-database-engine) | No - see [Elastic queries](elastic-query-overview.md) | Yes|
The Azure platform provides a number of PaaS capabilities that are added as an a
| **Platform feature** | **Azure SQL Database** | **Azure SQL Managed Instance** | | | | |
-| [Active geo-replication](active-geo-replication-overview.md) | Yes - all service tiers other than hyperscale | No, see [Auto-failover groups](auto-failover-group-overview.md) as an alternative |
-| [Auto-failover groups](auto-failover-group-overview.md) | Yes - all service tiers other than hyperscale | Yes, see [Auto-failover groups](auto-failover-group-overview.md)|
+| [Active geo-replication](active-geo-replication-overview.md) | Yes - all service tiers. Public Preview in Hyperscale. | No, see [Auto-failover groups](auto-failover-group-overview.md) as an alternative. |
+| [Auto-failover groups](auto-failover-group-overview.md) | Yes - all service tiers. Public Preview in Hyperscale. | Yes, see [Auto-failover groups](auto-failover-group-overview.md).|
| Auto-scale | Yes, but only in [serverless model](serverless-tier-overview.md). In the non-serverless model, the change of service tier (change of vCore, storage, or DTU) is fast and online. The service tier change requires minimal or no downtime. | No, you need to choose reserved compute and storage. The change of service tier (vCore or max storage) is online and requires minimal or no downtime. | | [Automatic backups](automated-backups-overview.md) | Yes. Full backups are taken every 7 days, differential 12 hours, and log backups every 5-10 min. | Yes. Full backups are taken every 7 days, differential 12 hours, and log backups every 5-10 min. | | [Automatic tuning (indexes)](/sql/relational-databases/automatic-tuning/automatic-tuning)| [Yes](automatic-tuning-overview.md)| No | | [Availability Zones](../../availability-zones/az-overview.md) | Yes | No | | [Azure Resource Health](../../service-health/resource-health-overview.md) | Yes | No |
-| Backup retention | Yes. 7 days default, max 35 days. | Yes. 7 days default, max 35 days. |
+| Backup retention | Yes. 7 days default, max 35 days. Hyperscale backups are currently limited to a 7 day retention period. | Yes. 7 days default, max 35 days. |
| [Data Migration Service (DMS)](/sql/dma/dma-overview) | Yes | Yes | | [Elastic jobs](elastic-jobs-overview.md) | Yes - see [Elastic jobs (preview)](elastic-jobs-overview.md) | No ([SQL Agent](../managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent) can be used instead). | | File system access | No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#i-accessing-data-from-a-file-stored-on-azure-blob-storage) to access and load data from Azure Blob Storage as an alternative. | No. Use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#i-accessing-data-from-a-file-stored-on-azure-blob-storage) to access and load data from Azure Blob Storage as an alternative. | | [Geo-restore](recovery-using-backups.md#geo-restore) | Yes | Yes | | [Hyperscale architecture](service-tier-hyperscale.md) | Yes | No |
-| [Long-term backup retention - LTR](long-term-retention-overview.md) | Yes, keep automatically taken backups up to 10 years. | Yes, keep automatically taken backups up to 10 years. |
+| [Long-term backup retention - LTR](long-term-retention-overview.md) | Yes, keep automatically taken backups up to 10 years. Long-term retention policies are not yet supported for Hyperscale databases. | Yes, keep automatically taken backups up to 10 years. |
| Pause/resume | Yes, in [serverless model](serverless-tier-overview.md) | No | | [Policy-based management](/sql/relational-databases/policy-based-management/administer-servers-by-using-policy-based-management) | No | No | | Public IP address | Yes. The access can be restricted using firewall or service endpoints. | Yes. Needs to be explicitly enabled and port 3342 must be enabled in NSG rules. Public IP can be disabled if needed. See [Public endpoint](../managed-instance/public-endpoint-overview.md) for more details. |
-| [Point in time database restore](/sql/relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model) | Yes - all service tiers other than hyperscale - see [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) | Yes - see [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) |
+| [Point in time database restore](/sql/relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model) | Yes - all service tiers. See [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) | Yes - see [SQL Database recovery](recovery-using-backups.md#point-in-time-restore) |
| Resource pools | Yes, as [Elastic pools](elastic-pool-overview.md) | Yes. A single instance of SQL Managed Instance can have multiple databases that share the same pool of resources. In addition, you can deploy multiple instances of SQL Managed Instance in [instance pools (preview)](../managed-instance/instance-pools-overview.md) that can share the resources. | | Scaling up or down (online) | Yes, you can either change DTU or reserved vCores or max storage with the minimal downtime. | Yes, you can change reserved vCores or max storage with the minimal downtime. | | [SQL Alias](/sql/database-engine/configure-windows/create-or-delete-a-server-alias-for-use-by-a-client) | No, use [DNS Alias](dns-alias-overview.md) | No, use [Cliconfg](https://techcommunity.microsoft.com/t5/Azure-Database-Support-Blog/Lesson-Learned-33-How-to-make-quot-cliconfg-quot-to-work-with/ba-p/369022) to set up alias on the client machines. |
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Issue | Description | | :- | : |
-| The Manage Backups pane for a server doesn't show Hyperscale databases. These will be filtered from the view. | Hyperscale has a separate method for managing backups, so the Long-Term Retention and Point-in-Time backup retention settings don't apply. Accordingly, Hyperscale databases don't appear in the Manage Backup pane.<br><br>For databases migrated to Hyperscale from other Azure SQL Database service tiers, pre-migration backups are kept for the duration of [backup retention](automated-backups-overview.md#backup-retention) period of the source database. These backups can be used to [restore](recovery-using-backups.md#programmatic-recovery-using-automated-backups) the source database to a point in time before migration.|
-| Point-in-time restore | A non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database. For a non-Hyperscale database that has been migrated to Hyperscale by changing its service tier, restore to a point in time before migration and within the backup retention period of the database is supported [programmatically](recovery-using-backups.md#programmatic-recovery-using-automated-backups). The restored database will be non-Hyperscale. |
+| Backup retention is currently seven days; long-term retention policies aren't yet supported. | Hyperscale has a unique method for managing backups, so a non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database.<BR/><BR/>For databases migrated to Hyperscale from other Azure SQL Database service tiers, pre-migration backups are kept for the duration of [backup retention](automated-backups-overview.md#backup-retention) period of the source database, including long-term retention policies. Restoring a pre-migration backup within the backup retention period of the database is supported [programmatically](recovery-using-backups.md#programmatic-recovery-using-automated-backups). You can restore these backups to any non-Hyperscale service tier.|
| When changing Azure SQL Database service tier to Hyperscale, the operation fails if the database has any data files larger than 1 TB | In some cases, it may be possible to work around this issue by [shrinking](file-space-manage.md#shrinking-data-files) the large files to be less than 1 TB before attempting to change the service tier to Hyperscale. Use the following query to determine the current size of database files. `SELECT file_id, name AS file_name, size * 8. / 1024 / 1024 AS file_size_GB FROM sys.database_files WHERE type_desc = 'ROWS';`| | SQL Managed Instance | Azure SQL Managed Instance isn't currently supported with Hyperscale databases. | | Elastic Pools | Elastic Pools aren't currently supported with Hyperscale.|
azure-sql Availability Group Manually Configure Multiple Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-multiple-regions.md
The preceding diagram shows a new virtual machine called SQL-3. SQL-3 is in a di
In this architecture, the replica in the remote region is normally configured with asynchronous commit availability mode and manual failover mode.
-When availability group replicas are on Azure virtual machines in different Azure regions, each region requires:
+When availability group replicas are on Azure virtual machines in different Azure regions, you can connect the virtual networks by using the recommended [Virtual Network Peering](../../../virtual-network/virtual-network-peering-overview.md) or a [Site-to-Site VPN Gateway](../../../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* A virtual network gateway
-* A virtual network gateway connection
-
-The following diagram shows how the networks communicate between data centers.
-
- :::image type="content" source="./media/availability-group-manually-configure-multiple-regions/01-vpngateway-example.png" alt-text="Diagram that shows the two Virtual Networks in different Azure Regions communicating using V P N Gateways.":::
>[!IMPORTANT] >This architecture incurs outbound data charges for data replicated between Azure regions. See [Bandwidth Pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
To create a replica in a remote data center, do the following steps:
1. [Create a virtual network in the new region](../../../virtual-network/manage-virtual-network.md#create-a-virtual-network).
-1. [Configure a VNet-to-VNet connection using the Azure portal](../../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+1. Connect the virtual networks in the two Azure regions using one of the following methods (a CLI sketch of the peering option follows the note below):
+
+ [Virtual Network Peering - Connect virtual networks with virtual network peering using the Azure portal](../../../virtual-network/tutorial-connect-virtual-networks-portal.md) (Recommended)
+
+ or
+
+   [Site-to-Site VPN Gateway - Configure a VNet-to-VNet connection using the Azure portal](../../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
>[!NOTE] >In some cases, you may have to use PowerShell to create the VNet-to-VNet connection. For example, if you use different Azure accounts, you cannot configure the connection in the portal. In this case, see [Configure a VNet-to-VNet connection using PowerShell](../../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md).
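The following is a minimal Azure CLI sketch of the recommended peering option, assuming hypothetical resource groups (`rg-region1`, `rg-region2`) and virtual networks (`vnet-region1`, `vnet-region2`); substitute your own names and subscription ID.

```azurecli-interactive
# Peer the two virtual networks; peering must be created in both directions
# before traffic can flow between the regions.
az network vnet peering create \
  --name Region1-To-Region2 \
  --resource-group rg-region1 \
  --vnet-name vnet-region1 \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/rg-region2/providers/Microsoft.Network/virtualNetworks/vnet-region2 \
  --allow-vnet-access

az network vnet peering create \
  --name Region2-To-Region1 \
  --resource-group rg-region2 \
  --vnet-name vnet-region2 \
  --remote-vnet /subscriptions/<subscription-id>/resourceGroups/rg-region1/providers/Microsoft.Network/virtualNetworks/vnet-region1 \
  --allow-vnet-access
```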
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
The following is a quick checklist of storage configuration best practices for r
- For the log drive, plan for capacity and test performance versus cost while evaluating the [premium P30 - P80 disks](../../../virtual-machines/disks-types.md#premium-ssds). - If submillisecond storage latency is required, use [Azure ultra disks](../../../virtual-machines/disks-types.md#ultra-disks) for the transaction log. - For M-series virtual machine deployments, consider [Write Accelerator](../../../virtual-machines/how-to-enable-write-accelerator.md) over using Azure ultra disks.
- - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD (default `D:\`) drive for most SQL Server workloads after choosing the optimal VM size.
+ - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD (default `D:\`) drive for most SQL Server workloads that are not part of a Failover Cluster Instance (FCI), after choosing the optimal VM size.
- If the capacity of the local drive is not enough for tempdb, consider sizing up the VM. See [Data file caching policies](performance-guidelines-best-practices-storage.md#data-file-caching-policies) for more information.
+ - For an FCI, place tempdb on the shared storage.
+ - If the FCI workload is heavily dependent on tempdb disk performance, then as an advanced configuration you can place tempdb on the local ephemeral SSD (default `D:\`) drive, which is not part of FCI storage. Because a failure of this drive won't trigger action from the FCI, this configuration needs custom monitoring and remediation to ensure the drive is available at all times.
- Stripe multiple Azure data disks using [Storage Spaces](/windows-server/storage/storage-spaces/overview) to increase I/O bandwidth up to the target virtual machine's IOPS and throughput limits. - Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to read-only for data file disks. - Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to none for log file disks.
The following is a quick checklist of best practices for SQL Server configuratio
- Enable [Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) on all production SQL Server databases [following best practices](/sql/relational-databases/performance/best-practice-with-the-query-store). - Enable [automatic tuning](/sql/relational-databases/automatic-tuning/automatic-tuning) on mission critical application databases. - Ensure that all [tempdb best practices](/sql/relational-databases/databases/tempdb-database#optimizing-tempdb-performance-in-sql-server) are followed.-- Place tempdb on the ephemeral D:/ drive. - [Use the recommended number of files](/troubleshoot/sql/performance/recommendations-reduce-allocation-contention#resolution), using multiple tempdb data files starting with one file per core, up to eight files. - Schedule SQL Server Agent jobs to run [DBCC CHECKDB](/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql#a-checking-both-the-current-and-another-database), [index reorganize](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes#reorganize-an-index), [index rebuild](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes#rebuild-an-index), and [update statistics](/sql/t-sql/statements/update-statistics-transact-sql#examples) jobs. - Monitor and manage the health and size of the SQL Server [transaction log file](/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file#Recommendations).
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
Review the following checklist for a brief overview of the storage best practice
- For the log drive, plan for capacity and test performance versus cost while evaluating the [premium P30 - P80 disks](../../../virtual-machines/disks-types.md#premium-ssds) - If submillisecond storage latency is required, use [Azure ultra disks](../../../virtual-machines/disks-types.md#ultra-disks) for the transaction log. - For M-series virtual machine deployments, consider [write accelerator](../../../virtual-machines/how-to-enable-write-accelerator.md) over using Azure ultra disks.
- - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD `D:\` drive for most SQL Server workloads after choosing the optimal VM size.
+ - Place [tempdb](/sql/relational-databases/databases/tempdb-database) on the local ephemeral SSD (default `D:\`) drive for most SQL Server workloads that are not part of a Failover Cluster Instance (FCI), after choosing the optimal VM size.
- If the capacity of the local drive is not enough for tempdb, consider sizing up the VM. See [Data file caching policies](#data-file-caching-policies) for more information.
+ - For an FCI, place tempdb on the shared storage.
+ - If the FCI workload is heavily dependent on tempdb disk performance, then as an advanced configuration you can place tempdb on the local ephemeral SSD (default `D:\`) drive, which is not part of FCI storage. Because a failure of this drive won't trigger action from the FCI, this configuration needs custom monitoring and remediation to ensure the drive is available at all times.
- Stripe multiple Azure data disks using [Storage Spaces](/windows-server/storage/storage-spaces/overview) to increase I/O bandwidth up to the target virtual machine's IOPS and throughput limits. - Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to read-only for data file disks. - Set [host caching](../../../virtual-machines/disks-performance.md#virtual-machine-uncached-vs-cached-limits) to none for log file disks.
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
Provide the SQL Server license type as either pay-as-you-go (`PAYG`) to pay per
Register a SQL Server VM in full mode with the Azure CLI: ```azurecli-interactive
-# Register Enterprise or Standard self-installed VM in Lightweight mode
+# Register Enterprise or Standard self-installed VM in full mode
az sql vm create --name <vm_name> --resource-group <resource_group_name> --location <vm_location> --license-type <license_type> --sql-mgmt-type Full ```
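To confirm that the extension registered in full mode, you can query the resulting SQL virtual machine resource. This is a minimal sketch with hypothetical names (`sqlvm01`, `sql-rg`); the `sqlManagement` property is expected to read `Full`.

```azurecli-interactive
# Check the management mode of the registered SQL Server VM.
az sql vm show --name sqlvm01 --resource-group sql-rg --query "sqlManagement" --output tsv
```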
backup Backup Azure Data Protection Use Rest Api Backup Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-data-protection-use-rest-api-backup-postgresql.md
Title: Back up Azure PostgreSQL databases using Azure data protection REST API description: In this article, learn how to configure, initiate, and manage backup operations of Azure PostgreSQL databases using REST API. Previously updated : 10/22/2021 Last updated : 01/24/2022 ms.assetid: 55fa0a81-018f-4843-bef8-609a44c97dcd
-# Back up Azure PostgreSQL databases using Azure data protection via REST API (preview)
+# Back up Azure PostgreSQL databases using Azure data protection via REST API
This article describes how to manage backups for Azure PostgreSQL databases via REST API.
Fetch the Azure Resource Manager ID (ARM ID) of the PostgreSQL database to be pr
#### Azure key vault
-Azure Backup service doesn't store the username and password to connect to the PostgreSQL database. Instead, the backup admin needs to seed the *keys* into the key vault. Then the Azure Backup service will access the key vault, read the keys, and access the database. Note the secret identifier of the relevant key. The following example uses bash.
+The Azure Backup service doesn't store the username and password to connect to the PostgreSQL database. Instead, the backup admin needs to seed the *keys* into the key vault. Then the Azure Backup service will access the key vault, read the keys, and access the database. Note the secret identifier of the relevant key. The following example uses bash.
```http "https://testkeyvaulteus.vault.azure.net/secrets/ossdbkey"
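As an illustration, the secret behind that identifier could be seeded with the Azure CLI. This is a hedged sketch: the connection string below is a hypothetical ADO.NET-style example, and the server (`testpostgresql`), database (`empdb11`), and user names are placeholders.

```azurecli-interactive
# Seed the PostgreSQL connection string (with the backup user's credentials)
# as a secret in the key vault. The "id" in the output is the secret identifier.
az keyvault secret set \
  --vault-name testkeyvaulteus \
  --name ossdbkey \
  --value "Server=testpostgresql.postgres.database.azure.com;Database=empdb11;Port=5432;User Id=backupuser@testpostgresql;Password=<password>;Ssl Mode=Require;"
```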
Azure Backup service doesn't store the username and password to connect to the P
#### Backup vault
-Backup vault has to connect to the PostgreSQL server, and then access the database via the keys present in the key vault. So, it requires access to PostgreSQL server and the key vault. Access is granted to the Backup vault's MSI.
+Backup vault has to connect to the PostgreSQL server, and then access the database via the keys present in the key vault. So, it requires access to PostgreSQL server and the key vault. Access is granted to the Backup vault's Managed Service Identity (MSI).
-[Read about the appropriate permissions](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup) that you need to grant to back up vault's MSI on the PostgreSQL server and the Azure Key vault, where the keys to the database are stored.
+You need to grant permissions to the Backup vault's MSI on the PostgreSQL server and the Azure key vault where the keys to the database are stored. [Learn more](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup).
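If you prefer to script these role assignments instead of granting them in the portal, a sketch along these lines should work; the principal ID, subscription, and resource names are placeholders you must supply.

```azurecli-interactive
# <principal-id> is the object ID of the Backup vault's managed identity.

# Reader on the PostgreSQL server.
az role assignment create --assignee <principal-id> --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/servers/<server-name>"

# Key Vault Secrets User on the key vault (when the vault uses Azure RBAC).
az role assignment create --assignee <principal-id> --role "Key Vault Secrets User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```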
### Prepare the request to configure backup
-Once the relevant permissions are set to the vault and the PostgreSQL database, and the vault and policy are configured, we can prepare the request to configure backup. The following is the request body to configure the backup for an Azure PostgreSQL database. The Azure Resource Manager ID (ARM ID) of the Azure PostgreSQL database and its details are mentioned in the _datasourceinfo_ section and the policy information is present in the _policyinfo_ section.
+After you set the relevant permissions to the vault and PostgreSQL database, and configure the vault and policy, prepare the request to configure backup. See the following request body to configure backup for an Azure PostgreSQL database. The Azure Resource Manager ID (ARM ID) of the Azure PostgreSQL database and its details are present in the _datasourceinfo_ section. The policy information is present in the _policyinfo_ section.
```json {
Once the relevant permissions are set to the vault and the PostgreSQL database,
To validate if the request to configure backup will be successful, use the [validate for backup API](/rest/api/dataprotection/backup-instances/validate-for-backup). You can use the response to perform the required prerequisites, and then submit the configuration for the backup request.
-Validate for backup request is a _POST_ operation and the URI contains `{subscriptionId}`, `{vaultName}`, `{vaultresourceGroupName}` parameters.
+Validate for backup request is a _POST_ operation and the Uniform Resource Identifier (URI) contains `{subscriptionId}`, `{vaultName}`, `{vaultresourceGroupName}` parameters.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.DataProtection/backupVaults/{backupVaultName}/validateForBackup?api-version=2021-01-01
The [request body](#prepare-the-request-to-configure-backup) that we prepared ea
Backup request validation is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description | ||||
GET https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
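If you're scripting these calls, `az rest` can submit the request and poll the tracking operation. This is a sketch, assuming the request body is saved as a hypothetical `validate-request.json` and that you copy the operation URL from the 202 response's `Azure-AsyncOperation` header.

```azurecli-interactive
# Submit the validate-for-backup request.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<vault-rg>/providers/Microsoft.DataProtection/backupVaults/<vault-name>/validateForBackup?api-version=2021-01-01" \
  --body @validate-request.json

# Poll the tracking operation until it reports a completed status.
az rest --method get --url "<azure-asyncoperation-url>"
```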
### Configure backup request
-Once the request is validated, then you can submit the same to the [create backup instance API](/rest/api/dataprotection/backup-instances/create-or-update). A Backup instance represents an item protected with data protection service of Azure Backup within the Backup vault. Here, the Azure PostgreSQL database is the backup instance and you can use the same request body, which was validated above, with minor additions.
+Once the request is validated, you can submit it to the [create backup instance API](/rest/api/dataprotection/backup-instances/create-or-update). A backup instance represents an item protected with the Azure Backup data protection service within the Backup vault. Here, the Azure PostgreSQL database is the backup instance. Use the above-validated request body with minor additions.
Use a unique name for the backup instance. We recommend a combination of the resource name and a unique identifier. For example, in the following operation, we'll use _testpostgresql-empdb11-957d23b1-c679-4c94-ade6-c4d34635e149_ as the backup instance name.
To create a backup instance, following are the components of the request body:
##### Example request for configure backup
-We'll use the same request body that we used to validate the backup request with a unique name as we mentioned [above](#configure-backup).
+We'll use the [same request body that we used to validate the backup request](#configure-backup) with a unique name.
```json {
We'll use the same request body that we used to validate the backup request with
#### Responses to configure backup request
-The _create backup instance request_ is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
+The _create backup instance_ request is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 201 (Created) when backup instance is created and the protection is being configured, and then 200 (OK) when that configuration completes.
+It returns two responses: 201 (Created) when the backup instance is created and protection is being configured, and 200 (OK) when that configuration completes.
|Name |Type |Description | ||||
DELETE "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/Test
*DELETE* protection is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
|Name |Type |Description | ||||
GET "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
For more information on the Azure Backup REST APIs, see the following articles: -- [Azure Data Protection Provider REST API](/rest/api/dataprotection/)
+- [Get started with Azure Data Protection Provider REST API](/rest/api/dataprotection/)
- [Get started with Azure REST API](/rest/api/azure/)
backup Backup Azure Data Protection Use Rest Api Create Update Postgresql Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-data-protection-use-rest-api-create-update-postgresql-policy.md
Title: Create backup policies for Azure PostgreSQL databases using data protection REST API description: In this article, you'll learn how to create and manage backup policies for Azure PostgreSQL databases using REST API. Previously updated : 10/21/2021 Last updated : 01/24/2022 ms.assetid: 759ee63f-148b-464c-bfc4-c9e640b7da6b
-# Create Azure Data Protection backup policies for Azure PostgreSQL databases using REST API (preview)
+# Create Azure Data Protection backup policies for Azure PostgreSQL databases using REST API
A backup policy governs the retention and schedule of your backups. Azure PostgreSQL database Backup offers long-term retention and supports a backup per day.
backup Backup Azure Database Postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql-overview.md
Title: About Azure Database for PostgreSQL backup
-description: An overview on Azure Database for PostgreSQL backup (preview)
+description: An overview on Azure Database for PostgreSQL backup
Previously updated : 09/28/2021- Last updated : 01/24/2022+++
-# About Azure Database for PostgreSQL backup (preview)
+# About Azure Database for PostgreSQL backup
Azure Backup and Azure Database Services have come together to build an enterprise-class backup solution for Azure Database for PostgreSQL servers that retains backups for up to 10 years. Besides long-term retention, the solution offers the following capabilities:
Azure Backup and Azure Database Services have come together to build an enterpri
You can use this solution independently or in addition to the [native backup solution offered by Azure PostgreSQL](../postgresql/concepts-backup.md) that offers retention up to 35 days. The native solution is suited for operational recoveries, such as when you want to recover from the latest backups. The Azure Backup solution helps you with your compliance needs and more granular and flexible backup/restore.
-## Support matrix
-
-|Support |Details |
-|||
-|Supported deployments | [Azure Database for PostgreSQL - Single Server](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) |
-|Supported Azure regions | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, Brazil South, Canada Central, North Europe, West Europe, UK South, UK West, Germany West Central, Switzerland North, Switzerland West, East Asia, Southeast Asia, Japan East, Japan West, Korea Central, Korea South, India Central, Australia East, Australia Central, Australia Central 2, UAE North |
-|Supported Azure PostgreSQL versions | 9.5, 9.6, 10, 11 |
-
-## Feature considerations and limitations
--- All operations are supported from the Azure portal only. -- Recommended limit for the maximum database size is 400 GB.-- Cross-region backup isn't supported. Therefore, you can't back up an Azure PostgreSQL server to a vault in another region. Similarly, you can only restore a backup to a server within the same region as the vault. However, we support cross-subscription backup and restore. -- Only the data is recovered during restore; "roles" aren't restored.-- In preview, we recommend you to run the solution only on your test environment.- ## Backup process 1. As a backup admin, you can specify the Azure PostgreSQL databases that you intend to back up. Additionally, you can also specify the details of the Azure key vault that stores the credentials needed to connect to the specified database(s). These credentials are securely seeded by the database admin in the Azure key vault.
The Azure Backup service needs to connect to the Azure PostgreSQL while taking e
:::image type="content" source="./media/backup-azure-database-postgresql-overview/key-vault-based-authentication-model.png" alt-text="Diagram showing the workload or database flow.":::
-#### Set of Permissions needed for Azure PostgreSQL database backup
+#### Set of permissions needed for Azure PostgreSQL database backup
1. Grant the following access permissions to the Backup vaultΓÇÖs MSI:
The Azure Backup service needs to connect to the Azure PostgreSQL while taking e
>[!Note] >You can grant these permissions within the [configure backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases) flow with a single click if you (the backup admin) have 'write' access on the intended resources, or use an ARM template if you don't have the required permissions (when multiple personas are involved).
-#### Set of Permissions needed for Azure PostgreSQL database restore
+#### Set of permissions needed for Azure PostgreSQL database restore
Permissions for restore are similar to the ones needed for backup, and you need to grant them on the target PostgreSQL server and its corresponding key vault. Unlike in the configure backup flow, the experience to grant these permissions inline is currently not available. Therefore, you need to [manually grant the access on the Postgres server and the corresponding key vault](#grant-access-on-the-azure-postgresql-server-and-key-vault-manually).
We had earlier launched a different authentication model that was entirely based
To grant all the access permissions needed by Azure Backup, refer to the following sections:
-### Access Permissions on the Azure PostgreSQL server
+### Access permissions on the Azure PostgreSQL server
1. Set Backup vault's MSI **Reader** access on the Azure PostgreSQL server.
- :::image type="content" source="./media/backup-azure-database-postgresql-overview/set-reader-access-on-azure-postgresql-server-inline.png" alt-text="Screenshot showing the option to set Backup vault's MSI Reader access on the Azure PostgreSQL server." lightbox="./media/backup-azure-database-postgresql-overview/set-reader-access-on-azure-postgresql-server-expanded.png":::
+ :::image type="content" source="./media/backup-azure-database-postgresql-overview/set-reader-access-on-azure-postgresql-server-inline.png" alt-text="Screenshot showing the option to set Backup vault's M S I Reader access on the Azure PostgreSQL server." lightbox="./media/backup-azure-database-postgresql-overview/set-reader-access-on-azure-postgresql-server-expanded.png":::
1. Network line of sight access on the Azure PostgreSQL server: Set 'Allow access to Azure services' flag to 'Yes'. :::image type="content" source="./media/backup-azure-database-postgresql-overview/network-line-of-sight-access-on-azure-postgresql-server-inline.png" alt-text="Screenshot showing the option to set network line of sight access on the Azure PostgreSQL server." lightbox="./media/backup-azure-database-postgresql-overview/network-line-of-sight-access-on-azure-postgresql-server-expanded.png":::
-### Access Permissions on the Azure Key vault (associated with the PostgreSQL server)
+### Access permissions on the Azure Key vault (associated with the PostgreSQL server)
1. Set Backup vault's MSI **Key Vault Secrets User** (or **get**, **list** secrets) access on the Azure key vault. To assign permissions, you can use role assignments or access policies. It's not required to add the permission using both options, as it doesn't help.
To grant all the access permissions needed by Azure Backup, refer to the followi
:::image type="content" source="./media/backup-azure-database-postgresql-overview/key-vault-secrets-user-access-inline.png" alt-text="Screenshot showing the option to provide secret user access." lightbox="./media/backup-azure-database-postgresql-overview/key-vault-secrets-user-access-expanded.png":::
- :::image type="content" source="./media/backup-azure-database-postgresql-overview/grant-permission-to-applications-azure-rbac-inline.png" alt-text="Screenshot showing the option to grant the backup vault's MSI Key Vault Secrets User access on the key vault." lightbox="./media/backup-azure-database-postgresql-overview/grant-permission-to-applications-azure-rbac-expanded.png":::
+ :::image type="content" source="./media/backup-azure-database-postgresql-overview/grant-permission-to-applications-azure-rbac-inline.png" alt-text="Screenshot showing the option to grant the backup vault's M S I Key Vault Secrets User access on the key vault." lightbox="./media/backup-azure-database-postgresql-overview/grant-permission-to-applications-azure-rbac-expanded.png":::
- Using access policies (that is, Permission model is set to Vault access policy):
To grant all the access permissions needed by Azure Backup, refer to the followi
### Database user's backup privileges on the database
-Run the following query in [PG admin](#using-the-pg-admin-tool) tool (replace _username_ with the database user ID):
+Run the following query in the [PG admin](#use-the-pg-admin-tool) tool (replace _username_ with the database user ID):
``` DO $do$
END;
$do$ ```
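To run such a script against the database, one option is the `psql` client. This is a sketch with placeholder server, database, and admin user names, assuming the grant query above is saved as a hypothetical `grant-backup-privileges.sql`.

```bash
# Execute the privilege-granting script against the target database.
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database> user=<admin-user>@<server-name> sslmode=require" \
  -f grant-backup-privileges.sql
```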
-## Using the PG admin tool
+## Use the PG admin tool
[Download the PG admin tool](https://www.pgadmin.org/download/) if you don't have it already. You can connect to the Azure PostgreSQL server through this tool, and add databases and new users to the server. Create a new server with a name of your choice. Enter the Host name/address, the same as the **Server name** displayed in the Azure PostgreSQL resource view in the Azure portal. :::image type="content" source="./media/backup-azure-database-postgresql-overview/enter-host-name-or-address-name-same-as--server-name-inline.png" alt-text="Screenshot showing the option to enter the Host name or address name same as the Server name." lightbox="./media/backup-azure-database-postgresql-overview/enter-host-name-or-address-name-same-as--server-name-expanded.png"::: Ensure that you add the _current client IP address_ to the Firewall rules for the connection to go through. You can add new databases and database users to the server. For database users, add a new **Login/Group Role**. Ensure **Can login?** is set to **Yes**.
backup Backup Azure Database Postgresql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql-support-matrix.md
+
+ Title: Azure Database for PostgreSQL server support matrix
+description: Provides a summary of support settings and limitations of Azure Database for PostgreSQL server backup.
+ Last updated : 01/24/2022++++++
+# Azure Database for PostgreSQL server support matrix
+
+You can use [Azure Backup](./backup-overview.md) to protect Azure Database for PostgreSQL server. This article summarizes the supported regions, scenarios, and limitations.
+
+## Supported regions
+
+Azure Database for PostgreSQL server backup is available in the following regions:
+
+East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, Brazil South, Canada Central, North Europe, West Europe, UK South, UK West, Germany West Central, Switzerland North, Switzerland West, East Asia, Southeast Asia, Japan East, Japan West, Korea Central, Korea South, India Central, Australia East, Australia Central, Australia Central 2, UAE North
+
+## Supported scenarios
+
+|Scenarios | Details |
+|| |
+|Deployments | [Azure Database for PostgreSQL - Single Server](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) |
+|Azure PostgreSQL versions | 9.5, 9.6, 10, 11 |
+
+## Feature considerations and limitations
+
+- Recommended limit for the maximum database size is 400 GB.
+- Cross-region backup isn't supported. Therefore, you can't back up an Azure PostgreSQL server to a vault in another region. Similarly, you can only restore a backup to a server within the same region as the vault. However, we support cross-subscription backup and restore.
+- Only the data is recovered during restore; "roles" aren't restored.
+- We recommend you run the solution only on your test environment.
+
+## Next steps
+
+- [Back up Azure Database for PostgreSQL server](backup-azure-database-postgresql.md)
backup Backup Azure Database Postgresql Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql-troubleshoot.md
Title: Troubleshoot Azure Database for PostgreSQL backup description: Troubleshooting information for backing up Azure Database for PostgreSQL. Previously updated : 09/22/2021- Last updated : 01/24/2022+++
-# Troubleshoot PostgreSQL database backup by using Azure Backup (preview)
+# Troubleshoot PostgreSQL database backup using Azure Backup
This article provides troubleshooting information for backing up Azure PostgreSQL databases with Azure Backup.
Establish network line of sight by enabling the **Allow access to Azure services
## Next steps
-[About Azure Database for PostgreSQL backup (preview)](backup-azure-database-postgresql-overview.md)
+[About Azure Database for PostgreSQL backup](backup-azure-database-postgresql-overview.md)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md
Title: Back up Azure Database for PostgreSQL
-description: Learn about Azure Database for PostgreSQL backup with long-term retention (preview)
+description: Learn about Azure Database for PostgreSQL backup with long-term retention
Previously updated : 11/02/2021 Last updated : 01/24/2022
-# Azure Database for PostgreSQL backup with long-term retention (preview)
+# Azure Database for PostgreSQL backup with long-term retention
This article describes how to back up Azure Database for PostgreSQL server.
You can configure backup on multiple databases across multiple Azure PostgreSQL
Alternatively, you can navigate to this page from the [Backup center](./backup-center-overview.md).
-1. Select/[create a Backup Policy](#create-backup-policy) that defines the backup schedule and the retention duration.
+1. Select or [create](#create-backup-policy) a Backup Policy that defines the backup schedule and the retention duration.
:::image type="content" source="./media/backup-azure-database-postgresql/create-or-add-backup-policy-inline.png" alt-text="Screenshot showing the option to add a backup policy." lightbox="./media/backup-azure-database-postgresql/create-or-add-backup-policy-expanded.png"::: 1. **Select Azure PostgreSQL databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server. >[!Note]
- >You can't (and don't need to) back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+ >You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database that's already backed up to a Backup vault.
:::image type="content" source="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-inline.png" alt-text="Screenshot showing the option to select an Azure PostgreSQL database." lightbox="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-expanded.png":::
You can configure backup on multiple databases across multiple Azure PostgreSQL
1. **Enter secret URI**: Use this option if the secret URI is shared/known to you. You can copy the **secret URI from the Key vault** -> **Secrets (select a secret)** -> **Secret Identifier**.
- :::image type="content" source="./media/backup-azure-database-postgresql/enter-secret-uri-inline.png" alt-text="Screenshot showing how to enter secret URI." lightbox="./media/backup-azure-database-postgresql/enter-secret-uri-expanded.png":::
+ :::image type="content" source="./media/backup-azure-database-postgresql/enter-secret-uri-inline.png" alt-text="Screenshot showing how to enter secret U R I." lightbox="./media/backup-azure-database-postgresql/enter-secret-uri-expanded.png":::
However, with this option, Azure Backup gets no visibility into the key vault you've referenced. Therefore, access permissions on the key vault can't be granted inline. The backup admin, along with the Postgres and/or key vault admin, needs to ensure that the backup vault's [access on the key vault is granted manually](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-key-vault-associated-with-the-postgresql-server) outside the configure backup flow for the backup operation to succeed.
- 1. **Select the key vault**: Use this option if you know the key vault and secret name. With this option, you (backup admin with write access on the key vault) can grant the access permissions on the key vault inline. The key vault and the secret could pre-exist or be created on the go. Ensure that the secret is the PG server connection string in ADO.net format updated with the credentials of the database user that has been granted with the 'backup' privileges on the server. Learn more about [how to create the [secrets in the key vault](#create-secrets-in-the-key-vault).
+ 1. **Select the key vault**: Use this option if you know the key vault and secret name. With this option, you (backup admin with write access on the key vault) can grant the access permissions on the key vault inline. The key vault and the secret could pre-exist or be created on the go. Ensure that the secret is the PG server connection string in ADO.net format, updated with the credentials of the database user that has been granted the 'backup' privileges on the server. Learn more about how to [create secrets in the key vault](#create-secrets-in-the-key-vault).
:::image type="content" source="./media/backup-azure-database-postgresql/assign-secret-store-inline.png" alt-text="Screenshot showing how to assign secret store." lightbox="./media/backup-azure-database-postgresql/assign-secret-store-expanded.png"::: :::image type="content" source="./media/backup-azure-database-postgresql/select-secret-from-azure-key-vault-inline.png" alt-text="Screenshot showing the selection of secret from Azure Key Vault." lightbox="./media/backup-azure-database-postgresql/select-secret-from-azure-key-vault-expanded.png":::
-1. When the secret information update is complete, the validation starts after the key vault information has been updated. Here, the backup service validates if it has all the necessary [access permissions](backup-azure-database-postgresql-overview.md#key-vault-based-authentication-model)() to read secret details from the key vault and connect to the database. If one or more access permissions are found missing, it will display one of the error messages – _Role assignment not done or User cannot assign roles_.
+1. When the secret information update is complete, the validation starts after the key vault information has been updated.
+
+ >[!Note]
+ >
+ >- Here, the backup service validates if it has all the necessary [access permissions](backup-azure-database-postgresql-overview.md#key-vault-based-authentication-model) to read secret details from the key vault and connect to the database.
+ >- If one or more access permissions are found missing, it will display one of the error messages – _Role assignment not done or User cannot assign roles_.
:::image type="content" source="./media/backup-azure-database-postgresql/validation-of-secret-inline.png" alt-text="Screenshot showing the validation of secret." lightbox="./media/backup-azure-database-postgresql/validation-of-secret-expanded.png":::
- 1. **User cannot assign roles**: This message displays when you (the backup admin) don't have the write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Download the assignment template from the action button and have it run by the PostgreSQL and/or key vault admin. It's an ARM template that helps you assign the necessary permissions on the required resources. Once the template is run successfully, click **Re-validate** on the Configure Backup page.
+ - **User cannot assign roles**: This message displays when you (the backup admin) don't have write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Download the assignment template from the action button and have it run by the PostgreSQL and/or key vault admin. It's an ARM template that helps you assign the necessary permissions on the required resources. Once the template is run successfully, click **Re-validate** on the Configure Backup page.
- :::image type="content" source="./media/backup-azure-database-postgresql/download-role-assignment-template-inline.png" alt-text="Screenshot showing the option to download role assignment template." lightbox="./media/backup-azure-database-postgresql/download-role-assignment-template-expanded.png":::
+ :::image type="content" source="./media/backup-azure-database-postgresql/download-role-assignment-template-inline.png" alt-text="Screenshot showing the option to download role assignment template." lightbox="./media/backup-azure-database-postgresql/download-role-assignment-template-expanded.png":::
- 1. **Role assignment not done**: This message displays when you (the backup admin) have the write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Use **Assign missing roles** action button in the top action menu to grant permissions on the PostgreSQL server and/or the key vault inline.
+ - **Role assignment not done**: This message displays when you (the backup admin) have write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Use the **Assign missing roles** action button in the top action menu to grant permissions on the PostgreSQL server and/or the key vault inline.
- :::image type="content" source="./media/backup-azure-database-postgresql/role-assignment-not-done-inline.png" alt-text="Screenshot showing the error about the role assignment not done." lightbox="./media/backup-azure-database-postgresql/role-assignment-not-done-expanded.png":::
+ :::image type="content" source="./media/backup-azure-database-postgresql/role-assignment-not-done-inline.png" alt-text="Screenshot showing the error about the role assignment not done." lightbox="./media/backup-azure-database-postgresql/role-assignment-not-done-expanded.png":::
1. Select **Assign missing roles** in the top menu and assign roles. Once the process starts, the [missing access permissions](backup-azure-database-postgresql-overview.md#azure-backup-authentication-with-the-postgresql-server) on the KV and/or PG server are granted to the backup vault. You can define the scope at which the access permissions should be granted. When the action is complete, re-validation starts.
You can configure backup on multiple databases across multiple Azure PostgreSQL
- Backup vault accesses secrets from the key vault and runs a test connection to the database to validate if the credentials have been entered correctly. The privileges of the database user are also checked to see [if the Database user has backup-related permissions on the database](backup-azure-database-postgresql-overview.md#database-users-backup-privileges-on-the-database).
- - PostgreSQL admin will have all the backup and restore permissions on the database by default. Therefore, validations would succeed.
+ - PostgreSQL admin will have all the backup and restore permissions on the database by default. Therefore, validations would succeed.
- A low privileged user may not have backup/restore permissions on the database. Therefore, the validations would fail. A PowerShell script is dynamically generated (one per record/selected database). [Run the PowerShell script to grant these privileges to the database user on the database](#create-secrets-in-the-key-vault). Alternatively, you can assign these privileges using PG admin or PSQL tool. :::image type="content" source="./media/backup-azure-database-postgresql/backup-vault-accesses-secrets-inline.png" alt-text="Screenshot showing the backup vault access secrets from the key vault." lightbox="./media/backup-azure-database-postgresql/backup-vault-accesses-secrets-expanded.png":::
You can configure backup on multiple databases across multiple Azure PostgreSQL
:::image type="content" source="./media/backup-azure-database-postgresql/submit-configure-backup-operation-inline.png" alt-text="Screenshot showing the backup configuration submission and tracking progress." lightbox="./media/backup-azure-database-postgresql/submit-configure-backup-operation-expanded.png":::
-## Create Backup Policy
+## Create Backup policy
You can create a Backup policy on the go during the configure backup flow. Alternatively, go to **Backup center** -> **Backup policies** -> **Add**.
You can create a Backup policy on the go during the configure backup flow. Alter
:::image type="content" source="./media/backup-azure-database-postgresql/enter-name-for-new-policy-inline.png" alt-text="Screenshot showing the process to enter a name for the new policy." lightbox="./media/backup-azure-database-postgresql/enter-name-for-new-policy-expanded.png":::
-1. Define the Backup schedule. Currently, only Weekly backup option is available. However, you can schedule the backups on multiple days of the week.
+1. Define the Backup schedule.
+
+   Currently, only the weekly backup option is available. However, you can schedule the backups on multiple days of the week.
-1. Define **Retention** settings. You can add one or more retention rules. Each retention rule assumes inputs for specific backups, and data store and retention duration for those backups.
+1. Define **Retention** settings.
+
+ You can add one or more retention rules. Each retention rule assumes inputs for specific backups, and data store and retention duration for those backups.
1. To store your backups in one of the two data stores (or tiers), choose **Backup data store** (standard tier) or **Archive data store** (in preview). 1. Choose **On-expiry** to move the backup to archive data store upon its expiry in the backup data store.
- The **default retention rule** is applied in the absence of any other retention rule and has a default value of three months.
-
- - Retention duration ranges from seven days to 10 years in the **Backup data store**.
- - Retention duration ranges from six months to 10 years in the **Archive data store**.
+ >[!Note]
+ >The **default retention rule** is applied in the absence of any other retention rule and has a default value of three months.
+ >
+ >- Retention duration ranges from seven days to 10 years in the **Backup data store**.
+ >- Retention duration ranges from six months to 10 years in the **Archive data store**.
:::image type="content" source="./media/backup-azure-database-postgresql/choose-option-to-move-backup-to-archive-data-store-inline.png" alt-text="Screenshot showing to choose On-expiry to move the backup to archive data store upon its expiry." lightbox="./media/backup-azure-database-postgresql/choose-option-to-move-backup-to-archive-data-store-expanded.png":::
The secret is the PG server connection string in _ADO.net_ format updated with t
:::image type="content" source="./media/backup-azure-database-postgresql/pg-server-connection-string-inline.png" alt-text="Screenshot showing the PG server connection string as secret." lightbox="./media/backup-azure-database-postgresql/pg-server-connection-string-expanded.png"::: ## Run PowerShell script to grant privileges to database users
The dynamically generated PowerShell script during configure backup accepts the d
Ensure that **Connection Security settings** in the Azure PostgreSQL instance allowlist the IP address of the machine to allow network connectivity.
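For example, the client IP could be allowlisted with a server firewall rule. This is a sketch with placeholder names, using the Azure CLI for an Azure PostgreSQL single server.

```azurecli-interactive
# Allow the client machine's public IP address through the server firewall.
az postgres server firewall-rule create \
  --resource-group <rg> \
  --server-name <server-name> \
  --name AllowBackupClient \
  --start-ip-address <client-ip> \
  --end-ip-address <client-ip>
```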
-## Generate an on-demand backup
+## Run an on-demand backup
To trigger a backup not in the schedule specified in the policy, go to **Backup instances** -> **Backup Now**. Choose from the list of retention rules that were defined in the associated Backup policy.
backup Backup Azure Dataprotection Use Rest Api Create Update Disk Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-dataprotection-use-rest-api-create-update-disk-policy.md
description: In this article, you'll learn how to create and manage backup polic
Last updated 10/06/2021 ms.assetid: ecc107c0-311c-42d0-a094-654d7ee30443+++ # Create Azure Data Protection backup policies for disks using REST API
backup Backup Center Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-actions.md
To stop protection, navigate to Backup center and select the **Backup Instances*
- [Learn more](backup-azure-manage-vms.md#stop-protecting-a-vm) about stopping backup for Azure Virtual Machines. - [Learn more](manage-azure-managed-disks.md#stop-protection-preview) about stopping backup for a disk.-- [Learn more](manage-azure-database-postgresql.md#stop-protection-preview) about stopping backup for Azure Database for PostgreSQL Server.
+- [Learn more](manage-azure-database-postgresql.md#stop-protection) about stopping backup for Azure Database for PostgreSQL Server.
## Resume backup
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-support-matrix.md
Last updated 10/20/2021
# Support matrix for Backup center
-Backup Center provides a single pane of glass for enterprises to [govern, monitor, operate, and analyze backups at scale](backup-center-overview.md). This article summarizes the scenarios that Backup center supports for each workload type.
+Backup center helps enterprises to [govern, monitor, operate, and analyze backups at scale](backup-center-overview.md). This article summarizes the scenarios that Backup center supports for each workload type.
## Supported scenarios | **Category** | **Scenario** | **Supported workloads** | **Limits** | | -| - | -- ||
-| Monitoring | View all jobs | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | <li> 7 days worth of jobs available out of the box. <br> <li> Each filter/drop-down supports a maximum of 1000 items. So Backup center can be used to monitor a maximum of 1000 subscriptions and 1000 vaults across tenants. |
-| Monitoring | View all backup instances | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
-| Monitoring | View all backup policies | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
-| Monitoring | View all vaults | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
-| Monitoring | View Azure Monitor alerts at scale | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) documentation |
-| Monitoring | View Azure Backup metrics and write metric alert rules | <li>Azure VM </li><li>SQL in Azure VM </li><li> SAP HANA in Azure VM</li><li>Azure Files </li> | You can view metrics for all Recovery Services vaults for a region and subscription simultaneously. Viewing metrics for a larger scope in the Azure portal isn't currently supported. The same limits are also applicable to configure metric alert rules. Refer to [View metrics in the Azure portal](metrics-overview.md#view-metrics-in-the-azure-portal) for more details.|
-| Actions | Configure backup | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-overview.md#support-matrix) |
-| Actions | Restore Backup Instance | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-overview.md#support-matrix) |
-| Actions | Create vault | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Recovery Services vault](./backup-support-matrix.md#vault-support) |
-| Actions | Create backup policy | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Recovery Services vault](./backup-support-matrix.md#vault-support) |
-| Actions | Execute on-demand backup for a backup instance | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-overview.md#support-matrix) |
-| Actions | Stop backup for a backup instance | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Refer to support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-overview.md#support-matrix) |
-| Actions | Execute cross-region restore job from Backup center | <li> Azure Virtual Machine <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM | Refer [cross-region restore](./backup-create-rs-vault.md#set-cross-region-restore) documentation |
-| Insights | View Backup Reports | <li> Azure Virtual Machine <br><br> <li> SQL in Azure Virtual Machine <br><br> <li> SAP HANA in Azure Virtual Machine <br><br> <li> Azure Files <br><br> <li> System Center Data Protection Manager <br><br> <li> Azure Backup Agent (MARS) <br><br> <li> Azure Backup Server (MABS) | Refer to [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios) |
-| Governance | View and assign built-in and custom Azure Policies under category 'Backup' | N/A | N/A |
-| Governance | View datasources not configured for backup | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server | N/A |
+| Monitoring | View all jobs | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP High-Performance Analytic Appliance (HANA) in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Seven days' worth of jobs available out of the box. <br> <br> Each filter/drop-down supports a maximum of 1000 items. So, Backup center can be used to monitor a maximum of 1000 subscriptions and 1000 vaults across tenants. |
+| Monitoring | View all backup instances | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous |
+| Monitoring | View all backup policies | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous |
+| Monitoring | View all vaults | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | Same as previous |
+| Monitoring | View Azure Monitor alerts at scale | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See the [Alerts](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup-preview) documentation. |
+| Monitoring | View Azure Backup metrics and write metric alert rules | Azure VM <br><br>SQL in Azure VM <br><br> SAP HANA in Azure VM<br><br>Azure Files | You can view metrics for all Recovery Services vaults for a region and subscription simultaneously. Viewing metrics for a larger scope in the Azure portal isn't currently supported. The same limits are also applicable to configure metric alert rules. For more information, see [View metrics in the Azure portal](metrics-overview.md#view-metrics-in-the-azure-portal).|
+| Actions | Configure backup | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) |
+| Actions | Restore Backup Instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) |
+| Actions | Create vault | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Recovery Services vault](./backup-support-matrix.md#vault-support) |
+| Actions | Create backup policy | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Recovery Services vault](./backup-support-matrix.md#vault-support) |
+| Actions | Execute on-demand backup for a backup instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) |
+| Actions | Stop backup for a backup instance | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM <br><br> Azure Files<br/><br/> Azure Blobs<br/><br/> Azure Managed Disks | See support matrices for [Azure VM backup](./backup-support-matrix-iaas.md) and [Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md) |
+| Actions | Execute cross-region restore job from Backup center | Azure Virtual Machine <br><br> SQL in Azure VM <br><br> SAP HANA in Azure VM | See the [cross-region restore](./backup-create-rs-vault.md#set-cross-region-restore) documentation. |
+| Insights | View Backup Reports | Azure Virtual Machine <br><br> SQL in Azure Virtual Machine <br><br> SAP HANA in Azure Virtual Machine <br><br> Azure Files <br><br> System Center Data Protection Manager <br><br> Azure Backup Agent (MARS) <br><br> Azure Backup Server (MABS) | See [supported scenarios for Backup Reports](./configure-reports.md#supported-scenarios). |
+| Governance | View and assign built-in and custom Azure Policies under the category _Backup_ | N/A | N/A |
+| Governance | View datasources not configured for backup | Azure Virtual Machine <br><br> Azure Database for PostgreSQL server | N/A |
## Unsupported scenarios
Backup Center provides a single pane of glass for enterprises to [govern, monito
* [Review the support matrix for Azure Backup](./backup-support-matrix.md) * [Review the support matrix for Azure VM backup](./backup-support-matrix-iaas.md)
-* [Review the support matrix for Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-overview.md#support-matrix)
+* [Review the support matrix for Azure Database for PostgreSQL Server backup](backup-azure-database-postgresql-support-matrix.md)
backup Backup Postgresql Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-postgresql-cli.md
Title: Back up Azure Database for PostgreSQL with long-term-retention using Azure CLI description: Learn how to back up Azure Database for PostgreSQL using Azure CLI. Previously updated : 10/24/2021 Last updated : 01/24/2022 ++
-# Back up Azure PostgreSQL databases using Azure CLI (preview)
+# Back up Azure PostgreSQL databases using Azure CLI
This article explains how to back up [Azure PostgreSQL database](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) using Azure CLI.
In this article, you'll learn how to:
- Configure a backup of an Azure PostgreSQL database - Run an on-demand backup job
-For information on the Azure PostgreSQL databases supported scenarios and limitations, see the [support matrix](backup-azure-database-postgresql-overview.md#support-matrix).
+For information on the Azure PostgreSQL databases' supported scenarios and limitations, see the [support matrix](backup-azure-database-postgresql-support-matrix.md).
## Create a Backup vault
-Backup vault is a storage entity in Azure that stores backup data for various new workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure Disks. Backup vaults help to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+A Backup vault is a storage entity in Azure that stores backup data for new workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers, blobs in a storage account, and Azure Disks. Backup vaults help organize your backup data while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
Before you create a Backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the Backup vault with that storage redundancy and the location.
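For example, a minimal sketch of creating such a vault with the Azure CLI (the resource group and vault names are illustrative, and `--type SystemAssigned` is an assumption so that the vault gets the managed identity used later for permissions):

```azurecli
# Create a Backup vault with locally redundant storage and a system-assigned identity
az dataprotection backup-vault create \
    --resource-group testBkpVaultRG \
    --vault-name TestBkpVault \
    --location westus \
    --type SystemAssigned \
    --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
```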
While disk backup offers multiple backups per day and blob backup is a _continuo
- Initial Datastore (Where will the backups land initially) - Trigger (How the backup is triggered) - Schedule based
- - Default Tagging Criteria (A default 'tag' for all the scheduled backups. This tag links the backups to the retention rule)
+ - Default tagging criteria (a default 'tag' for all the scheduled backups. This tag links the backups to the retention rule)
- Default Retention Rule (A rule that will be applied to all backups, by default, on the initial datastore) So, this object defines what type of backups are triggered, how they are triggered (via a schedule), what they are tagged with, where they land (a datastore), and the life cycle of the backup data in a datastore. The default PowerShell object for PostgreSQL says to trigger a *full* backup every week; the backups reach the vault, where they're stored for three months.
The resultant PowerShell object is as follows:
- Initial Datastore (Where will the backups land initially) - Trigger (How the backup is triggered) - Schedule based
- - Default Tagging Criteria (A default 'tag' for all the scheduled backups. This tag links the backups to the retention rule)
+ - Default tagging criteria (a default 'tag' for all the scheduled backups. This tag links the backups to the retention rule)
- New Tagging criteria for the new retention rule with the same name 'X' - Default Retention Rule (A rule that will be applied to all backups, by default, on the initial datastore) - A new Retention rule named as 'X'
az dataprotection backup-policy get-default-policy-template --datasource-type Az
} ```
-The policy template consists of a trigger (which decides what triggers the backup) and a lifecycle (which decides when to delete/copy/move the backup). In Azure PostgreSQL database backup, the default value for trigger is a scheduled Weekly trigger (1 backup every 7 days) and to retain each backup for three months.
+The policy template consists of a trigger (which decides what triggers the backup) and a lifecycle (which decides when to delete/copy/move the backup). In Azure PostgreSQL database backup, the default value for trigger is a scheduled Weekly trigger (one backup every seven days) and to retain each backup for three months.
**Scheduled trigger:**
az dataprotection backup-policy retention-rule set --lifecycles .\VaultToArchive
Once a retention rule is created, you have to create a corresponding *tag* in the *Trigger* property of the Backup policy. Use the [az dataprotection backup-policy tag create-absolute-criteria](/cli/azure/dataprotection/backup-policy/tag#az_dataprotection_backup_policy_tag_create_absolute_criteria) command to create new tagging criteria, and use the [az dataprotection backup-policy tag set](/cli/azure/dataprotection/backup-policy/tag#az_dataprotection_backup_policy_tag_set) command to update the existing tag or create a new tag.
-The following example creates a new *tag* along with the criteria (which is the first successful backup of the month) with the same name as the corresponding retention rule to be applied.
+The following example creates a new *tag* along with the criteria, the first successful backup of the month. The tag has the same name as the corresponding retention rule to be applied.
In this example, the tag criteria should be named *Monthly*.
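A minimal sketch of those two commands, assuming the policy JSON from the earlier steps is saved locally (file names are illustrative):

```azurecli
# Build the criteria object for the first successful backup of the month
az dataprotection backup-policy tag create-absolute-criteria --absolute-criteria FirstOfMonth > tagCriteria.JSON

# Attach it to the policy as a tag named "Monthly", matching the retention rule's name
az dataprotection backup-policy tag set --criteria tagCriteria.JSON --name Monthly --policy policy.JSON > editedPolicy.JSON
```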
ossId="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/ossrg/providers/Mic
#### Azure key vault
-Azure Backup service doesn't store the username and password to connect to the PostgreSQL database. Instead, the backup admin needs to seed the *keys* into the key vault, and then the Backup service will access the key vault, read the keys, and then access the database. Note the secret identifier of the relevant key.
+The Azure Backup service doesn't store the username and password to connect to the PostgreSQL database. Instead, the backup admin needs to seed the *keys* into the key vault. Then the Backup service will access the key vault, read the keys, and then access the database. Note the secret identifier of the relevant key.
The following example uses bash.
keyURI="https://testkeyvaulteus.vault.azure.net/secrets/ossdbkey"
#### Backup vault
-Backup vault has to connect to the PostgreSQL server, and then access the database via the keys present in the key vault. Therefore, it requires access to the PostgreSQL server and the key vault. Access is granted to the Backup vault's MSI.
+Backup vault has to connect to the PostgreSQL server, and then access the database via the keys present in the key vault. Therefore, it requires access to the PostgreSQL server and the key vault. Access is granted to the Backup vault's Managed Service Identity (MSI).
-[Read about the appropriate permissions](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup) that you should grant to the Backup vault's MSI on the PostgreSQL server and the Azure Key vault, where the keys to the database are stored.
+See the [permissions](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup) you should grant to the Backup vault's MSI on the PostgreSQL server and the Azure key vault that stores the keys to the database.
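As an illustration, granting the vault's identity read access to the key vault's secrets might look like the following sketch (the identity lookup and resource names are assumptions; the linked article lists the full set of required permissions):

```azurecli
# Look up the principal ID of the Backup vault's system-assigned identity
vaultMSI=$(az dataprotection backup-vault show --resource-group testBkpVaultRG --vault-name TestBkpVault --query identity.principalId --output tsv)

# Allow that identity to read secrets from the key vault holding the database credentials
az keyvault set-policy --name testkeyvaulteus --secret-permissions get list --object-id $vaultMSI
```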
### Prepare the request
backup Backup Postgresql Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-postgresql-ps.md
Title: Back up Azure Database for PostgreSQL with long-term-retention using Azure PowerShell description: Learn how to back up Azure Database for PostgreSQL using Azure PowerShell. - Previously updated : 10/14/2021 Last updated : 01/24/2022 +++
-# Back up Azure PostgreSQL databases using Azure PowerShell (preview)
+# Back up Azure PostgreSQL databases using Azure PowerShell
This article explains how to back up [Azure PostgreSQL database](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) using Azure PowerShell.
In this article, you'll learn how to:
- Run an on-demand backup job
-For information on the Azure PostgreSQL databases supported scenarios and limitations, see the [support matrix](backup-azure-database-postgresql-overview.md#support-matrix).
+For information on the Azure PostgreSQL databases' supported scenarios and limitations, see the [support matrix](backup-azure-database-postgresql-support-matrix.md).
## Create a Backup vault
backup Manage Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-azure-database-postgresql.md
Title: Manage Azure Database for PostgreSQL server description: Learn about managing Azure Database for PostgreSQL server. Previously updated : 09/21/2021 Last updated : 01/24/2022+++
-# Manage Azure Database for PostgreSQL server (preview)
+# Manage Azure Database for PostgreSQL server
This article describes how to manage Azure Database for PostgreSQL servers that are backed up with the Azure Backup service.
You can change the associated policy with a backup instance.
:::image type="content" source="./media/manage-azure-database-postgresql/reassign-policy.png" alt-text="Screenshot showing the option to reassign policy.":::
-## Stop Protection (Preview)
+## Stop protection
-There are three ways by which you can stop protecting an Azure Database for PostgreSQL server.
+There are three ways to stop protecting an Azure Database for PostgreSQL server.
- **Stop Protection and Retain Data (Retain forever)**: This option helps you stop all future backup jobs from protecting your Azure Database for PostgreSQL server. However, the Azure Backup service will retain the backed-up recovery points forever. You'll need to pay to keep the recovery points in the vault (see [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/) for details). You'll be able to restore from these recovery points, if needed. To resume protection, use the **Resume backup** option.
There are three ways by which you can stop protecting an Azure Database for Post
- **Stop Protection and Delete Data**: This option helps you stop all future backup jobs from protecting your Azure Database for PostgreSQL server and delete all the recovery points. You won't be able to restore the database or use the **Resume backup** option.
-### Stop Protection and Retain Data
+### Stop protection and retain data
-1. Go to **Backup center** and select **Azure Database for PostgreSQL server (Preview)**.
+1. Go to **Backup center** and select **Azure Database for PostgreSQL server**.
1. From the list of server backup instances, select the instance that you want to retain.
-1. Select **Stop Backup (Preview)**.
+1. Select **Stop Backup**.
:::image type="content" source="./media/manage-azure-database-postgresql/select-postgresql-server-backup-instance-to-delete-inline.png" alt-text="Screenshot showing the selection of the Azure Database for PostgreSQL server backup instance to be stopped." lightbox="./media/manage-azure-database-postgresql/select-postgresql-server-backup-instance-to-delete-expanded.png":::
There are three ways by which you can stop protecting an Azure Database for Post
:::image type="content" source="./media/manage-azure-database-postgresql/confirmation-to-stop-backup-inline.png" alt-text="Screenshot for the confirmation for stopping backup." lightbox="./media/manage-azure-database-postgresql/confirmation-to-stop-backup-expanded.png":::
-### Stop Protection and Delete Data
+### Stop protection and delete data
-1. Go to **Backup center** and select **Azure Database for PostgreSQL server (Preview)**.
+1. Go to **Backup center** and select **Azure Database for PostgreSQL server**.
1. From the list of server backup instances, select the instance that you want to delete.
-1. Click **Stop Backup (Preview)**.
+1. Click **Stop Backup**.
1. Select **Delete Backup Data**.
There are three ways by which you can stop protecting an Azure Database for Post
:::image type="content" source="./media/manage-azure-database-postgresql/confirmation-to-stop-backup-inline.png" alt-text="Screenshot for the confirmation for stopping backup." lightbox="./media/manage-azure-database-postgresql/confirmation-to-stop-backup-expanded.png":::
-## Resume Protection
+## Resume protection
If you have selected the **Stop Protection and Retain data** option while stopping the data backup, you can resume protection for your Azure Database for PostgreSQL server. >[!Note] >When you start protecting a database, the backup policy is applied to the retained data as well. The recovery points that have expired as per the policy will be cleaned up.
-Use the following steps:
+Follow these steps:
-1. Go to **Backup center** and select **Azure Database for PostgreSQL server (Preview)**.
+1. Go to **Backup center** and select **Azure Database for PostgreSQL server**.
1. From the list of server backup instances, select the instance that you want to resume.
-1. Select **Resume Backup (Preview)**.
+1. Select **Resume Backup**.
:::image type="content" source="./media/manage-azure-database-postgresql/resume-data-protection-inline.png" alt-text="Screenshot showing the option to resume data protection." lightbox="./media/manage-azure-database-postgresql/resume-data-protection-expanded.png":::
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-azure-database-postgresql.md
Title: Restore Azure Database for PostgreSQL description: Learn about how to restore Azure Database for PostgreSQL backups. Previously updated : 10/01/2021 Last updated : 01/21/2022 +++
-# Restore Azure Database for PostgreSQL backups (preview)
+# Restore Azure Database for PostgreSQL backups
This article explains how to restore a database to an Azure PostgreSQL server backed up by Azure Backup.
Assign the Backup vault MSI the permission to access the storage account contain
1. Select the **Storage Blob Data Contributor** role in the **Role** drop-down list to the Backup vault MSI.
- :::image type="content" source="./media/restore-azure-database-postgresql/assign-vault-msi-permission-to-access-storage-account-containers-azure-portal-inline.png" alt-text="Screenshot showing the process to assign Backup vault MSI the permission to access the storage account containers using the Azure portal." lightbox="./media/restore-azure-database-postgresql/assign-vault-msi-permission-to-access-storage-account-containers-azure-portal-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/assign-vault-msi-permission-to-access-storage-account-containers-azure-portal-inline.png" alt-text="Screenshot showing the process to assign Backup vault M S I the permission to access the storage account containers using the Azure portal." lightbox="./media/restore-azure-database-postgresql/assign-vault-msi-permission-to-access-storage-account-containers-azure-portal-expanded.png":::
Alternatively, give granular permissions to the specific container you're restoring to by using the Azure CLI [az role assignment](/cli/azure/role/assignment) create command.
az role assignment create --assignee $VaultMSI_AppId --role "Storage Blob Data
``` Replace the assignee parameter with the _Application ID_ of the vault's MSI and the scope parameter to refer to your specific container. To get the **Application ID** of the vault MSI, select **All applications** under **Application type**. Search for the vault name and copy the Application ID.
- :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application ID of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/select-application-type-for-id-inline.png" alt-text="Screenshot showing the process to get the Application I D of the vault MSI." lightbox="./media/restore-azure-database-postgresql/select-application-type-for-id-expanded.png":::
- :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application ID of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
+ :::image type="content" source="./media/restore-azure-database-postgresql/copy-vault-id-inline.png" alt-text="Screenshot showing the process to copy the Application I D of the vault." lightbox="./media/restore-azure-database-postgresql/copy-vault-id-expanded.png":::
## Next steps
backup Restore Postgresql Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-postgresql-database-cli.md
Title: Restore Azure PostgreSQL databases via Azure CLI description: Learn how to restore Azure PostgreSQL databases using Azure CLI. Previously updated : 10/25/2021 Last updated : 01/24/2022
-# Restore Azure PostgreSQL databases using Azure CLI (preview)
+# Restore Azure PostgreSQL databases using Azure CLI
This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup.
-Being a PaaS database, the Original-Location Recovery (OLR) option to restore by replacing the existing database (from where the backups were taken) isn't supported. You can restore from a recovery point to create a new database in the same Azure PostgreSQL server or in any other PostgreSQL server. This is called Alternate-Location Recovery (ALR) that helps to keep both - the source database and the restored (new) database.
+Because this is a PaaS database, the Original Location Recovery (OLR) option, which restores by replacing the existing database (from where the backups were taken), isn't supported. You can restore from a recovery point to create a new database in the same Azure PostgreSQL server or in any other PostgreSQL server. This is called Alternate-Location Recovery (ALR), which keeps both the source database and the restored (new) database.
In this article, you'll learn how to:
We'll refer to an existing Backup vault _TestBkpVault_, under the resource group
### Set up permissions
-Backup vault uses Managed Identity to access other Azure resources. To restore from backup, Backup vault's managed identity requires a set of permissions on the Azure PostgreSQL server to which the database should be restored.
+Backup vault uses managed identity to access other Azure resources. To restore from backup, the Backup vault's managed identity requires a set of permissions on the Azure PostgreSQL server to which the database should be restored.
To assign the relevant permissions for vault's system-assigned managed identity on the target PostgreSQL server, see the [set of permissions needed to backup Azure PostgreSQL database](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-restore).
To restore the recovery point as files to a storage account, the [Backup vault's
### Fetch the relevant recovery point
-To list all backup instances within a vault, use [az dataprotection backup-instance list](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_list) command, and then fetch the relevant instance using the [az dataprotection backup-instance show](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_show) command. Alternatively, for _at-scale_ scenarios, you can list backup instances across vaults and subscriptions using the [az dataprotection backup-instance list-from-resourcegraph](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_list_from_resourcegraph) command.
+To list all backup instances within a vault, use [az dataprotection backup-instance list](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_list) command. Then fetch the relevant instance using the [az dataprotection backup-instance show](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_show) command. Alternatively, for _at-scale_ scenarios, you can list backup instances across vaults and subscriptions using the [az dataprotection backup-instance list-from-resourcegraph](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_list_from_resourcegraph) command.
```azurecli az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureDatabaseForPostgreSQL -subscriptions "xxxxxxxx-xxxx-xxxx-xxxx"
There are various restore options for a PostgreSQL database. You can restore the
#### Restore as database
-Construct the Azure Resource Manager ID (ARM ID) of the new PostgreSQL database to be created (with the target PostgreSQL server to which permissions were assigned as detailed [above](#set-up-permissions)) and the required PostgreSQL database name. For example, a PostgreSQL database can be named **emprestored21** under a target PostgreSQL server **targetossserver** in resource group **targetrg** with a different subscription.
+Construct the Azure Resource Manager ID (ARM ID) of the new PostgreSQL database to be created, under the [target PostgreSQL server to which permissions were assigned](#set-up-permissions), and include the required PostgreSQL database name. For example, a PostgreSQL database named **emprestored21** under a target PostgreSQL server **targetossserver** in resource group **targetrg** with a different subscription.
```azurecli $targetOssId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/targetrg/providers/Microsoft.DBforPostgreSQL/servers/targetossserver/databases/emprestored21"
az dataprotection backup-instance restore initialize-for-data-recovery --datasou
For an archive-based recovery point, you need to: 1. Rehydrate from archive datastore to vault store
-1. Modify the source datastore.
-1. Add other parameters to specify the rehydration priority.
-1. Specify the duration for which the rehydrated recovery point should be retained in the vault data store.
-1. Restore as a database from this recovery point.
+1. Modify the source datastore
+1. Add other parameters to specify the rehydration priority
+1. Specify the duration for which the rehydrated recovery point should be retained in the vault data store
+1. Restore as a database from this recovery point
-Use the following command to prepare the request for all the above-mentioned operations, at once.
+Use the following command to prepare the request for all the previously mentioned operations at once.
```azurecli az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureDatabaseForPostgreSQL --restore-location {location} --source-datastore ArchiveStore --target-resource-id $targetOssId --recovery-point-id 9da55e757af94261afa009b43cd3222a --secret-store-type AzureKeyVault --secret-store-uri "https://restoreoss-test.vault.azure.net/secrets/dbauth3" --rehydration-priority Standard --rehydration-duration 12 > OssRestoreFromArchiveReq.JSON
az dataprotection backup-instance restore initialize-for-data-recovery --datasou
#### Restore as files
-Fetch the URI of the container, within the storage account to which permissions were assigned as detailed [above](#set-up-permissions). For example, a container named **testcontainerrestore** under a storage account **testossstorageaccount** with a different subscription.
+Fetch the Uniform Resource Identifier (URI) of the container, within the storage account [to which permissions were assigned](#set-up-permissions). For example, a container named **testcontainerrestore** under a storage account **testossstorageaccount** with a different subscription.
```azurecli $contURI = "https://testossstorageaccount.blob.core.windows.net/testcontainerrestore"
Use the [az dataprotection backup-instance restore initialize-for-data-recovery-
az dataprotection backup-instance restore initialize-for-data-recovery-as-files --datasource-type AzureDatabaseForPostgreSQL --restore-location {location} --source-datastore VaultStore -target-blob-container-url $contURI --target-file-name "empdb11_postgresql-westus_1628853549768" --recovery-point-id 9da55e757af94261afa009b43cd3222a > OssRestoreAsFilesReq.JSON ```
-For archive-based recovery point, modify the source datastore, and add the rehydration priority and the retention duration, in days, of the rehydrated recovery point as mentioned below:
+For an archive-based recovery point, in the following script:
+
+- Modify the source datastore.
+- Add the rehydration priority and the retention duration, in days, of the rehydrated recovery point.
```azurecli az dataprotection backup-instance restore initialize-for-data-recovery-as-files --datasource-type AzureDatabaseForPostgreSQL --restore-location {location} --source-datastore ArchiveStore -target-blob-container-url $contURI --target-file-name "empdb11_postgresql-westus_1628853549768" --recovery-point-id 9da55e757af94261afa009b43cd3222a --rehydration-priority Standard --rehydration-duration 12 > OssRestoreAsFilesReq.JSON ```
-You can also validate if the JSON file will succeed to create new resources using the [az dataprotection backup-instance validate-for-restore](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_validate_for_restore) command.
+To validate whether the JSON file will succeed in creating new resources, use the [az dataprotection backup-instance validate-for-restore](/cli/azure/dataprotection/backup-instance#az_dataprotection_backup_instance_validate_for_restore) command.
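For example, using the request file prepared above and the backup instance name from the trigger step below:

```azurecli
az dataprotection backup-instance validate-for-restore -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name testpostgresql-empdb11-957d23b1-c679-4c94-ade6-c4d34635e149 --restore-request-object OssRestoreReq.JSON
```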
### Trigger the restore
-Use the [az dataprotection backup-instance restore trigger](/cli/azure/dataprotection/backup-instance/restore#az_dataprotection_backup_instance_restore_trigger) command to trigger the restore operation with the request prepared above.
+Use the [az dataprotection backup-instance restore trigger](/cli/azure/dataprotection/backup-instance/restore#az_dataprotection_backup_instance_restore_trigger) command to trigger the restore operation with the previously prepared request.
```azurecli-interactive az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name TestBkpVault --backup-instance-name testpostgresql-empdb11-957d23b1-c679-4c94-ade6-c4d34635e149 --restore-request-object OssRestoreReq.JSON
az dataprotection backup-instance restore trigger -g testBkpVaultRG --vault-name
Track all jobs using the [az dataprotection job list](/cli/azure/dataprotection/job#az_dataprotection_job_list) command. You can list all jobs and fetch a particular job detail.
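A minimal sketch of both, assuming the vault names used earlier in this article (the job ID is a placeholder taken from the list output):

```azurecli
# List all jobs in the vault
az dataprotection job list -g testBkpVaultRG --vault-name TestBkpVault

# Fetch the details of one job by its ID
az dataprotection job show -g testBkpVaultRG --vault-name TestBkpVault --job-id "00000000-0000-0000-0000-000000000000"
```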
-You can also use _Az.ResourceGraph_ to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job that is across all Backup vaults.
+You can also use _Az.ResourceGraph_ to track jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job#az_dataprotection_job_list_from_resourcegraph) command to get the relevant jobs across all Backup vaults.
```azurecli az dataprotection job list-from-resourcegraph --datasource-type AzureDatabaseForPostgreSQL --operation Restore
az dataprotection job list-from-resourcegraph --datasource-type AzureDatabaseFor
## Next steps -- [Azure PostgreSQL Backup overview](backup-azure-database-postgresql-overview.md)
+- [Overview of Azure PostgreSQL backup](backup-azure-database-postgresql-overview.md)
backup Restore Postgresql Database Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-postgresql-database-ps.md
Title: Restore Azure PostgreSQL databases via Azure PowerShell description: Learn how to restore Azure PostgreSQL databases using Azure PowerShell. Previously updated : 03/26/2021 Last updated : 01/24/2022
-# Restore Azure PostgreSQL databases using Azure PowerShell (preview)
+# Restore Azure PostgreSQL databases using Azure PowerShell
This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup.
$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault -Resourc
### Set up permissions
-Backup vault uses Managed Identity to access other Azure resources. To restore from backup, Backup vault's managed identity requires a set of permissions on the Azure PostgreSQL server to which the database should be restored.
+Backup vault uses managed identity to access other Azure resources. To restore from backup, the Backup vault's managed identity requires a set of permissions on the Azure PostgreSQL server to which the database should be restored.
To assign the relevant permissions for vault's system-assigned managed identity on the target PostgreSQL server, see the [set of permissions needed to backup Azure PostgreSQL database](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-restore).
Get-AzDataProtectionRecoveryPoint -ResourceGroupName "testBkpVaultRG" -VaultName
### Prepare the restore request
-There're various restore options for a PostgreSQL database. You can restore the recovery point as another database or restore as files. The recovery point can be on archive tier as well.
+There are various restore options for a PostgreSQL database. You can restore the recovery point as another database or restore as files. The recovery point can be on archive tier as well.
#### Restore as database
For an archive-based recovery point, you need to:
1. Specify the duration for which the rehydrated recovery point should be retained in the vault data store. 1. Restore as a database from this recovery point.
-Use the following command to prepare the request for all the above mentioned operations, at once.
+Use the following command to prepare the request for all the previously mentioned operations at once.
```azurepowershell-interactive $OssRestoreFromArchiveReq = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureDatabaseForPostgreSQL -SourceDataStore ArchiveStore -RestoreLocation $TestBkpVault.Location -RestoreType AlternateLocation -RecoveryPoint $rps[0].Property.RecoveryPointId -TargetResourceId $targetOssId -SecretStoreURI "https://restoreoss-test.vault.azure.net/secrets/dbauth3" -SecretStoreType AzureKeyVault -RehydrationDuration 12 -RehydrationPriority Standard
Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2]
Track all the jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch a particular job detail.
-You can also use *Az.ResourceGraph* to track all jobs across all Backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which is across all backup vault.
+You can also use *Az.ResourceGraph* to track jobs across all Backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant jobs across all Backup vaults.
```azurepowershell-interactive $job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureDatabaseForPostgreSQL -Operation OnDemandBackup
backup Restore Postgresql Database Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-postgresql-database-use-rest-api.md
Title: Restore Azure PostgreSQL databases via Azure data protection REST API description: Learn how to restore Azure PostGreSQL databases using Azure Data Protection REST API. Previously updated : 10/23/2021 Last updated : 01/24/2022
-# Restore Azure PostgreSQL databases using Azure data protection REST API (preview)
+# Restore Azure PostgreSQL databases using Azure data protection REST API
This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup.
We have constructed a section of the same in the [above section](#create-a-reque
The _validate restore request_ is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that you need to track separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
|Name |Type |Description | ||||
The only change from the _validate restore request_ body is to remove the _resto
The _trigger restore request_ is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). So, this operation creates another operation that needs to be tracked separately.
-It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
+It returns two responses: 202 (Accepted) when another operation is created, and then 200 (OK) when that operation completes.
|Name |Type |Description | ||||
backup Tutorial Postgresql Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-postgresql-backup.md
+
+ Title: Tutorial - Back up Azure Database for PostgreSQL server
+description: Learn about how to back up Azure Database for PostgreSQL server to an Azure Backup Vault.
+ Last updated : 01/24/2022++++
+# Back up Azure Database for PostgreSQL server
+
+This tutorial shows you how to back up an Azure Database for PostgreSQL server to an Azure Backup vault. In this article, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a Backup vault.
+> - Create a Backup Policy.
+> - Prepare the databases.
+> - Configure backup on the database.
+> - Run an on-demand backup.
+
+## Before you start
+
+Before you back up your Azure Database for PostgreSQL server:
+
+- Identify or create a Backup vault in the same region where you want to back up the Azure Database for PostgreSQL server instance.
+- Check that the Azure Database for PostgreSQL server is named in accordance with the naming guidelines for Azure Backup.
+- [Create secrets in the key vault](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault).
+- [Allow access permissions for the relevant key vault](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-key-vault-associated-with-the-postgresql-server).
+- [Provide database user's backup privileges on the database](backup-azure-database-postgresql-overview.md#database-users-backup-privileges-on-the-database).
+- [Allow access permissions for PostgreSQL server](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-postgresql-server).
++
+## Create a Backup vault
+
+A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Type **Backup center** in the search box.
+1. Under **Services**, select **Backup center**.
+1. On the **Backup center** page, select **Vault**.
+
+ :::image type="content" source="./media/backup-managed-disks/backup-center.png" alt-text="Screenshot showing to select Vault in Backup center.":::
+
+1. In the **Initiate: Create Vault** screen, select **Backup vault**, and **Proceed**.
+
+ :::image type="content" source="./media/backup-managed-disks/initiate-create-vault.png" alt-text="Screenshot showing to select Initiate: Create vault.":::
+
+1. On the **Basics** tab, provide subscription, resource group, backup vault name, region, and backup storage redundancy.
+
+ Continue by selecting **Review + create**. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault).
+
+ :::image type="content" source="./media/backup-managed-disks/review-and-create.png" alt-text="Screenshot showing to select Review and create vault.":::
+
+## Create Backup Policy
+
+You can create a Backup policy on the go during the configure backup flow. Alternatively, go to **Backup center** -> **Backup policies** -> **Add**.
+
+1. Enter a name for the new policy.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/enter-name-for-new-policy-inline.png" alt-text="Screenshot showing the process to enter a name for the new policy." lightbox="./media/backup-azure-database-postgresql/enter-name-for-new-policy-expanded.png":::
+
+1. Define the Backup schedule.
+
+   Currently, only the Weekly backup option is available. However, you can schedule the backups on multiple days of the week.
+
+1. Define **Retention** settings.
+
+   You can add one or more retention rules. Each retention rule takes as inputs the specific backups it applies to, and the data store and retention duration for those backups.
+
+1. To store your backups in one of the two data stores (or tiers), choose **Backup data store** (standard tier) or **Archive data store** (in preview).
+
+1. Choose **On-expiry** to move the backup to archive data store upon its expiry in the backup data store.
+
+ >[!Note]
+ >The **default retention rule** is applied in the absence of any other retention rule and has a default value of three months.
+ >
+ >- Retention duration ranges from seven days to 10 years in the **Backup data store**.
+ >- Retention duration ranges from six months to 10 years in the **Archive data store**.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/choose-option-to-move-backup-to-archive-data-store-inline.png" alt-text="Screenshot showing to choose On-expiry to move the backup to archive data store upon its expiry." lightbox="./media/backup-azure-database-postgresql/choose-option-to-move-backup-to-archive-data-store-expanded.png":::
+
+>[!Note]
+>The retention rules are evaluated in a pre-determined order of priority. The priority is the highest for the yearly rule, followed by the monthly, and then the weekly rule. Default retention settings are applied when no other rules qualify. For example, the same recovery point may be the first successful backup taken every week as well as the first successful backup taken every month. However, as the monthly rule priority is higher than that of the weekly rule, the retention corresponding to the first successful backup taken every month applies.
+
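+If you'd rather script the policy than use the portal, a minimal sketch with the Azure CLI (the names are illustrative; the default template is the weekly backup with three-month retention described in the note above):
+
+```azurecli
+# Fetch the default PostgreSQL policy template and create a policy from it
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureDatabaseForPostgreSQL > policy.JSON
+az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault -n PGBackupPolicy --policy policy.JSON
+```
+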
+## Prepare the database
+
+To prepare the database, follow these steps:
+
+1. [Create secrets in the key vault](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault) (a CLI sketch follows these steps).
+1. [Grant privileges to database users using PowerShell scripts](backup-azure-database-postgresql.md#run-powershell-script-to-grant-privileges-to-database-users).
++
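+For the first step, a minimal sketch with the Azure CLI (the key vault name, secret name, and connection string are illustrative; the secret value must be the server's connection string in ADO.NET format, updated with the credentials of the database user that has the backup privileges):
+
+```azurecli
+# Store the PostgreSQL connection string as a key vault secret
+az keyvault secret set --vault-name testkeyvaulteus --name ossdbkey --value "Server=testpostgresql.postgres.database.azure.com;Database=empdb11;Port=5432;User Id=backupadmin@testpostgresql;Password=<password>;Ssl Mode=Require;"
+```
+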
+## Configure backup on the database
+
+You can configure backup on multiple databases across multiple Azure PostgreSQL servers. To configure backup on the Azure PostgreSQL databases using Azure Backup, follow these steps:
+
+1. Go to **Backup vault** -> **+Backup**.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/adding-backup-inline.png" alt-text="Screenshot showing the option to add a backup." lightbox="./media/backup-azure-database-postgresql/adding-backup-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/adding-backup-details-inline.png" alt-text="Screenshot showing the option to add backup information." lightbox="./media/backup-azure-database-postgresql/adding-backup-details-expanded.png":::
+
+ Alternatively, you can navigate to this page from the [Backup center](./backup-center-overview.md).
+
+1. Select or [create](#create-backup-policy) a Backup Policy that defines the backup schedule and the retention duration.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/create-or-add-backup-policy-inline.png" alt-text="Screenshot showing the option to add a backup policy." lightbox="./media/backup-azure-database-postgresql/create-or-add-backup-policy-expanded.png":::
+
+1. **Select Azure PostgreSQL databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server.
+
+ >[!Note]
+ >You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-inline.png" alt-text="Screenshot showing the option to select an Azure PostgreSQL database." lightbox="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/choose-an-azure-postgresql-server-inline.png" alt-text="Screenshot showing how to choose an Azure PostgreSQL server." lightbox="./media/backup-azure-database-postgresql/choose-an-azure-postgresql-server-expanded.png":::
++
+1. **Assign Azure key vault** that stores the credentials to connect to the selected database. To assign the key vault at the individual row level, click **Select a key vault and secret**. You can also assign the key vault by multi-selecting the rows and clicking **Assign key vault** in the top menu of the grid.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/assign-azure-key-vault-inline.png" alt-text="Screenshot showing how to assign Azure key vault." lightbox="./media/backup-azure-database-postgresql/assign-azure-key-vault-expanded.png":::
+
+1. To specify the secret information, use one of the following options:
+
+   1. **Enter secret URI**: Use this option if the secret URI is shared/known to you. You can copy the secret URI from **Key vault** -> **Secrets** (select a secret) -> **Secret Identifier**.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/enter-secret-uri-inline.png" alt-text="Screenshot showing how to enter secret U R I." lightbox="./media/backup-azure-database-postgresql/enter-secret-uri-expanded.png":::
+
+      However, with this option, Azure Backup gets no visibility about the key vault you've referenced. Therefore, access permissions on the key vault can't be granted inline. The backup admin, along with the Postgres and/or key vault admin, needs to ensure that the backup vault's [access on the key vault is granted manually](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-key-vault-associated-with-the-postgresql-server) outside the configure backup flow for the backup operation to succeed.
+
+   1. **Select the key vault**: Use this option if you know the key vault and secret name. With this option, you (the backup admin with write access on the key vault) can grant the access permissions on the key vault inline. The key vault and the secret could pre-exist or be created on the go. Ensure that the secret is the PG server connection string in ADO.NET format, updated with the credentials of the database user that has been granted the _backup_ privileges on the server. Learn more about how to [create secrets in the key vault](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault).
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/assign-secret-store-inline.png" alt-text="Screenshot showing how to assign secret store." lightbox="./media/backup-azure-database-postgresql/assign-secret-store-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/select-secret-from-azure-key-vault-inline.png" alt-text="Screenshot showing the selection of secret from Azure Key Vault." lightbox="./media/backup-azure-database-postgresql/select-secret-from-azure-key-vault-expanded.png":::
+
+1. After the key vault information is updated, the validation starts.
+
+ >[!Note]
+ >
+ >- Here, the backup service validates if it has all the necessary [access permissions](backup-azure-database-postgresql-overview.md#key-vault-based-authentication-model) to read secret details from the key vault and connect to the database.
+   >- If one or more access permissions are found missing, it'll display one of the following error messages: _Role assignment not done_ or _User cannot assign roles_.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/validation-of-secret-inline.png" alt-text="Screenshot showing the validation of secret." lightbox="./media/backup-azure-database-postgresql/validation-of-secret-expanded.png":::
+
+   - **User cannot assign roles**: This message displays when you (the backup admin) don't have write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Download the assignment template from the action button and have it run by the PostgreSQL and/or key vault admin. It's an ARM template that helps you assign the necessary permissions on the required resources. Once the template runs successfully, click **Re-validate** on the Configure Backup page.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/download-role-assignment-template-inline.png" alt-text="Screenshot showing the option to download role assignment template." lightbox="./media/backup-azure-database-postgresql/download-role-assignment-template-expanded.png":::
+
+   - **Role assignment not done**: This message displays when you (the backup admin) have write access on the PostgreSQL server and/or key vault to assign missing permissions as listed under **View details**. Use the **Assign missing roles** action button in the top action menu to grant permissions on the PostgreSQL server and/or the key vault inline.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/role-assignment-not-done-inline.png" alt-text="Screenshot showing the error about the role assignment not done." lightbox="./media/backup-azure-database-postgresql/role-assignment-not-done-expanded.png":::
+
+1. Select **Assign missing roles** in the top menu and assign roles. Once the process starts, the [missing access permissions](backup-azure-database-postgresql-overview.md#azure-backup-authentication-with-the-postgresql-server) on the key vault and/or PostgreSQL server are granted to the backup vault. You can define the scope at which the access permissions should be granted. When the action is complete, re-validation starts.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/assign-missing-roles-inline.png" alt-text="Screenshot showing the option to assign missing roles." lightbox="./media/backup-azure-database-postgresql/assign-missing-roles-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/define-scope-of-access-permission-inline.png" alt-text="Screenshot showing to define the scope of access permission." lightbox="./media/backup-azure-database-postgresql/define-scope-of-access-permission-expanded.png":::
+
+   - The Backup vault accesses the secrets in the key vault and runs a test connection to the database to validate whether the credentials have been entered correctly. The privileges of the database user are also checked to see [if the database user has backup-related permissions on the database](backup-azure-database-postgresql-overview.md#database-users-backup-privileges-on-the-database).
+
+   - The PostgreSQL admin has all the backup and restore permissions on the database by default, so the validations will succeed.
+
+   - A low-privileged user may not have backup/restore permissions on the database, so the validations will fail. A PowerShell script is dynamically generated (one per record/selected database). [Run the PowerShell script to grant these privileges to the database user on the database](backup-azure-database-postgresql.md#run-powershell-script-to-grant-privileges-to-database-users). Alternatively, you can assign these privileges using the pgAdmin or psql tool.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/backup-vault-accesses-secrets-inline.png" alt-text="Screenshot showing the backup vault access secrets from the key vault." lightbox="./media/backup-azure-database-postgresql/backup-vault-accesses-secrets-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/run-test-connection.png" alt-text="Screenshot showing the process to start test connection.":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/user-credentials-to-run-test-connection-inline.png" alt-text="Screenshot showing how to provide user credentials to run the test." lightbox="./media/backup-azure-database-postgresql/user-credentials-to-run-test-connection-expanded.png":::
+
+1. Keep the records with backup readiness as Success to proceed to the last step of submitting the operation.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/backup-readiness-as-success-inline.png" alt-text="Screenshot showing the backup readiness is successful." lightbox="./media/backup-azure-database-postgresql/backup-readiness-as-success-expanded.png":::
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/review-backup-configuration-details-inline.png" alt-text="Screenshot showing the backup configuration review page." lightbox="./media/backup-azure-database-postgresql/review-backup-configuration-details-expanded.png":::
+
+1. Submit the configure backup operation and track the progress under **Backup instances**.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/submit-configure-backup-operation-inline.png" alt-text="Screenshot showing the backup configuration submission and tracking progress." lightbox="./media/backup-azure-database-postgresql/submit-configure-backup-operation-expanded.png":::
+
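+The portal flow above can also be scripted. A minimal sketch with the Azure CLI (all IDs, names, and the secret URI are illustrative, and the exact parameter names may differ across CLI versions):
+
+```azurecli
+# Prepare a backup instance request for the database, then submit it to the vault
+az dataprotection backup-instance initialize --datasource-type AzureDatabaseForPostgreSQL --datasource-location westus --datasource-id "/subscriptions/<sub-id>/resourceGroups/testpgrg/providers/Microsoft.DBforPostgreSQL/servers/testpostgresql/databases/empdb11" --policy-id "<policy-id>" --secret-store-type AzureKeyVault --secret-store-uri "https://testkeyvaulteus.vault.azure.net/secrets/ossdbkey" > backup_instance.JSON
+
+az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpVault --backup-instance backup_instance.JSON
+```
+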
+## Run an on-demand backup
+
+To trigger an on-demand backup (one that's not in the schedule specified in the policy), follow these steps:
+
+1. Go to **Backup instances** -> **Backup Now**.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/navigate-to-retention-rules-inline.png" alt-text="Screenshot showing the option to navigate to the list of retention rules that were defined in the associated Backup policy." lightbox="./media/backup-azure-database-postgresql/navigate-to-retention-rules-expanded.png":::
+
+1. Choose a retention rule from the list of rules defined in the associated Backup policy.
+
+ :::image type="content" source="./media/backup-azure-database-postgresql/choose-retention-rules-inline.png" alt-text="Screenshot showing the option to choose retention rules that were defined in the associated Backup policy." lightbox="./media/backup-azure-database-postgresql/choose-retention-rules-expanded.png":::
+
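+The same on-demand backup can also be triggered from the CLI. A minimal sketch (the backup instance name is illustrative, and the rule name must match a retention rule in the assigned policy):
+
+```azurecli
+# Trigger an on-demand backup tagged with the policy's Default retention rule
+az dataprotection backup-instance adhoc-backup -g testBkpVaultRG --vault-name TestBkpVault --name "testpostgresql-empdb11-957d23b1-c679-4c94-ade6-c4d34635e149" --rule-name BackupWeekly --retention-tag-override Default
+```
+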
+## Next steps
+
+In this tutorial, you used the Azure portal to:
+
+> [!div class="checklist"]
+>
+> - Create a Backup vault.
+> - Create a Backup Policy.
+> - Prepare the databases.
+> - Configure backup on the database.
+> - Run an on-demand backup.
+
+Continue to the how-to article to restore Azure Database for PostgreSQL.
+
+> [!div class="nextstepaction"]
+> [Restore Azure Database for PostgreSQL server](restore-azure-database-postgresql.md)
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- January 2022
+ - [Back up Azure Database for PostgreSQL is now generally available](#back-up-azure-database-for-postgresql-is-now-generally-available)
- October 2021 - [Archive Tier support for SQL Server/ SAP HANA in Azure VM from Azure portal](#archive-tier-support-for-sql-server-sap-hana-in-azure-vm-from-azure-portal) - [Multi-user authorization using Resource Guard (in preview)](#multi-user-authorization-using-resource-guard-in-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Back up Azure Database for PostgreSQL is now generally available
+
+Azure Backup and Azure Database services together help you build an enterprise-class backup solution for Azure Database for PostgreSQL (now generally available). You can meet your data protection and compliance needs with a customer-controlled backup policy that enables retention of backups for up to 10 years.
+
+With this, you have granular control to manage backup and restore operations at the individual database level. Likewise, you can restore across PostgreSQL versions or to blob storage with ease.
+
+For more information, see [Azure Database for PostgreSQL backup](backup-azure-database-postgresql-overview.md).
+ ## Archive Tier support for SQL Server/ SAP HANA in Azure VM from Azure portal Azure Backup now supports the movement of recovery points to the Vault-archive tier for SQL Server and SAP HANA in Azure Virtual Machines from the Azure portal. This allows you to move the archivable recovery points corresponding to a particular database to the Vault-archive tier at one go.
cognitive-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speaker-recognition.md
Title: "Speaker Recognition quickstart - Speech service"
-description: Learn how to use Speaker Recognition from the Speech SDK to answer the question, "who is speaking". In this quickstart, you learn about common design patterns for working with both speaker verification and identification, which both use voice biometry to identify unique voices.
+description: Learn how to use speaker recognition from the Speech SDK to answer the question, "Who is speaking?". In this quickstart, you learn about common design patterns for working with speaker verification and identification, which both use voice biometry to identify unique voices.
zone_pivot_groups: programming-languages-set-twenty-five
keywords: speaker recognition, voice biometry
-# Get started with Speaker Recognition
+# Get started with speaker recognition
::: zone pivot="programming-language-csharp" [!INCLUDE [C# Basics include](includes/how-to/speaker-recognition-basics/speaker-recognition-basics-csharp.md)]
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
The Speech SDK and Speech CLI use GStreamer to support different kinds of input
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
-## Installing GStreamer
+## Install GStreamer
-Choose a platform for installation instructions.
+Choose a platform for installation instructions.
Platform | Languages | Supported GStreamer version | : | : | ::
-Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
-Linux | C++, C#, Java, Python, Go | [Supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
-Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
+Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
+Linux | C++, C#, Java, Python, Go | [Supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
+Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
### [Android](#tab/android)
-See [GStreamer configuration by programming language](#gstreamer-configuration) for the details about building libgstreamer_android.so.
+For more information about building libgstreamer_android.so, see [GStreamer configuration by programming language](#gstreamer-configuration).
-For more information, see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
+For more information, see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
### [Linux](#tab/linux)
gstreamer1.0-plugins-ugly
``` ### [Windows](#tab/windows)
-Make sure that packages of the same platform (x64 or x86) are installed. For example, if you installed the x64 package for Python, then you need to install the x64 GStreamer package. The instructions below are for the x64 packages.
+Make sure that packages of the same platform (x64 or x86) are installed. For example, if you installed the x64 package for Python, you need to install the x64 GStreamer package. The following instructions are for the x64 packages.
-1. Create a folder c:\gstreamer
-1. Download [installer](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
-1. Copy the installer to c:\gstreamer
+1. Create the folder c:\gstreamer.
+1. Download the [installer](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi).
+1. Copy the installer to c:\gstreamer.
1. Open PowerShell as an administrator.
-1. Run the following command in the PowerShell:
+1. Run the following command in PowerShell:
```powershell cd c:\gstreamer msiexec /passive INSTALLLEVEL=1000 INSTALLDIR=C:\gstreamer /i gstreamer-1.0-msvc-x86_64-1.18.3.msi ```
-1. Add the system variables GST_PLUGIN_PATH with value C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0
-1. Add the system variables GSTREAMER_ROOT_X86_64 with value C:\gstreamer\1.0\msvc_x86_64
-1. Add another entry in the path variable as C:\gstreamer\1.0\msvc_x86_64\bin
-1. Reboot the machine
+1. Add the system variable GST_PLUGIN_PATH with the value C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0.
+1. Add the system variable GSTREAMER_ROOT_X86_64 with the value C:\gstreamer\1.0\msvc_x86_64.
+1. Add C:\gstreamer\1.0\msvc_x86_64\bin to the Path variable.
+1. Reboot the machine.
-For more information about GStreamer, see [Windows installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c).
+For more information about GStreamer, see [Windows installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c).
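+
+With GStreamer installed and on the path, the Speech SDK can consume compressed audio through a push stream. The following C# sketch is a minimal illustration, not the article's own sample: the subscription key, region, and `input.mp3` file name are placeholders, and MP3 is just one of the supported container formats.
+
+```csharp
+using System;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class Program
+{
+    static async Task Main()
+    {
+        var speechConfig = SpeechConfig.FromSubscription("<key>", "<region>");
+
+        // Declare the compressed container format; GStreamer performs the decoding.
+        var mp3Format = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MP3);
+        using var pushStream = AudioInputStream.CreatePushStream(mp3Format);
+        using var audioConfig = AudioConfig.FromStreamInput(pushStream);
+        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+
+        // Push raw MP3 bytes; a network stream can be fed the same way.
+        byte[] bytes = File.ReadAllBytes("input.mp3");
+        pushStream.Write(bytes);
+        pushStream.Close();
+
+        var result = await recognizer.RecognizeOnceAsync();
+        Console.WriteLine(result.Text);
+    }
+}
+```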
*** ## GStreamer configuration > [!NOTE]
-> GStreamer configuration requirements vary by programming language. For details, choose your programming language at the top of this page. The contents of this section will be updated.
+> GStreamer configuration requirements vary by programming language. For more information, choose your programming language at the top of this page. The contents of this section will be updated.
::: zone pivot="programming-language-csharp" [!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/prerequisites.md)]
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
As with all of the Cognitive Services resources, developers who use the speaker
| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speaker-recognition).| | What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV. | | Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
-| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#deleting-voice-profile-enrollments). Recognition audio samples aren't retained or stored. |
+| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#delete-voice-profile-enrollments). Recognition audio samples aren't retained or stored. |
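+
+As a hedged illustration of the enrollment and deletion lifecycle described in the table above, the following C# sketch uses the Speech SDK's `VoiceProfileClient`; the key, region, and `enrollment.wav` file name are placeholders, and text-independent verification is just one of the profile types.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class Program
+{
+    static async Task Main()
+    {
+        var config = SpeechConfig.FromSubscription("<key>", "<region>");
+        using var client = new VoiceProfileClient(config);
+
+        // Create and enroll a text-independent verification profile
+        // using mono 16-bit, 16 kHz PCM-encoded WAV audio.
+        var profile = await client.CreateProfileAsync(
+            VoiceProfileType.TextIndependentVerification, "en-us");
+        using var audioInput = AudioConfig.FromWavFileInput("enrollment.wav");
+        var enrollment = await client.EnrollProfileAsync(profile, audioInput);
+        Console.WriteLine(enrollment.Reason);
+
+        // Enrollment audio is retained by the service until the profile is deleted.
+        await client.DeleteProfileAsync(profile);
+    }
+}
+```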
## Next steps
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/managed-identity.md
In the following steps, we'll enable a system-assigned managed identity and gran
> [Managed identities for Azure resources: frequently asked questions](../../../active-directory/managed-identities-azure-resources/managed-identities-faq.md) > [!div class="nextstepaction"]
->[Use managed identities to acquire an access token](../../../app-service/overview-managed-identity.md?tabs=dotnet#obtain-tokens-for-azure-resources)
+>[Use managed identities to acquire an access token](../../../app-service/overview-managed-identity.md?tabs=dotnet#configure-target-resource)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
# Chat concepts
-Azure Communication Services Chat SDKs can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities.
+Azure Communication Services Chat can help you add real-time text communication to your cross-platform applications. This page summarizes key Chat concepts and capabilities. See the [Communication Services Chat Software Development Kit (SDK) Overview](./sdk-features.md) for lists of SDKs, languages, platforms, and detailed feature support.
+
+The Chat APIs provide an **auto-scaling** service for persistently stored text and data communication. Other key features include:
+
+- **Custom Identity and Addressing** - Azure Communication Services provides generic [identities](../identity-model.md) that are used to address communication endpoints. Clients use these identities to authenticate to the Azure service and communicate with each other in `chat threads` you control.
+- **Encryption** - Chat SDKs encrypt traffic and prevent tampering on the wire.
+- **Microsoft Teams Meetings** - Chat SDKs can [join Teams meetings](../../quickstarts/chat/meeting-interop.md) and communicate with Teams chat messages.
+- **Real-time Notifications** - Chat SDKs use efficient persistent connectivity (WebSockets) to receive real-time notifications such as when a remote user is typing. When apps are running in the background, built-in functionality is available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end users of new threads and messages.
+- **Service & Bot Extensibility** - REST APIs and server SDKs allow services to send and receive messages. Bots can be added easily with [Azure Bot Framework integration](../../quickstarts/chat/quickstart-botframework-integration.md).
+
-See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn more about specific SDK languages and capabilities.
## Chat overview Chat conversations happen within **chat threads**. Chat threads have the following properties: - A chat thread is uniquely identified by its `ChatThreadId`. -- Chat threads can have one or many users as participants who can send messages to it. -- A user can be a part of one or many chat threads. -- Only the thread participants have access to a given chat thread, and only they can perform chat thread operations. These operations include sending and receiving messages, adding participants, and removing participants.
+- Chat threads can have zero to 250 users as participants who can send messages to them.
+- A user can be a part of an unlimited number of chat threads.
+- Only thread participants can send or receive messages, add participants, or remove participants.
- Users are automatically added as a participant to any chat threads that they create. ### User access Typically the thread creator and participants have same level of access to the thread and can execute all related operations available in the SDK, including deleting it. Participants don't have write access to messages sent by other participants, which means only the message sender can update or delete their sent messages. If another participant tries to do that, they'll get an error. ### Chat Data
-Communication Services stores chat history until explicitly deleted. Chat thread participants can use `ListMessages` to view message history for a particular thread. Users removed from a chat thread will be able to view previous message history, but they won't be able to send or receive new messages as part of that chat thread. To learn more about data being stored by Communication Services, refer to documentation on [privacy](../privacy.md).
+Azure stores chat messages until explicitly deleted. Chat thread participants can use `ListMessages` to view message history for a particular thread. Users who are removed from a chat thread will be able to view previous message history but can't send or receive new messages. To learn more about data being stored by Communication Services, refer to the [data residency and privacy page](../privacy.md).
### Service limits - The maximum number of participants allowed in a chat thread is 250.
There are two core parts to chat architecture: 1) Trusted Service and 2) Client
:::image type="content" source="../../media/chat-architecture.png" alt-text="Diagram showing Communication Services' chat architecture."::: - **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services by using your resource connection string. This service is responsible for creating chat threads, adding and removing participants, and issuing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/access-tokens.md) quickstart.
+ - **Client app:** The client application connects to your trusted service and receives the access tokens that users use to connect directly to Communication Services. Once your trusted service has created the chat thread and added users as participants, they can use the client app to connect to the chat thread and send messages. Use the real-time notifications feature, discussed below, in your client app to subscribe to message and thread updates from other participants. A minimal sketch of the trusted service's token issuance follows this list.
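+
+The following C# sketch shows the trusted-service side using the `Azure.Communication.Identity` SDK to create a user identity and issue a chat-scoped access token. It's a minimal illustration under the assumption that the connection string placeholder is replaced with your resource's value; this code belongs on your server, never in the client app.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Communication.Identity;
+
+class TokenService
+{
+    public static async Task Main()
+    {
+        var identityClient = new CommunicationIdentityClient("<connection-string>");
+
+        // Create a Communication Services identity for the end user.
+        var user = await identityClient.CreateUserAsync();
+
+        // Issue a token scoped to chat; return this to the client app.
+        var token = await identityClient.GetTokenAsync(
+            user.Value, scopes: new[] { CommunicationTokenScope.Chat });
+        Console.WriteLine(token.Value.Token);
+    }
+}
+```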
## Message types
-As part of message history, Chat shares user-generated messages as well as system-generated messages. System messages are generated when a chat thread is updated and can help identify when a participant was added or removed or when the chat thread topic was updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain both kind of messages in chronological order.
+As part of message history, Chat shares user-generated messages as well as system-generated messages. System messages are generated when a chat thread is updated and identify when a participant was added or removed or when the chat thread topic was updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain both kinds of messages in chronological order.
-For user generated messages, the message type can be set in `SendMessageOptions` when sending a message to chat thread. If no value is provided, Communication Services will default to `text` type. Setting this value is important when sending HTML. When `html` is specified, Communication Services will sanitize the content to ensure that it's rendered safely on client devices.
+For user-generated messages, the message type can be set in `SendMessageOptions` when sending a message to a chat thread (see the sketch after the following list). If no value is provided, Communication Services defaults to the `text` type. Setting this value is important when sending HTML. When `html` is specified, Communication Services sanitizes the content to ensure that it's rendered safely on client devices.
- `text`: A plain text message composed and sent by a user as part of a chat thread. - `html`: A formatted message using html, composed and sent by a user as part of chat thread.
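+
+As a hedged sketch of setting the message type, the following C# example uses the `Azure.Communication.Chat` SDK, where the options type described above is named `SendChatMessageOptions`; the endpoint, token, and thread ID are placeholders.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Communication;
+using Azure.Communication.Chat;
+
+class Sender
+{
+    public static async Task Main()
+    {
+        var chatClient = new ChatClient(
+            new Uri("https://<resource>.communication.azure.com"),
+            new CommunicationTokenCredential("<user-access-token>"));
+
+        ChatThreadClient threadClient = chatClient.GetChatThreadClient("<thread-id>");
+
+        // Explicitly mark the content as HTML so the service sanitizes it
+        // before rendering on client devices.
+        await threadClient.SendMessageAsync(new SendChatMessageOptions
+        {
+            Content = "<b>Hello!</b>",
+            MessageType = ChatMessageType.Html
+        });
+    }
+}
+```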
You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with t
- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming message from a customer. - Analyze the incoming messages for key detection and entity recognition, and prompt relevant info to the user in your app based on the message content.
-One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other participants [1], calling cognitive APIs to translate the content to desired language[2,3] and sending the translated result as a message in the chat thread[4].
+One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other participants [1], calling Cognitive APIs to translate the content to the desired language [2,3], and sending the translated result as a message in the chat thread [4].
This way, the message history will contain both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../cognitive-services/translator/quickstart-translator.md) to understand how to use Cognitive APIs to translate text to different languages.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The capabilities that are available to you depend on the country that you're ope
The tables below summarize current availability:
-### Customers with US Azure Billing Addresses
+## Customers with US Azure Billing Addresses
|Number|Type |Send SMS | Receive SMS |Make Calls |Receive Calls| |:|:|:|:|:|:|
-|USA (includes PR) |Toll-Free |GA |GA |GA |GA |
-|USA (includes PR) |Local | | |GA |GA |
-|USA |Short-Codes |Public Preview |Public Preview | | |
+|USA (includes PR) |Toll-Free |GA |GA |GA |GA* |
+|USA (includes PR) |Local | | |GA |GA* |
+|USA |Short-Codes |Public Preview |Public Preview* | | |
-### (New) Customers with UK Azure Billing Addresses
+## Customers with UK Azure Billing Addresses
|Number|Type |Send SMS | Receive SMS |Make Calls |Receive Calls| |:|:|:|:|:|:|
-|USA (includes PR) |Toll-Free |GA |GA |Public Preview |Public Preview |
-|USA (includes PR) |Local | | |Public Preview |Public Preview |
+|UK |Toll-Free | | |Public Preview |Public Preview* |
+|UK |Local | | |Public Preview |Public Preview* |
+|USA (includes PR) |Toll-Free |GA |GA |Public Preview |Public Preview* |
+|USA (includes PR) |Local | | |Public Preview |Public Preview* |
-### (New) Customers with Ireland Azure Billing Addresses
+## Customers with Ireland Azure Billing Addresses
|Number|Type |Send SMS | Receive SMS |Make Calls |Receive Calls| |:|:|:|:|:|:|
-|USA (includes PR) |Toll-Free |GA |GA |GA |GA |
-|USA (includes PR) |Local | | |GA |GA |
+|USA (includes PR) |Toll-Free |GA |GA |GA |GA* |
+|USA (includes PR) |Local | | |GA |GA* |
+
+## Customers with Denmark Azure Billing Addresses
+|Number|Type |Send SMS | Receive SMS |Make Calls |Receive Calls|
+|:|:|:|:|:|:|
+|Denmark |Toll-Free | | |Public Preview |Public Preview* |
+|Denmark |Local | | |Public Preview |Public Preview* |
+
+***
+\* Available through Azure Bot Framework and Dynamics only
## Next Steps-- Get a [Toll-Free or Local Phone Number](../../quickstarts/telephony/get-phone-number.md)-- Get a [Short-Code](../../quickstarts/sms/apply-for-short-code.md)
+In this article, you learned about subscription eligibility and number capabilities for Communication Services.
+The following documents may be interesting to you:
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Prices for Azure Communication Services are generally based on a pay-as-you-go m
## Voice/Video calling and screen sharing
-Azure Communication Services allow for adding voice/video calling and screen sharing to your applications. You can embed the experience into your applications using JavaScript, Objective-C (Apple), Java (Android), or .NET SDKs. Refer to our [full list of available SDKs](./sdk-options.md).
+Azure Communication Services allows for adding voice/video calling and screen sharing to your applications. You can embed the experience into your applications using JavaScript, Objective-C (Apple), Java (Android), or .NET SDKs. Refer to our [full list of available SDKs](./sdk-options.md).
### Pricing
-Calling and screen-sharing services are charged on a per minute per participant basis at $0.004 per participant per minute for group calls. Azure Communication Services does not charge for data egress. To understand the various call flows that are possible, refer to [this page](./call-flows.md).
+Calling and screen-sharing services are charged on a per minute per participant basis at $0.004 per participant per minute for group calls. Azure Communication Services doesn't charge for data egress. To understand the various call flows that are possible, refer to [this page](./call-flows.md).
Each participant of the call will count in billing for each minute they're connected to the call. This holds true regardless of whether the user is video calling, voice calling, or screen-sharing. ### Pricing example: Group audio/video call using JS and iOS SDKs
-Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used the JS SDKs, Charlie iOS SDKs.
+Alice made a group call with her colleagues Bob and Charlie. Alice and Bob used the JS SDK; Charlie used the iOS SDK.
- The call lasts a total of 60 minutes. - Alice and Bob participated for the entire call. Alice turned on her video for five minutes and shared her screen for 23 minutes. Bob had the video on for the whole call (60 minutes) and shared his screen for 12 minutes.
Alice made a group call with her colleagues, Bob and Charlie. Alice and Bob used
**Cost calculations** -- 2 participants x 60 minutes x $0.004 per participant per minute = $0.48 [both video and audio are charged at the same rate]-- 1 participant x 43 minutes x $0.004 per participant per minute = $0.172 [both video and audio are charged at the same rate]
+- Two participants x 60 minutes x $0.004 per participant per minute = $0.48 [both video and audio are charged at the same rate]
+- One participant x 43 minutes x $0.004 per participant per minute = $0.172 [both video and audio are charged at the same rate]
**Total cost for the group call**: $0.48 + $0.172 = $0.652
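+
+The same calculation expressed as a small C# sketch, using the rates and durations from the example above:
+
+```csharp
+using System;
+
+// Group calling: $0.004 per participant per minute, same rate for video and audio.
+const decimal ratePerParticipantMinute = 0.004m;
+
+decimal aliceAndBob = 2 * 60 * ratePerParticipantMinute; // 60 minutes each
+decimal charlie = 1 * 43 * ratePerParticipantMinute;     // 43 minutes
+
+Console.WriteLine(aliceAndBob + charlie); // prints 0.652
+```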
-### Pricing example: Outbound Call from app using JS SDK to a PSTN number
+### Pricing example: Outbound Call from app using JS SDK to a PSTN (Public Switched Telephone Network) number
Alice makes a PSTN Call from an app to Bob on his US phone number beginning with `+1-425`.
Alice makes a PSTN Call from an app to Bob on his US phone number beginning with
**Cost calculations** -- 1 participant on the VoIP leg (Alice) from App to Communication Services servers x 10 minutes x $0.004 per participant leg per minute = $0.04-- 1 participant on the PSTN outbound leg (Bob) from Communication Services servers to a US telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13.
+- One participant on the VoIP leg (Alice) from App to Communication Services servers x 10 minutes x $0.004 per participant leg per minute = $0.04
+- One participant on the PSTN outbound leg (Bob) from Communication Services servers to a US telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13.
> [!Note]
-> USA mixed rates to `+1-425` is $0.013. Refer to the following link for details: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+> USA mixed rate to `+1-425` is $0.013. For details, refer to the following link: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv
**Total cost for the call**: $0.04 + $0.13 = $0.17
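+
+A corresponding C# sketch showing how the two legs of the call are billed at different rates, again using the figures from the example above:
+
+```csharp
+using System;
+
+// Each leg is billed independently: VoIP at $0.004/min, US PSTN outbound at $0.013/min.
+const decimal voipRate = 0.004m;
+const decimal pstnRate = 0.013m;
+
+decimal voipLeg = 1 * 10 * voipRate; // Alice, app to service, 10 minutes
+decimal pstnLeg = 1 * 10 * pstnRate; // Bob, service to US number, 10 minutes
+
+Console.WriteLine(voipLeg + pstnLeg); // prints 0.170, that is $0.17
+```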
Alice makes an outbound call from an Azure Communication Services app to a telep
**Cost calculations** -- 1 participant on the VoIP leg (Alice) from App to Communication Services servers x 10 minutes x $0.004 per participant leg per minute = $0.04-- 1 participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.
+- One participant on the VoIP leg (Alice) from App to Communication Services servers x 10 minutes x $0.004 per participant leg per minute = $0.04
+- One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.
**Total cost for the call**: $0.04 + $0.04 = $0.08
Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's
**Cost calculations** -- 2 participants on the VoIP leg (Alice and Bob) from App to Communication Services servers x 20 minutes x $0.004 per participant leg per minute = $0.16-- 1 participant on the PSTN outbound leg (Charlie) from Communication Services servers to US Telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13
+- Two participants on the VoIP leg (Alice and Bob) from App to Communication Services servers x 20 minutes x $0.004 per participant leg per minute = $0.16
+- One participant on the PSTN outbound leg (Charlie) from Communication Services servers to US Telephone number x 10 minutes x $0.013 per participant leg per minute = $0.13
Note: The USA mixed rate to `+1-425` is $0.013. For details, refer to the following link: https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv
Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit
**Cost calculations** -- 1 Participant (Bob) connected to Teams lobby x 1 minute x $0.004 per participant per minute (lobby charged at regular rate of meetings) = $0.004-- 1 participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]-- 1 participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.-- 1 participant (Bob) x 3 chat messages x $0.0008 = $0.0024.-- 1 participant (Alice) x 5 chat messages x $0.000 = $0.0*.
+- One Participant (Bob) connected to Teams lobby x 1 minute x $0.004 per participant per minute (lobby charged at regular rate of meetings) = $0.004
+- One participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]
+- One participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.
+- One participant (Bob) x three chat messages x $0.0008 = $0.0024.
+- One participant (Alice) x five chat messages x $0.000 = $0.0*.
-*Alice's participation is covered by her Teams license. Your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services Users for your convenience, but those minutes and messages originating from the Teams client will not cost.
+*Alice's participation is covered by her Teams license. Your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services Users for your convenience, but those minutes and messages originating from the Teams client won't be charged.
**Total cost for the visit**: - User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224
Alice has ordered a product from Contoso and struggles to set it up. Alice calls
- The call lasts a total of 30 minutes. - Bob accepts the call from Alice.-- After 5 minutes, Bob adds Charlie to the call. Charlie has his camera turned off for 10 minutes. Then turns his camera on for the rest of the call.
+- After five minutes, Bob adds Charlie to the call. Charlie has his camera turned off for 10 minutes, then turns it on for the rest of the call.
- After another 10 minutes, Alice leaves the call. -- After another 5 minutes, both Bob and Charlie leave the call
+- After another five minutes, both Bob and Charlie leave the call.
**Cost calculations** -- 1 Participant (Alice) called the phone number associated with Teams user Bob using Teams Calling plan x 25 minutes deducted from Bob's tenant Teams minute pool-- 1 participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]-- 1 participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
+- One Participant (Alice) called the phone number associated with Teams user Bob using Teams Calling plan x 25 minutes deducted from Bob's tenant Teams minute pool
+- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]
+- One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
*Charlie's participation is covered by his Teams license.
Alice has ordered a product from Contoso and struggles to set it up. Alice calls
## Call Recording
-Azure Communication Services allow to record PSTN, WebRTC, Conference, SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
+Azure Communication Services allows customers to record PSTN, WebRTC, Conference, and SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
### Price
Alice made a group call with her colleagues, Bob and Charlie.
- Bob stayed in a call for 30 minutes and Alice and Charlie for 60 minutes. **Cost calculations**-- You will be charged the length of the meeting. (Length of the meeting is the timeline between user starts a recording and either explicitly stops or when there is no one left in a meeting).
+- You'll be charged for the length of the meeting. (The meeting length is the time between when a user starts the recording and when the recording is explicitly stopped or no one is left in the meeting.)
- 60 minutes x $0.01 per recording per minute = $0.6 ### Pricing example: Record a call in a mixed audio-only format
Alice starts a call with Jane.
- The call lasts a total of 60 minutes. The recording lasted for 45 minutes. **Cost calculations**-- You will be charged the length of the recording.
+- You'll be charged for the length of the recording.
- 45 minutes x $0.002 per recording per minute = $0.09 ## Chat
-With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
+With Communication Services you can enhance your application with the ability to send and receive chat messages between two or more users. Chat SDKs are available for JavaScript, .NET, Python, and Java. Refer to [this page to learn about SDKs](./sdk-options.md)
### Price
You're charged $0.0008 for every chat message sent.
### Pricing example: Chat between two users
-Geeta starts a chat thread with Emily to share an update and sends 5 messages. The chat lasts 10 minutes. Geeta and Emily send another 15 messages each.
+Geeta starts a chat thread with Emily to share an update and sends five messages. The chat lasts 10 minutes. Geeta and Emily send another 15 messages each.
**Cost calculations** - Number of messages sent (5 + 15 + 15) x $0.0008 = $0.028
Rose sees the messages and starts chatting. In the meanwhile Casey gets a call a
- Number of messages sent (20 + 30 + 18 + 30 + 25 + 35) x $0.0008 = $0.1264
-## Telephony
+## SMS (Short Message Service) and Telephony
-## Price
+Refer to the following links for details on SMS and Telephony pricing:
-Telephony services are priced on a per-minute basis. Pricing is determined by the type and location of the number you're using as well as the destination of your calls.
-
-### Telephone number leasing
-
-Fees for phone number leasing are charged upfront and then recur on a month-to-month basis:
-
-|Number type |Monthly fee |
-|--|--|
-|Local (United States) |$1/mo |
-|Toll-free (United States) |$2/mo |
--
-### Telephone calling
-
-Traditional telephone calling (calling that occurs over the public switched telephone network) is available with pay-as-you-go pricing for phone numbers based in the United States. The price is a per-minute charge based on the type of number used and the destination of the call. Pricing details for the most popular calling destinations are included in the table below. Please see the [detailed pricing list](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv) for a full list of destinations.
--
-#### United States calling prices
-
-The following prices include required communications taxes and fees:
-
-|Number type |To make calls |To receive calls|
-|--|--||
-|Local |Starting at $0.013/min |$0.0085/min |
-|Toll-free |$0.013/min |$0.0220/min |
-
-#### Other calling destinations
-
-The following prices include required communications taxes and fees:
-
-|Make calls to |Price per minute|
-|--||
-|Canada |Starting at $0.013/min |
-|United Kingdom |Starting at $0.015/min |
-|Germany |Starting at $0.015/min |
-|France |Starting at $0.016/min |
+- [SMS Pricing Details](./sms-pricing.md)
+- [PSTN Pricing Details](./pstn-pricing.md)
## Next Steps
-The following documents may be interesting to you:
+Get started with Azure Communication Services:
-- [SMS Pricing](./sms-pricing.md)
+- [Send an SMS](../quickstarts/sms/send.md)
+- [Add Voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pstn-pricing.md
+
+ Title: Pricing for PSTN
+
+description: Learn about Communication Services' Telephony Pricing Model.
++ Last updated : 1/28/2022+++
+# Telephony (PSTN) Pricing
+
+> [!IMPORTANT]
+> Number Retention and Portability: Phone numbers that are assigned to you during any preview program may need to be returned to Microsoft if you do not meet regulatory requirements before General Availability. During private preview and public preview, telephone numbers are not eligible for porting. [Details on offers in Public Preview / GA](../concepts/numbers/sub-eligibility-number-capability.md)
+
+Numbers are billed on a per-month basis, and pricing differs based on the type of number and its source country. Once a number is purchased, customers can make and receive calls using that number and are billed on a per-minute basis. PSTN call pricing is based on the type of number and the location in which a call is terminated (destination), with a few scenarios having rates based on the origination location.
+
+In most cases, customers with Azure subscription locations that match the country of the number offer can buy that number. However, US and UK numbers may be purchased by customers with Azure subscription locations in other countries. For details, see [in-country and cross-country purchases](../concepts/numbers/sub-eligibility-number-capability.md).
+
+All prices shown below are in USD.
+
+## United States Telephony Offers
+
+### Phone Number Leasing Charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 2.00/mo |
+
+### Usage Charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0130/min |USD 0.0085/min |
+|Toll-free |Starting at USD 0.0130/min | USD 0.0220/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
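+
+As an illustrative C# sketch combining the leasing and usage charges above; the minute volumes are hypothetical:
+
+```csharp
+using System;
+
+// US toll-free number: $2.00/month lease, outbound from $0.0130/min,
+// inbound at $0.0220/min.
+const decimal lease = 2.00m;
+const decimal outboundRate = 0.0130m;
+const decimal inboundRate = 0.0220m;
+
+// Hypothetical month: 500 outbound minutes, 200 inbound minutes.
+decimal monthlyCost = lease + 500 * outboundRate + 200 * inboundRate;
+Console.WriteLine(monthlyCost); // prints 12.9000, that is $12.90
+```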
+
+## United Kingdom Telephony Offers
+
+### Phone Number Leasing Charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 2.00/mo |
+
+### Usage Charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0150/min |USD 0.0090/min |
+|Toll-free |Starting at USD 0.0150/min |Starting at USD 0.0290/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Denmark Telephony Offers
+
+### Phone Number Leasing Charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 0.82/mo |
+|Toll-Free |USD 25.00/mo |
+
+### Usage Charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0190/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0190/min |Starting at USD 0.0343/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+***
+
+Note: Pricing for all countries is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
+
+***
+## Next steps
+
+In this article, you learned how Telephony (PSTN) offers are priced for Azure Communication Services.
+
+The following documents may be interesting to you:
+- [Learn more about Telephony](../concepts/telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../quickstarts/telephony/get-phone-number.md)
+- [Phone number types in Azure Communication Services](../concepts/telephony/plan-solution.md)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/concepts.md
An exception policy controls the behavior of a Job based on a trigger and execut
## Next steps
+- [How jobs are matched to workers](matching-concepts.md)
- [Router Rule concepts](router-rule-concepts.md) - [Classification concepts](classification-concepts.md)-- [How jobs are matched to workers](matching-concepts.md)
+- [Exception Policies](exception-policy.md)
- [Quickstart guide](../../quickstarts/router/get-started-router.md) - [Manage queues](../../how-tos/router-sdk/manage-queue.md) - [Classifying a Job](../../how-tos/router-sdk/job-classification.md)
communication-services Exception Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/exception-policy.md
+
+ Title: Exception Policy
+
+description: Learn about the Azure Communication Services Job Router Exception Policy.
+++++ Last updated : 01/28/2022++
+zone_pivot_groups: acs-js-csharp
++
+# Exception Policy
++
+An Exception Policy is a set of rules that defines what actions to execute when a condition is triggered. You can save these policies inside Job Router and then attach them to one or more Queues.
+
+## Triggers
+
+The following triggers can be used to drive actions:
+
+**Queue Length -** Fires when the length of the queue exceeds a specified threshold while adding the job to the queue.
+
+**Wait Time -** Fires when the job has been waiting in the queue for longer than the specified threshold.
+
+When these triggers are fired, they'll execute one or more actions and send an [Exception Triggered Event][exception_triggered_event] via [Event Grid][subscribe_events].
+
+## Actions
+
+**Cancel -** Cancels the job and removes it from the queue.
+
+**Reclassify -** Reapplies the specified Classification Policy with modified labels to the job.
+
+**Manual Reclassify -** Modifies the job's queue, priority, and worker selectors.
+
+## Examples
+
+In the following example, we configure an Exception Policy that will cancel a job before it joins a queue with a length greater than 100.
++
+```csharp
+await client.SetExceptionPolicyAsync(
+ id: "policy-1",
+ name: "My Exception Policy",
+ rules: new List<ExceptionRule>
+ {
+ new ExceptionRule(
+ id: "rule-1",
+ trigger: new QueueLengthExceptionTrigger(threshold: 100),
+ actions: new List<ExceptionAction>
+ {
+ new CancelExceptionAction("cancel-action")
+ })
+ });
+```
+++
+```typescript
+await client.upsertExceptionPolicy({
+ id: "policy-1",
+ name: "My Exception Policy",
+ exceptionRules: [
+ {
+ id: "rule-1",
+ trigger: { kind: "queue-length", threshold: 100 },
+ actions: [
+ { kind: "cancel", id: "cancel-action" }
+ ]
+ }
+ ]
+ });
+```
++
+In the following example, we configure an Exception Policy with rules that will:
+
+- Set the job priority to 10 after it has been waiting in the queue for 1 minute.
+- Move the job to `queue-2` after it has been waiting for 5 minutes.
++
+```csharp
+await client.SetExceptionPolicyAsync(
+ id: "policy-1",
+ name: "My Exception Policy",
+ rules: new List<ExceptionRule>
+ {
+ new ExceptionRule(
+ id: "rule-1",
+ trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(1)),
+ actions: new List<ExceptionAction>
+ {
+ new ManualReclassifyExceptionAction(id: "action1", priority: 10)
+ }),
+ new ExceptionRule(
+ id: "rule-2",
+ trigger: new WaitTimeExceptionTrigger(TimeSpan.FromMinutes(5)),
+ actions: new List<ExceptionAction>
+ {
+ new ManualReclassifyExceptionAction(id: "action2", queueId: "queue-2")
+ })
+ });
+```
+++
+```typescript
+await client.upsertExceptionPolicy({
+ id: "policy-1",
+ name: "My Exception Policy",
+ exceptionRules: [
+ {
+ id: "rule-1",
+ trigger: { kind: "wait-time", threshold: "00:01:00" },
+ actions: [
+ { kind: "manual-reclassify", id: "action1", priority: 10 }
+ ]
+ },
+ {
+ id: "rule-2",
+ trigger: { kind: "wait-time", threshold: "00:05:00" },
+ actions: [
+ { kind: "manual-reclassify", id: "action2", queueId: "queue-2" }
+ ]
+ }
+ ]
+ });
+```
++
+<!-- LINKS -->
+[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md
+[exception_triggered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobexceptiontriggered
+
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/router-rule-concepts.md
In this example a `ExpressionRule`, which is a subtype of `RouterRule` can be us
```csharp await client.SetClassificationPolicyAsync( id: "my-policy-id",
- new ExpressionRule("If(job.Urgent = true, 10, 5)")
+ prioritizationRule: new ExpressionRule("If(job.Urgent = true, 10, 5)")
); ```
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
# SDKs and REST APIs
-Azure Communication Services capabilities are conceptually organized into eight areas. Most areas have fully open-sourced SDKs programmed against published REST APIs that you can use directly over the Internet. The Calling SDK uses proprietary network interfaces and is closed-source.
+Azure Communication Services APIs are organized into eight areas. Most areas have fully open-sourced SDKs programmed against published REST APIs that you can use directly over the Internet. The Calling SDK uses proprietary network interfaces and is closed-source.
-In the tables below we summarize these areas and availability of REST APIs and SDK libraries. We also note if APIs and SDKs are intended for end-user clients or trusted service environments. APIs and SDKs such as SMS should not be directly accessed by end-user devices in low trust environments.
+In the tables below we summarize these areas and availability of REST APIs and SDK libraries. We note if APIs and SDKs are intended for end-user clients or trusted service environments. APIs such as SMS should not be directly accessed by end-user devices in low trust environments.
-Development of Web-based Calling and Chat applications can be accelerated by [Azure Communication Services UI libraries](https://azure.github.io/communication-ui-library). The UI library provides production-ready UI components that you can drop into your applications.
+Development of Calling and Chat applications can be accelerated by the [Azure Communication Services UI library](./ui-library/ui-library-overview.md). The customizable UI library provides open-source UI components for Web and mobile apps, and a Microsoft Teams theme.
## REST APIs
-Communication Services APIs are documented alongside other Azure REST APIs in [docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using Postman. REST interface documentation is also offered in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
-
+Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
## SDKs | Assembly | Protocols| Environment | Capabilities|
The mapping between friendly assembly names and namespaces is:
| Network Traversal| Azure.Communication.NetworkTraversal | | UI Library | Azure.Communication.Calling| -
-## REST API Throttles
-Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a`429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-| API| Throttle|
-|||
-| [All Search Telephone Number Plan APIs](/rest/api/communication/phonenumbers) | 4 requests/day|
-| [Purchase Telephone Number Plan](/rest/api/communication/phonenumbers/purchasephonenumbers) | 1 purchase a month|
-| [Send SMS](/rest/api/communication/sms/send) | 200 requests/minute |
-- ## SDK platform support details ### iOS and Android
communication-services Media Comp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/media-comp.md
+
+ Title: Media Streaming and Composition
+
+description: Introduces Media Streaming and Composition for Azure Communication Services.
+++++ Last updated : 11/01/2021++++
+# Media Streaming and Composition
+
+Azure Communication Services Media Streaming and Composition enables you to build dynamic voice and video calling experiences at large scales, suitable for interactive streaming, virtual events, and broadcast scenarios. In a common video calling scenario, each participant is uploading several media streams captured from:
+
+- Cameras
+- Microphones
+- Applications (screen sharing)
+
+These media streams are typically arrayed in a grid and broadcast to call participants. Media Streaming and Composition allows you to extend and enhance this experience:
+
+- Connect devices and services using streaming protocols such as [RTMP](https://datatracker.ietf.org/doc/html/rfc7016) or [SRT](https://datatracker.ietf.org/doc/html/draft-sharabayko-srt)
+- Compose media streams into complex scenes
+
+RTMP & SRT connectivity can be used for both input and output. Using RTMP/SRT input, a videography studio that emits RTMP/SRT can join an Azure Communication Services call. RTMP/SRT output allows you to stream media from Azure Communication Services into [Azure Media Services](https://docs.microsoft.com/azure/media-services/latest/concepts-overview), YouTube Live, and many other broadcasting channels. The ability to attach industry standard RTMP/SRT emitters and to output content to RTMP/SRT subscribers for broadcasting transforms a small group call into a virtual event that reaches millions of people in real time.
+
+Media Composition REST APIs (and open-source SDKs) allow you to command the Azure service to cloud compose these media streams. For example, a **presenter layout** can be used to compose a speaker and a translator together in a classic picture-in-picture style. Media Composition allows for all clients and services connected to the media data plane to enjoy a particular dynamic layout without local processing or application complexity.
+
+ In the diagram below, three endpoints are participating actively in a group call and uploading media. Two users, one of which is using Microsoft Teams, are composed using a *presenter layout.* The third endpoint is a television studio that emits RTMP into the call. The Azure Calling client and Teams client will receive the composed media stream instead of a typical grid. Additionally, Azure Media Services is shown here subscribing to the call's RTMP channel and broadcasting content externally.
++
+This functionality is activated through REST APIs and open-source SDKs. Below is an example of the JSON encoded configuration of a presenter layout for the above scenario:
+
+```
+{
+ layout: {
+    type: 'presenter',
+    presenter: {
+      supportPosition: 'right',
+      primarySource: '1', // source id
+    }
+  },
+  sources: [
+    { id: '1' }, { id: '2' }
+ ]
+}
+
+```
+The presenter layout is one of several layouts available through the media composition capability:
+
+- **Grid** - This is the typical video calling layout, where all media sources are shown on a grid with similar sizes. You can use the grid layout to specify grid positions and size.
+- **Presentation** - Similar to the grid layout, but media sources can have different sizes, allowing for emphasis.
+- **Presenter** - This layout overlays two sources on top of each other.
+- **Weather Person** - This layout overlays two sources, and in real time Azure removes the background behind people.
+
+<!-To try out media composition, check out following content:-->
+
+<!- [Quick Start - Applying Media Composition to a video call](../../quickstarts/media-composition/get-started-media-composition.md) -->
+<!- [Tutorial - Media Composition Layouts](../../quickstarts/media-composition/media-composition-layouts.md) -->
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
# What is Azure Communication Services?
-Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication features to your applications without being an expert in communication technologies such as media encoding and real-time networking. This functionality is also supported in Azure for government.
+Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication to your applications without being an expert in underlying technologies such as media encoding or telephony. Azure Communication Service is available in multiple [Azure geographies](concepts/privacy.md) and Azure for government.
Azure Communication Services supports various communication formats:
-1. Voice and Video Calling
-1. Rich Text Chat
-1. SMS
+- [Voice and Video Calling](concepts/voice-video-calling/calling-sdk-features.md)
+- [Rich Text Chat](concepts/chat/concepts.md)
+- [SMS](concepts/sms/concepts.md)
-You can connect custom client endpoints, custom services, and the publicly switched telephony network (PSTN) to your communications application. You can acquire phone numbers directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications. Azure Communication Services direct routing allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
+You can connect custom client apps, custom services, and the publicly switched telephony network (PSTN) to your communications experience. You can acquire [phone numbers](./concepts/telephony/plan-solution.md) directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications. Azure Communication Services [direct routing](./concepts/telephony/plan-solution.md) allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
-In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Java (Android), Windows (.NET). A [UI library for web browsers](https://aka.ms/acsstorybook) can accelerate development for mobile and desktop browsers. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
+In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Java (Android), Windows (.NET). A [UI library](https://aka.ms/acsstorybook) can accelerate development for Web, iOS, and Android apps. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
Scenarios for Azure Communication Services include: -- **Business to Consumer (B2C).** A business' employees and services interact with consumers using voice, video, and rich text chat in a custom browser or mobile application. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](https://github.com/microsoft/botframework-telephony/blob/main/EnableTelephony.md) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.-- **Consumer to Consumer (C2C).** Build engaging social spaces for consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, or use complete application samples and an open-source UI toolkit to help you get started quickly.
+- **Business to Consumer (B2C).** Employees and services engage external customers using voice, video, and text chat in browser and native apps. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](https://github.com/microsoft/botframework-telephony/blob/main/EnableTelephony.md) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.
+- **Consumer to Consumer (C2C).** Build engaging consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, or use complete application samples and an open-source UI toolkit to help you get started quickly.
To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked below.
After creating a Communication Services resource you can start building client s
|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.| |**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|Telephony channel is a channel in Microsoft Bot Framework that enables the bot to interact with users over the phone. It leverages the power of Microsoft Bot Framework combined with the Azure Communication Services and the Azure Speech Services. | - ## Samples The following samples demonstrate end-to-end usage of the Azure Communication Services. Use these samples to bootstrap your own Communication Services solutions.
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/compare-options.md
Title: 'Comparing Container Apps with other Azure container options'
-description: Understand when to use Azure Container Apps and how it compares to d container options including Azure Container Instances, Azure App Service, Azure Functions, and Azure Kubernetes Service.
+description: Understand when to use Azure Container Apps and how it compares to other container options including Azure Container Instances, Azure App Service, Azure Functions, and Azure Kubernetes Service.
container-instances Container Instances Tutorial Azure Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-tutorial-azure-function-trigger.md
This article assumes you publish the project using the name *myfunctionapp*, in
The following commands enable a system-assigned [managed identity](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json#add-a-system-assigned-identity) in your function app. The PowerShell host running the app can automatically authenticate to Azure using this identity, enabling functions to take actions on Azure services to which the identity is granted access. In this tutorial, you grant the managed identity permissions to create resources in the function app's resource group.
-[Add an identity](../app-service/overview-managed-identity.md?tabs=dotnet#using-azure-powershell-1) to the function app:
+[Add an identity](../app-service/overview-managed-identity.md?tabs=ps%2Cdotnet) to the function app:
```powershell
Update-AzFunctionApp -Name myfunctionapp `
```
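For comparison, a minimal sketch of the same step using the Azure CLI instead of Azure PowerShell (the resource group name is a hypothetical placeholder):

```azurecli
# Enable a system-assigned managed identity on the function app.
az functionapp identity assign --name myfunctionapp --resource-group myResourceGroup
```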
cosmos-db Kafka Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/kafka-connect.md
Title: Integrate Apache Kafka and Azure Cosmos DB Cassandra API using Kafka Connect description: Learn how to ingest data from Kafka to Azure Cosmos DB Cassandra API using DataStax Apache Kafka Connector-+ Last updated 12/14/2020-+
cosmos-db Manage Data Cqlsh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/manage-data-cqlsh.md
+
+ Title: 'Quickstart: Cassandra API with CQLSH - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB's Apache Cassandra API to create a profile application using CQLSH.
+++++ Last updated : 01/24/2022++++
+# Quickstart: Build a Cassandra app with CQLSH and Azure Cosmos DB
+
+> [!div class="op_single_selector"]
+> * [.NET](manage-data-dotnet.md)
+> * [.NET Core](manage-data-dotnet-core.md)
+> * [Java v3](manage-data-java.md)
+> * [Java v4](manage-data-java-v4-sdk.md)
+> * [Node.js](manage-data-nodejs.md)
+> * [Python](manage-data-python.md)
+> * [Golang](manage-data-go.md)
+
+In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use CQLSH to create a Cassandra database and container. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
++
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+
+## Create a database account
+
+Before you can create a Cassandra database, you need to create a Cassandra account with Azure Cosmos DB.
++
+## Install standalone CQLSH tool
+
+Refer to [CQL shell](cassandra-support.md#cql-shell) for steps to launch the standalone cqlsh tool.
++
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the cqlsh command. The connection string details enable cqlsh to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Connection String**.
+
+ :::image type="content" source="./media/manage-data-java/copy-username-connection-string-azure-portal.png" alt-text="View and copy a username from the Azure portal, Connection String page":::
+
+2. Use the :::image type="icon" source="./media/manage-data-java/copy-button-azure-portal.png"::: button on the right side of the screen to copy the USERNAME and PASSWORD values.
+
+3. In your terminal, set the SSL variables:
+ ```bash
+ # Export the SSL variables:
+ export SSL_VERSION=TLSv1_2
+ export SSL_VALIDATE=false
+ ```
+
+4. Connect to Azure Cosmos DB API for Cassandra:
+ - Paste the USERNAME and PASSWORD value into the command.
+ ```bash
+ cqlsh <USERNAME>.cassandra.cosmos.azure.com 10350 -u <USERNAME> -p <PASSWORD> --ssl --protocol-version=4
+ ```
++
+## CQL commands to create and run an app
+- Create a keyspace
+```sql
+CREATE KEYSPACE IF NOT EXISTS uprofile
+WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 };
+```
+- Create a table
+```sql
+CREATE TABLE IF NOT EXISTS uprofile.user (user_id int PRIMARY KEY, user_name text, user_bcity text);
+```
+- Insert a row into the user table
+```sql
+INSERT INTO uprofile.user (user_id, user_name, user_bcity) VALUES (101,'johnjoe','New York');
+```
+You can also insert data using the COPY command.
+```sql
+COPY uprofile.user(user_id, user_name, user_bcity) FROM '/path to file/fileName.csv'
+WITH DELIMITER = ',' ;
+```
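+For illustration, a hypothetical `fileName.csv` matching the `uprofile.user` schema would contain comma-delimited rows like the ones created below:
+
+```bash
+# Create a sample CSV for the COPY command above; the rows are
+# hypothetical and must follow the (user_id, user_name, user_bcity) order.
+cat > fileName.csv <<'EOF'
+102,janedoe,Seattle
+103,benlin,Chicago
+EOF
+```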
+- Query the user table
+```sql
+SELECT * FROM uprofile.user;
+```
+
+In the Azure portal, open **Data Explorer** to query, modify, and work with this new data.
+ :::image type="content" source="./media/manage-data-java/view-data-explorer-java-app.png" alt-text="View the data in Data Explorer - Azure Cosmos DB":::
++
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API and use CQLSH to create a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account.
+
+> [!div class="nextstepaction"]
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/manage-data-go.md
Title: Build a Go app with Azure Cosmos DB Cassandra API using the gocql client description: This quickstart shows how to use a Go client to interact with Azure Cosmos DB Cassandra API --++ ms.devlang: golang
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Title: Migrate data from PostgreSQL to Azure Cosmos DB Cassandra API account using Apache Kafka description: Learn how to use Kafka Connect to synchronize data from PostgreSQL to Azure Cosmos DB Cassandra API in real time.-+ Last updated 01/05/2021-+
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-go.md
Title: Connect a Go application to Azure Cosmos DB's API for MongoDB description: This quickstart demonstrates how to connect an existing Go application to Azure Cosmos DB's API for MongoDB.--++ ms.devlang: golang
cosmos-db Create Mongodb Rust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-rust.md
Title: Connect a Rust application to Azure Cosmos DB's API for MongoDB description: This quickstart demonstrates how to build a Rust application backed by Azure Cosmos DB's API for MongoDB.--++ ms.devlang: rust
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partial-document-update-getting-started.md
Title: Getting started with Azure Cosmos DB Partial Document Update description: This article provides example for how to use Partial Document Update with .NET, Java, Node SDKs-+ Last updated 12/09/2021-+
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partial-document-update.md
Title: Partial document update in Azure Cosmos DB description: Learn about partial document update in Azure Cosmos DB.-+ Last updated 08/23/2021-+
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/tutorial-springboot-azure-kubernetes-service.md
Title: Tutorial - Spring Boot application with Azure Cosmos DB SQL API and Azure Kubernetes Service description: This tutorial demonstrates how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB SQL API account.-+ ms.devlang: java Last updated 10/01/2021-+
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
The policy JSON should resemble the following example. Replace `bucketname` with
"Sid": "VisualEditor0", "Effect": "Allow", "Action": [
- "organizations:ListAccounts",
- "iam:ListRoles",
- "ce:*",
- "cur:DescribeReportDefinitions"
+ "organizations:ListAccounts",
+ "iam:ListRoles",
+ "ce:*",
+ "cur:DescribeReportDefinitions"
], "Resource": "*" },
data-factory Compare Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compare-versions.md
Previously updated : 04/09/2018 Last updated : 01/31/2022 # Compare Azure Data Factory with Data Factory version 1
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
If you use **certThumbprint** for authentication and the certificate is installe
1. Open the Microsoft Management Console (MMC). Add the **Certificates** snap-in that targets **Local Computer**.
2. Expand **Certificates** > **Personal**, and then select **Certificates**.
3. Right-click the certificate from the personal store, and then select **All Tasks** > **Manage Private Keys**.
-3. On the **Security** tab, add the user account under which the Integration Runtime Host Service (DIAHostService) is running, with read access to the certificate.
+4. On the **Security** tab, add the user account under which the Integration Runtime Host Service (DIAHostService) is running, with read access to the certificate.
+5. The HTTP connector loads only trusted certificates. If you're using a self-signed or nonintegrated CA-issued certificate, to enable trust, the certificate must also be installed in one of the following stores:
+ - Trusted People
+ - Third-Party Root Certification Authorities
+ - Trusted Root Certification Authorities
**Example 1: Using certThumbprint**
data-factory How To Data Flow Dedupe Nulls Snippets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-data-flow-dedupe-nulls-snippets.md
Previously updated : 09/30/2020 Last updated : 01/31/2022 # Dedupe rows and find nulls by using data flow snippets
By using code snippets in mapping data flows, you can easily perform common task
## Next steps
-* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
+* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
data-factory How To Data Flow Error Rows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-data-flow-error-rows.md
Previously updated : 11/22/2020 Last updated : 01/31/2022
This video walks through an example of setting-up error row handling logic in yo
## Next steps
-* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
+* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
data-factory How To Invoke Ssis Package Managed Instance Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
Previously updated : 04/14/2020 Last updated : 01/31/2022 # Run SSIS packages by using Azure SQL Managed Instance Agent
To cancel package execution from a SQL Managed Instance Agent job, take the foll
1. Stop the corresponding operation based on **executionId**. ## Next steps
-You can also schedule SSIS packages by using Azure Data Factory. For step-by-step instructions, see [Azure Data Factory event trigger](how-to-create-event-trigger.md).
+You can also schedule SSIS packages by using Azure Data Factory. For step-by-step instructions, see [Azure Data Factory event trigger](how-to-create-event-trigger.md).
data-factory Self Hosted Integration Runtime Automation Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-automation-scripts.md
Previously updated : 05/09/2020 Last updated : 01/31/2022 # Automating self-hosted integration runtime installation using local PowerShell scripts
You can follow below command-line example to use this script:
PS C:\windows\system32> C:\Users\username\Desktop\script-update-gateway.ps1 -version 3.13.6942.1 ``` If your current version is already the latest one, you'll see following result, suggesting no update is required.
- [:::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png#lightbox" alt-text="script 2 run result](media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png)":::
+ :::image type="content" source="media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png" alt-text="script 2 run result" lightbox="media/self-hosted-integration-runtime-automation-scripts/script-2-run-result.png":::
data-factory Solution Template Copy Files Multiple Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-template-copy-files-multiple-containers.md
Previously updated : 11/1/2018 Last updated : 01/31/2022 # Copy multiple folders with Azure Data Factory
If you want to copy multiple containers under root folders between storage store
1. Go to the **Copy multiple files containers between File Stores** template. Create a **New** connection to your source storage store. The source storage store is the store from which you want to copy files across multiple containers.
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image1.png" alt-text="Create a new connection to the source":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-1.png" alt-text="Create a new connection to the source":::
2. Create a **New** connection to your destination storage store.
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image2.png" alt-text="Create a new connection to the destination":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-2.png" alt-text="Create a new connection to the destination":::
3. Select **Use this template**.
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image3.png" alt-text="Use this template":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-3.png" alt-text="Use this template":::
4. You'll see the pipeline, as in the following example:
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image4.png" alt-text="Show the pipeline":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-4.png" alt-text="Show the pipeline":::
5. Select **Debug**, enter the **Parameters**, and then select **Finish**.
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image5.png" alt-text="Run the pipeline":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-5.png" alt-text="Run the pipeline":::
6. Review the result.
- :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image6.png" alt-text="Review the result":::
+ :::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-6.png" alt-text="Review the result":::
## Next steps
data-factory Solution Template Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-template-databricks-notebook.md
Previously updated : 04/27/2020 Last updated : 01/31/2022 # Transformation with Azure Databricks
data-factory Solution Template Migration S3 Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-template-migration-s3-azure.md
Previously updated : 09/07/2019 Last updated : 01/31/2022 # Migrate data from Amazon S3 to Azure Data Lake Storage Gen2
The template contains two parameters:
3. Go to the **Migrate historical data from AWS S3 to Azure Data Lake Storage Gen2** template. Input the connections to your external control table, AWS S3 as the data source store and Azure Data Lake Storage Gen2 as the destination store. Be aware that the external control table and the stored procedure reference the same connection.
- :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure1.png" alt-text="Screenshot that shows the Migrate historical data from AWS S3 to Azure Data Lake Storage Gen2 template.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-1.png" alt-text="Screenshot that shows the Migrate historical data from AWS S3 to Azure Data Lake Storage Gen2 template.":::
4. Select **Use this template**.
- :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure2.png" alt-text="Screenshot that highlights the Use this template button.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-2.png" alt-text="Screenshot that highlights the Use this template button.":::
5. You see that 2 pipelines and 3 datasets were created, as shown in the following example:
- :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure3.png" alt-text="Screenshot that shows the two pipelines and three datasets that were created by using the template.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-3.png" alt-text="Screenshot that shows the two pipelines and three datasets that were created by using the template.":::
6. Go to the "BulkCopyFromS3" pipeline and select **Debug**, enter the **Parameters**. Then, select **Finish**.
- :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure4.png" alt-text="Screenshot that shows where to select Debug and enter the parameters before you select Finish.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-4.png" alt-text="Screenshot that shows where to select Debug and enter the parameters before you select Finish.":::
7. You see results that are similar to the following example:
- :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure5.png" alt-text="Screenshot that shows the returned results.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-5.png" alt-text="Screenshot that shows the returned results.":::
### For the template to copy changed files only from Amazon S3 to Azure Data Lake Storage Gen2
The template contains two parameters:
3. Go to the **Copy delta data from AWS S3 to Azure Data Lake Storage Gen2** template. Input the connections to your external control table, AWS S3 as the data source store and Azure Data Lake Storage Gen2 as the destination store. Be aware that the external control table and the stored procedure reference the same connection.
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure1.png" alt-text="Create a new connection":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-1.png" alt-text="Create a new connection":::
4. Select **Use this template**.
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure2.png" alt-text="Use this template":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-2.png" alt-text="Use this template":::
5. You see that 2 pipelines and 3 datasets were created, as shown in the following example:
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure3.png" alt-text="Review the pipeline":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-3.png" alt-text="Review the pipeline":::
6. Go the "DeltaCopyFromS3" pipeline and select **Debug**, and enter the **Parameters**. Then, select **Finish**.
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure4.png" alt-text="Click **Debug**":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-4.png" alt-text="Click **Debug**":::
7. You see results that are similar to the following example:
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure5.png" alt-text="Review the result":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-5.png" alt-text="Review the result":::
8. You can also check the results in the control table with the query *"select * from s3_partition_delta_control_table"*; you'll see output similar to the following example:
- :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure6.png" alt-text="Screenshot that shows the results from the control table after you run the query.":::
+ :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-6.png" alt-text="Screenshot that shows the results from the control table after you run the query.":::
## Next steps
databox-online Azure Stack Edge Gpu 2110 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2110-release-notes.md
Previously updated : 10/26/2021 Last updated : 01/28/2022
The following new features are available in the Azure Stack Edge 2110 release.
- **Certificates for Edge container registry and Kubernetes dashboard** - Certificates for Edge container registry and Kubernetes dashboard are now supported. You can create and upload certificates via the local UI. For more information, see [Kubernetes certificates](azure-stack-edge-gpu-certificates-overview.md#kubernetes-certificates) and [Upload Kubernetes certificates](azure-stack-edge-gpu-manage-certificates.md#upload-kubernetes-certificates).
- **MetalLB in BGP mode** - Starting this release, you can configure load balancing on your Azure Stack Edge device using MetalLB via Border Gateway Protocol (BGP). Configuration is done by connecting to the PowerShell interface of the device and then running specific cmdlets. For more information, see [Configure load balancing with MetalLB on your Azure Stack Edge device](azure-stack-edge-gpu-configure-metallb-bgp-mode.md).

## Issues fixed in 2110 release

The following table lists the issues that were release noted in previous releases and fixed in the current release.
The following table provides a summary of known issues in the 2110 release.
| No. | Feature | Issue | Workaround/comments |
| | | | |
|**1.**|Preview features |For this release, the following features: Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R, Multi-process service (MPS), and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU - are all available in preview. |These features will be generally available in later releases. |
+|**2.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
## Known issues from previous releases
The following table provides a summary of known issues carried over from the pre
|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> | |**15.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. || |**16**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
-|**17.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**17.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
|**18.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | |**19.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**20.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
The following table provides a summary of known issues carried over from the pre
|**25.**|Multi-Access Edge Compute (MEC) |If Azure Stack Edge is running 2106, and a Network Function Device resource is created and the Azure Stack Edge is then updated to 2110, when you deploy the Network Function, the deployment will fail. The virtual network is not being created after the device is updated to 2110. The failure does not occur if there is an existing Network Function deployment on Azure Stack Edge. | To work around this issue, re-register the same device resource using the `Invoke-MecRegister` cmdlet on your Azure Stack Edge and use the activation key from the Azure Stack Edge resource. Alternatively, you can create virtual switches via the following commands: <ul><li> `Add-HcsExternalVirtualSwitch -InterfaceAlias Port5 -WaitForSwitchCreation $true -switchName mec-vswitch-LAN -SupportsAcceleratedNetworking $true` </li><li> `Add-HcsExternalVirtualSwitch -InterfaceAlias Port6 -WaitForSwitchCreation $true -switchName mec-vswitch-WAN -SupportsAcceleratedNetworking $true` </li></ul> | |**26.**|Multi-Access Edge Compute (MEC) |Azure Stack Edge was updated to version 2110 while the Network Functions VMs were running. In these instances, the update may fail when trying to stop the virtual machines that are connected to the Mellanox Ethernet adapter. The following error is seen in the event log: *The network interface "Mellanox ConnectX-4 Lx Ethernet Adapter" has begun resetting. There will be a momentary disruption in network connectivity while the hardware resets. Reason: The network driver did not respond to an OID request in a timely fashion. This network interface has reset 3 time(s) since it was last initialized.* |To work around this issue, reboot your Azure Stack Edge and retry updating the device. |
+## Next steps
- [Update your device](azure-stack-edge-gpu-install-update.md)
-
+
databox-online Azure Stack Edge Gpu 2111 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2111-release-notes.md
Previously updated : 11/18/2021 Last updated : 01/31/2022
The following table provides a summary of known issues in the 2111 release.
|**1.**|Preview features |For this release, the following features: Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R, Multi-process service (MPS), and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU - are all available in preview. |These features will be generally available in later releases. |

## Known issues from previous releases

The following table provides a summary of known issues carried over from the previous releases.
The following table provides a summary of known issues carried over from the pre
|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.| |**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.| |**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
-|**15.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
-|**16**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
-|**17.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
-|**18.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
-|**19.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
-|**20.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
-|**21.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. |
-|**22.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <ol><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li><ol> |
-|**23.**|GPU VMs |Prior to this release, GPU VM lifecycle was not managed in the update flow. Hence, when updating to 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update, are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` before the update, are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, higher the chances that Kubernetes will take over the GPUs. |
-|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available GPU. |
+|**23.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <ol><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li><ol> |
+|**24.**|GPU VMs |Prior to this release, GPU VM lifecycle was not managed in the update flow. Hence, when updating to the 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` flag before the update are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in a stopped state, the higher the chances that Kubernetes will take over the GPUs. |
+|**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
## Next steps
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples
-description: This article provides a list of Azure CLI scripting samples that help you manage labs in Azure Lab Services.
+description: Learn about Azure CLI scripts. With these samples, you can create a virtual machine and then start, stop, and delete it in Azure Lab Services.
Previously updated : 06/26/2020 Last updated : 02/02/2022 # Azure CLI Samples for Azure Lab Services
-The following table includes links to bash scripts built using the Azure CLI scripts for Azure Lab Services.
+This article includes sample bash scripts that use the Azure CLI with Azure Lab Services.
| Script | Description |
|||
-| [Create and verify availability of a VM](scripts/create-verify-virtual-machine-in-lab-cli.md) | Creates a Windows virtual machine with minimal configuration. |
-| [Start a VM](scripts/start-connect-virtual-machine-in-lab-cli.md) | Starts a VM. |
-| [Stop and delete a VM](scripts/stop-delete-virtual-machine-in-lab-cli.md) | Stops and deletes a VM. |
+| [Create and verify a virtual machine (VM)](#create-and-verify-availability-of-a-vm) | Creates a VM with minimal configuration. |
+| [Start a VM](#start-a-vm) | Starts a VM. |
+| [Stop and delete a VM](#stop-and-delete-a-vm) | Stops and deletes a VM. |
+
+## Prerequisites
+++
+All of these scripts have the following prerequisite:
+
+- **A lab**. Each script requires an existing lab.
+
+## Create and verify availability of a VM
+
+This Azure CLI script creates a virtual machine in a lab.
+The VM is created based on a marketplace image with SSH authentication.
+The script then verifies that the VM is available for use.
++
+This script uses the following commands:
+
+| Command | Notes |
+|||
+| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. |
+| [az lab vm create](/cli/azure/lab/vm#az_lab_vm_create) | Creates a VM in a lab. |
+| [az lab vm show](/cli/azure/lab/vm#az_lab_vm_show) | Displays the status of the VM in a lab. |
+
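+A minimal sketch of such a script, assuming an existing lab named `MyLab` in resource group `MyLabRg` (hypothetical names):
+
+```azurecli
+# Create a lab VM from a marketplace image with SSH authentication.
+az lab vm create --resource-group MyLabRg --lab-name MyLab --name MyVm \
+    --image "Ubuntu Server 16.04 LTS" --image-type gallery \
+    --size Standard_DS1_v2 --authentication-type ssh --generate-ssh-keys
+
+# Show the VM's properties to verify it's available for use.
+az lab vm show --resource-group MyLabRg --lab-name MyLab --name MyVm
+```
+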
+## Start a VM
+
+This Azure CLI script starts a virtual machine in a lab.
++
+This script uses the following commands:
+
+| Command | Notes |
+|||
+| [az lab vm start](/cli/azure/lab/vm#az_lab_vm_start) | Starts a VM in a lab. This operation can take a while to complete. |
+
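+A minimal sketch, using the same hypothetical lab and VM names as above:
+
+```azurecli
+az lab vm start --resource-group MyLabRg --lab-name MyLab --name MyVm
+```
+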
+## Stop and delete a VM
+
+This Azure CLI script stops and deletes a virtual machine in a lab.
++
+This script uses the following commands:
+
+| Command | Notes |
+|||
+| [az lab vm stop](/cli/azure/lab/vm#az_lab_vm_stop) | Stops a VM in a lab. This operation can take a while to complete. |
+| [az lab vm delete](/cli/azure/lab/vm#az_lab_vm_delete) | Deletes a VM in a lab. This operation can take a while to complete. |
+
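+A minimal sketch, again with the same hypothetical names:
+
+```azurecli
+az lab vm stop --resource-group MyLabRg --lab-name MyLab --name MyVm
+az lab vm delete --resource-group MyLabRg --lab-name MyLab --name MyVm
+```
+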
+## Clean up deployment
+
+Run the following command to remove the resource group, VM, and all related resources.
+
+```azurecli
+az group delete --name $resourceGroupName
+```
devtest-labs Create Verify Virtual Machine In Lab Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/create-verify-virtual-machine-in-lab-cli.md
- Title: Azure CLI - Create and verify a virtual machine in a lab
-description: This Azure CLI script creates a virtual machine in a lab, and verifies that it's available.
- Previously updated : 08/11/2020--
-# Use Azure CLI to create and verify availability of a virtual machine in a lab in Azure DevTest Labs
-
-This Azure CLI script creates a virtual machine (VM) in a lab. The VM created based on a marketplace image with ssh authentication. The script then verifies that the VM is available for use.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/devtest-lab/create-verify-virtual-machine-in-lab/create-verify-virtual-machine-in-lab.sh "Create and verify availability of a VM")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources.
-
-```azurecli-interactive
-az group delete --name myResourceGroup
-```
-
-## Script explanation
-
-This script uses the following commands:
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. |
-| [az lab vm create](/cli/azure/lab/vm#az_lab_vm_create) | Creates a virtual machine (VM) in a lab. |
-| [az lab vm show](/cli/azure/lab/vm#az_lab_vm_show) | Displays the status of the VM in a lab. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional Azure Lab Services CLI script samples can be found in the [Azure Lab Services CLI samples](../samples-cli.md).
devtest-labs Start Connect Virtual Machine In Lab Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/start-connect-virtual-machine-in-lab-cli.md
- Title: Azure CLI Script Sample - Start a virtual machine in a lab
-description: This Azure CLI script starts a virtual machine in a lab in Azure DevTest Labs.
- Previously updated : 08/11/2020--
-# Use Azure CLI to start a virtual machine in a lab in Azure DevTest Labs
-
-This Azure CLI script starts a virtual machine (VM) in a lab.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/devtest-lab/start-connect-virtual-machine-in-lab/start-connect-virtual-machine-in-lab.sh "Start a VM")]
--
-## Script explanation
-
-This script uses the following commands:
-
-| Command | Notes |
-|||
-| [az lab vm start](/cli/azure/lab/vm#az_lab_vm_start) | Starts a virtual machine (VM) in a lab. This operation can take a while to complete. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional Azure Lab Services CLI script samples can be found in the [Azure Lab Services CLI samples](../samples-cli.md).
devtest-labs Stop Delete Virtual Machine In Lab Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/stop-delete-virtual-machine-in-lab-cli.md
- Title: Azure CLI - Stop and delete a virtual machine in a lab
-description: This article provides an Azure CLI script that stops and deletes a virtual machine in a lab in Azure DevTest Labs.
- Previously updated : 08/11/2020--
-# Use Azure CLI to stop and delete a virtual machine in a lab in Azure DevTest Labs
-
-This Azure CLI script stops and deletes a virtual machine (VM) in a lab.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/devtest-lab/stop-delete-virtual-machine-in-lab/stop-delete-virtual-machine-in-lab.sh "Stop and delete a VM in a lab")]
-
-## Script explanation
-
-This script uses the following commands:
-
-| Command | Notes |
-|||
-| [az lab vm stop](/cli/azure/lab/vm#az_lab_vm_stop) | Stops a virtual machine (VM) in a lab. This operation can take a while to complete. |
-| [az lab vm delete](/cli/azure/lab/vm#az_lab_vm_delete) | Delets a virtual machine (VM) in a lab. This operation can take a while to complete. |
--
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-
-Additional Azure Lab Services CLI script samples can be found in the [Azure Lab Services CLI samples](../samples-cli.md).
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-connect-debezium.md
Title: Integrate Apache Kafka Connect on Azure Event Hubs with Debezium for Change Data Capture description: This article provides information on how to use Debezium with Azure Event Hubs for Kafka. -- Last updated 10/18/2021
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
Previously updated : 01/24/2022 Last updated : 01/31/2022
The following table shows locations by service provider. If you want to view ava
| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka, Tokyo | | **[Sohonet](https://www.sohonet.com/fastlane/)** |Supported |Supported |London2 | | **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** |Supported |Supported |Auckland, Sydney |
-| **[Sprint](https://business.sprint.com/solutions/cloud-networking/)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich | | **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported | Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC | | **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported | Amsterdam, Sao Paulo, Madrid |
The following table shows locations by service provider. If you want to view ava
| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur | | **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 | | **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)|
+| **[T-Mobile](https://www.t-mobile.com/business/solutions/networking/cloud-networking)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt | | **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported |Sao Paulo | | **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok |
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-preview.md
Previously updated : 01/27/2022 Last updated : 01/31/2022
Run the following Azure PowerShell commands to configure Azure Firewall network
```azurepowershell
Connect-AzAccount
Select-AzSubscription -Subscription "subscription_id or subscription_name"
-Register-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
+Register-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network
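+# Re-register the resource provider so the feature registration takes effect.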
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
```
Run the following Azure PowerShell command to turn off this feature:
Run the following Azure PowerShell commands to configure the Azure Firewall Prem
Connect-AzAccount
Select-AzSubscription -Subscription "subscription_id or subscription_name"
Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
```
Run the following Azure PowerShell command to turn off this feature:
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-deploy.md
Previously updated : 11/24/2021 Last updated : 01/31/2022
Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. It includes the following features:
-- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
+- **TLS Inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
- **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.
- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
- **Web categories** - administrators can allow or deny user access to website categories such as gambling websites, social media websites, and others.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Deploy the infrastructure
-The template deploys a complete testing environment for Azure Firewall Premium enabled with IDPS, TLS Inspection, URL Filtering and Web Categories:
+The template deploys a complete testing environment for Azure Firewall Premium enabled with IDPS, TLS Inspection, URL Filtering, and Web Categories:
-- a new Azure Firewall Premium and Firewall Policy with predefined settings to allow easy validation of its core capabilities (IDPS, TLS Inspection, URL Filtering and Web Categories)
-- deploys all dependencies including Key Vault and a Managed Identity. In a production environment these resources may already be created and not needed in the same template.
-- generates self signed Root CA and deploys it on the generated Key Vault
+- a new Azure Firewall Premium and Firewall Policy with predefined settings to allow easy validation of its core capabilities (IDPS, TLS Inspection, URL Filtering, and Web Categories)
+- deploys all dependencies including Key Vault and a Managed Identity. In a production environment, these resources may already be created and not needed in the same template.
+- generates a self-signed Root CA and deploys it on the generated Key Vault
- generates a derived Intermediate CA and deploys it on a Windows test virtual machine (WorkerVM)
- a Bastion Host (BastionHost) is also deployed and can be used to connect to the Windows testing machine (WorkerVM)
The template deploys a complete testing environment for Azure Firewall Premium e
## Test the firewall
-Now you can test IDPS, TLS inspection, Web filtering, and Web categories.
+Now you can test IDPS, TLS Inspection, Web filtering, and Web categories.
### Add firewall diagnostics settings
To collect firewall logs, you need to add diagnostics settings to collect firewa
### IDPS tests
-To test IDPS, you'll need to deploy your own internal Web server with an appropriate server certificate. For more information about Azure Firewall Premium certificate requirements, see [Azure Firewall Premium certificates](premium-certificates.md).
+To test IDPS, you should deploy your own internal test Web server with an appropriate server certificate. This test includes sending malicious traffic to a Web server, so it isn't advisable to do this to a public Web server. For more information about Azure Firewall Premium certificate requirements, see [Azure Firewall Premium certificates](premium-certificates.md).
You can use `curl` to control various HTTP headers and simulate malicious traffic.
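For example, a minimal sketch (the server address is a hypothetical placeholder for your internal test Web server; which signatures fire depends on the current IDPS ruleset):

```bash
# Send a request through the firewall with a spoofed User-Agent header
# that IDPS signatures can match, then check the firewall logs for an
# alert or deny entry.
curl -A "HaxerMen" http://10.0.1.4
```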
You should see the same results that you had with the HTTP tests.
Use the following steps to test TLS Inspection with URL filtering.
-1. Edit the firewall policy application rules and add a new rule called `AllowURL` to the `AllowWeb` rule collection. Configure the target URL `www.nytimes.com/section/world`, Source IP address **\***, Destination type **URL**, select **TLS inspection**, and protocols **http, https**.
+1. Edit the firewall policy application rules and add a new rule called `AllowURL` to the `AllowWeb` rule collection. Configure the target URL `www.nytimes.com/section/world`, Source IP address **\***, Destination type **URL**, select **TLS Inspection**, and protocols **http, https**.
3. When the deployment completes, open a browser on WorkerVM, go to `https://www.nytimes.com/section/world`, and validate that the HTML response is displayed as expected in the browser.
4. In the Azure portal, you can view the entire URL in the Application rule Monitoring logs:
Let's create an application rule to allow access to sports web sites.
1. From the portal, open your resource group and select **DemoFirewallPolicy**.
2. Select **Application Rules**, and then **Add a rule collection**.
3. For **Name**, type *GeneralWeb*, **Priority** *103*, **Rule collection group** select **DefaultApplicationRuleCollectionGroup**.
-4. Under **Rules** for **Name** type *AllowSports*, **Source** *\**, **Protocol** *http, https*, select **TLS inspection**, **Destination Type** select *Web categories*, **Destination** select *Sports*.
+4. Under **Rules** for **Name** type *AllowSports*, **Source** *\**, **Protocol** *http, https*, select **TLS Inspection**, **Destination Type** select *Web categories*, **Destination** select *Sports*.
5. Select **Add**.

   :::image type="content" source="media/premium-deploy/web-categories.png" alt-text="Sports web category":::
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
Previously updated : 12/06/2021 Last updated : 01/31/2022
You can migrate Azure Firewall Standard to Azure Firewall Premium to take advant
The following two examples show how to:

- Migrate an existing standard policy using Azure PowerShell
-- Migrate an existing standard firewall (with classic rules) to Azure Firewall Premium with a Premium policy.
+- Migrate an existing standard firewall (with classic rules) to Azure Firewall Premium with a Premium policy
+
+> [!IMPORTANT]
+> Upgrading a Standard Firewall deployed in Southeast Asia with Availability Zones is not currently supported.
If you use Terraform to deploy the Azure Firewall, you can use Terraform to migrate to Azure Firewall Premium. For more information, see [Migrate Azure Firewall Standard to Premium using Terraform](/azure/developer/terraform/firewall-upgrade-premium?toc=/azure/firewall/toc.json&bc=/azure/firewall/breadcrumb/toc.json).

## Performance considerations
-Performance is a consideration when migrating from the standard SKU. IDPS and TLS inspection are compute intensive operations. The premium SKU uses a more powerful VM SKU which scales to a maximum throughput of 30 Gbps comparable with the standard SKU. The 30 Gbps throughput is supported when configured with IDPS in alert mode. Use of IDPS in deny mode and TLS inspection increases CPU consumption. Degradation in max throughput might occur.
+Performance is a consideration when migrating from the standard SKU. IDPS and TLS inspection are compute-intensive operations. The premium SKU uses a more powerful VM SKU, which scales to a maximum throughput of 30 Gbps, comparable with the standard SKU. The 30-Gbps throughput is supported when configured with IDPS in alert mode. Use of IDPS in deny mode and TLS inspection increases CPU consumption. Degradation in max throughput might occur.
-The firewall throughput might be lower than 30 Gbps when you have one or more signatures set to **Alert and Deny** or application rules with **TLS inspection** enabled. Microsoft recommends customers perform full scale testing in their Azure deployment to ensure the firewall service performance meets your expectations.
+The firewall throughput might be lower than 30 Gbps when you have one or more signatures set to **Alert and Deny** or application rules with **TLS inspection** enabled. Microsoft recommends customers perform full-scale testing in their Azure deployment to ensure the firewall service performance meets your expectations.
## Downtime
You can also migrate existing Classic rules from Azure Firewall using Azure Powe
`Transform-Policy.ps1` is an Azure PowerShell script that creates a new Premium policy from an existing Standard policy.
-Given a standard firewall policy ID, the script transforms it to a Premium Azure Firewall policy. The script first connects to your Azure account, pulls the policy, transforms/adds various parameters, and then uploads a new Premium policy. The new premium policy is named `<previous_policy_name>_premium`. In case of child policy transformation, link to parent policy will remain.
+Given a standard firewall policy ID, the script transforms it to a Premium Azure Firewall policy. The script first connects to your Azure account, pulls the policy, transforms/adds various parameters, and then uploads a new Premium policy. The new premium policy is named `<previous_policy_name>_premium`. If it's a child policy transformation, a link to the parent policy will remain.
Usage example:
TransformPolicyToPremium -Policy $policy
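As a minimal sketch, assuming an existing Standard policy named `std-fw-policy` in resource group `rg-firewall` (both names are illustrative):

```azurepowershell
# Sign in, then fetch the existing Standard firewall policy.
Connect-AzAccount
$policy = Get-AzFirewallPolicy -Name "std-fw-policy" -ResourceGroupName "rg-firewall"

# Transform it; the new Premium policy is created as "std-fw-policy_premium".
TransformPolicyToPremium -Policy $policy
```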
## Migrate Azure Firewall using stop/start
-If you use Azure Firewall Standard SKU with Firewall Policy, you can use the Allocate/Deallocate method to migrate your Firewall SKU to Premium. This migration approach is supported on both VNet Hub and Secure Hub Firewalls. When you migrate a Secure Hub deployment, it will preserve the firewall public IP address.
+If you use Azure Firewall Standard SKU with firewall policy, you can use the Allocate/Deallocate method to migrate your Firewall SKU to Premium. This migration approach is supported on both VNet Hub and Secure Hub Firewalls. When you migrate a Secure Hub deployment, it will preserve the firewall public IP address.
The minimum Azure PowerShell version requirement is 6.5.0. For more information, see [Az 6.5.0](https://www.powershellgallery.com/packages/Az/6.5.0).
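The following is a rough sketch of the Allocate/Deallocate pattern for a VNet Hub firewall; all resource names are illustrative, and Secure Hub deployments have their own variations:

```azurepowershell
# Deallocate (stop) the firewall.
$azfw = Get-AzFirewall -Name "azfw" -ResourceGroupName "rg-firewall"
$azfw.Deallocate()
Set-AzFirewall -AzureFirewall $azfw

# Switch the SKU tier to Premium, then reallocate (start) the firewall
# with its original virtual network and public IP address.
$azfw.Sku.Tier = "Premium"
$vnet = Get-AzVirtualNetwork -Name "fw-vnet" -ResourceGroupName "rg-firewall"
$pip = Get-AzPublicIpAddress -Name "fw-pip" -ResourceGroupName "rg-firewall"
$azfw.Allocate($vnet, $pip)
Set-AzFirewall -AzureFirewall $azfw
```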
frontdoor Front Door Ddos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-ddos.md
Front Door is a massively scaled, globally distributed service. We have many cus
## For further protection
-If you require further protection, then you can enable [Azure DDoS Protection Standard](../security/fundamentals/ddos-best-practices.md#ddos-protection-standard) on the VNet where your back-ends are deployed. DDoS Protection Standard customers receive additional benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
+If you require further protection, then you can enable [Azure DDoS Protection Standard](../ddos-protection/ddos-protection-overview.md) on the VNet where your back-ends are deployed. DDoS Protection Standard customers receive additional benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
## Next steps
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
- Title: What are application templates in Azure IoT Central | Microsoft Docs
-description: Azure IoT Central application templates allow you to jump in to IoT solution development.
-- Previously updated : 01/18/2022----
-# What are application templates?
-
-Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
-
-Application templates consist of:
-- Sample dashboards
-- Sample device templates
-- Simulated devices producing real-time data
-- Pre-configured rules and jobs
-- Rich documentation including tutorials and how-tos
-
-You choose the application template when you create your application. You can't change the template an application uses after it's created.
-
-## Custom templates
-
-If you want to create your application from scratch, choose the **Custom application** template. The custom application template ID is `iotc-pnp-preview`.
-
-## Industry focused templates
-
-Azure IoT Central is an industry agnostic application platform. Application templates are industry focused examples available for these industries today:
--
-## Connected logistics
-
-Global logistics spending is expected to reach $10.6 trillion in 2020. Transportation of goods accounts for most of this spending and shipping providers are under intense competitive pressure and constraints.
-
-You can use IoT sensors to collect and monitor ambient conditions such as temperature, humidity, tilt, shock, light, and the location of a shipment. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
-
-The benefits of a connected logistics solution include:
-- Shipment monitoring with real-time tracing and tracking.
-- Shipment integrity with real-time ambient condition monitoring.
-- Security from theft, loss, or damage of shipments.
-- Geo-fencing, route optimization, fleet management, and vehicle analytics.
-- Forecasting for predictable departure and arrival of shipments.
-
-The following screenshots show the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
---
-To learn more, see the [Deploy and walk through a connected logistics application template](../retail/tutorial-iot-central-connected-logistics.md) tutorial.
-
-## Digital distribution center
-
-As manufacturers and retailers establish worldwide presences, their supply chains branch out and become more complex. Consumers now expect large selections of products to be available, and for those goods to arrive within one or two days of purchase. Distribution centers must adapt to these trends while overcoming existing inefficiencies.
-
-Today, reliance on manual labor means that picking and packing accounts for 55-65% of distribution center costs. Manual picking and packing are also typically slower than automated systems, and rapidly fluctuating staffing needs make it even harder to meet shipping volumes. This seasonal fluctuation results in high staff turnover and increases the likelihood of costly errors.
-
-Solutions based on IoT-enabled cameras can deliver transformational benefits by enabling a digital feedback loop. Data from across the distribution center leads to actionable insights that, in turn, result in better data.
-
-The benefits of a digital distribution center include:
-- Cameras monitor goods as they arrive and move through the conveyor system.
-- Automatic identification of faulty goods.
-- Efficient order tracking.
-- Reduced costs, improved productivity, and optimized usage.
-
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
--
-To learn more, see the [Deploy and walk through a digital distribution center application template](../retail/tutorial-iot-central-digital-distribution-center.md) tutorial.
-
-## In-store analytics - condition monitoring
-
-For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-
-You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
-
-Use the application template to:
-- Connect different kinds of IoT sensors to an IoT Central application instance.
-- Monitor and manage the health of the sensor network and any gateway devices in the environment.
-- Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
-- Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
-- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-
-The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
-
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
--
-To learn more, see the [Create an in-store analytics application in Azure IoT Central](../retail/tutorial-in-store-analytics-create-app.md) tutorial.
-
-## In-store analytics - checkout
-
-For some retailers, the checkout experience within their stores is a key differentiator from their competitors. Retailers want to deliver a smooth checkout experience within their stores to encourage customers to return.
-
-You can use the IoT Central in-store analytics checkout application template to build a solution that delivers insights from around the checkout zone of a store to retail staff. For example, sensors can provide information about queue lengths and average wait times for each checkout lane.
-
-Use the application template to:
-- Connect different kinds of IoT sensors to an IoT Central application instance.
-- Monitor and manage the health of the sensor network and any gateway devices in the environment.
-- Create custom rules around the checkout condition within a store to trigger alerts for retail staff.
-- Transform the checkout conditions within the store into insights that the retail store team can use to improve the customer experience.
-- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-
-The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard with lane occupancy data.
-
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
--
-To learn more, see the [Create an in-store analytics application in Azure IoT Central](../retail/tutorial-in-store-analytics-create-app.md) tutorial.
-
-## Smart inventory management
-
-Inventory is the stock of goods a retailer holds. Inventory management is critical to ensure the right product is in the right place at the right time. A retailer must balance the costs of storing too much inventory against the costs of not having sufficient items in stock to meet demand.
-
-IoT data generated from radio-frequency identification (RFID) tags, beacons, and cameras provide opportunities to improve inventory management processes. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
-
-The benefits of smart inventory management include:
-- Reducing the risk of items being out of stock and ensuring the desired customer service level.
-- In-depth analysis and insights into inventory accuracy in near real time.
-- Tools to help decide on the right amount of inventory to hold to meet customer orders.
-
-This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices.
-
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
--
-To learn more, see the [Deploy and walk through a smart inventory management application template](../retail/tutorial-iot-central-smart-inventory-management.md) tutorial.
-
-## Micro-fulfillment center
-
-In the increasingly competitive retail landscape, retailers constantly face pressure to close the gap between demand and fulfillment. A new trend that has emerged to address the growing consumer demand is to house inventory near the end customers and the stores they visit.
-
-The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
-
-The application template enables you to:
-- Seamlessly connect different kinds of IoT sensors such as robots or condition monitoring sensors to an IoT Central application instance.
-- Monitor and manage the health of the sensor network, and any gateway devices in the environment.
-- Create custom rules around the environmental conditions within a fulfillment center to trigger appropriate alerts.
-- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use.
-- Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.
-
-The following screenshot shows the out-of-the-box dashboard in the application template. The dashboard is fully customizable to meet your specific solution requirements:
--
-To learn more, see the [Deploy and walk through the micro-fulfillment center application template](../retail/tutorial-micro-fulfillment-center.md) tutorial.
-
-## Smart meter monitoring
-
- Smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The smart meter app template enables utilities and partners to monitor smart meter status and data, and to define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and used to develop custom solutions.
-
-App's key functionalities:
-- Meter sample device model
-- Meter info and live status
-- Meter readings such as energy, power, and voltages
-- Meter command samples
-- Built-in visualization and dashboards
-- Extensibility for custom solution development
-
-You can try the [smart meter monitoring app for free](https://apps.azureiotcentral.com/build/new/smart-meter-monitoring) without an Azure subscription or any commitments.
-
-After you deploy the app, you'll see the simulated meter data on the dashboard, as shown in the figure below. This template is a sample app that you can easily extend and customize for your specific use cases.
--
-## Solar panel monitoring
-
-The solar panel monitoring app enables utilities and partners to monitor solar panels, such as their energy generation and connection status, in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as updating firmware and other properties. The solar panel data can be set up to egress to other business applications and used to develop custom solutions.
-
-App's key functionalities:
-- Solar panel sample device model
-- Solar Panel info and live status
-- Solar energy generation and other readings
-- Command and control samples
-- Built-in visualization and dashboards
-- Extensibility for custom solution development
-
-You can try the [solar panel monitoring app for free](https://apps.azureiotcentral.com/build/new/solar-panel-monitoring) without an Azure subscription or any commitments.
-
-After you deploy the app, you'll see the simulated solar panel data within 1-2 minutes, as shown in the dashboard below. This template is a sample app that you can easily extend and customize for your specific use cases.
--
-## Water Quality Monitoring
-
-Traditional water quality monitoring relies on manual sampling techniques and field laboratory analysis, which is time-consuming and costly. By remotely monitoring water quality in real time, water quality issues can be managed before citizens are affected. Moreover, with advanced analytics, water utilities and environmental agencies can act on early warnings of potential water quality issues and plan water treatment in advance.
-
-Water Quality Monitoring app is an IoT Central app template to help you kickstart your IoT solution development and enable water utilities to digitally monitor water quality in smart cities.
--
-The App template consists of:
-- Sample dashboards
-- Sample water quality monitor device templates
-- Simulated water quality monitor devices
-- Pre-configured rules and jobs
-- Branding using white labeling
-
-Get started with the [Water Quality Monitoring application tutorial](../government/tutorial-water-quality-monitoring.md).
-
-## Water Consumption Monitoring
-
-Traditional water consumption tracking relies on water operators manually reading water consumption meters at the meter sites. More cities are replacing traditional meters with advanced smart meters that enable remote monitoring of consumption and remote control of valves to manage water flow. Water consumption monitoring coupled with digital feedback messages to citizens can increase awareness and reduce water consumption.
-
-Water Consumption Monitoring app is an IoT Central app template to help you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
--
-The Water Consumption Monitoring app template consists of pre-configured:
-- Sample dashboards
-- Sample water quality monitor device templates
-- Simulated water quality monitor devices
-- Pre-configured rules and jobs
-- Branding using white labeling
-
- Get started with the [Water Consumption Monitoring application tutorial](../government/tutorial-water-consumption-monitoring.md).
-
-## Connected Waste Management
-
-Connected Waste Management app is an IoT Central app template to help you kickstart your IoT solution development to enable smart cities to remotely monitor connected waste bins and maximize the efficiency of waste collection.
--
-The Connected Waste Management app template consists of pre-configured:
-- Sample dashboards
-- Sample connected waste bin device templates
-- Simulated connected waste bin devices
-- Pre-configured rules and jobs
-- Branding using white labeling
-
-Get started with the [Connected Waste Management application tutorial](../government/tutorial-connected-waste-management.md).
-
-## Continuous patient monitoring
-
-In the healthcare IoT space, Continuous Patient Monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous Patient Monitoring can be split into two major categories:
-
-1. **In-patient monitoring**: Using medical wearables and other devices in the hospital, care teams can monitor patient vital signs and medical conditions without having to send a nurse to check up on a patient multiple times a day. Care teams can understand the moment that a patient needs critical attention through notifications and prioritize their time effectively.
-1. **Remote patient monitoring**: By using medical wearables and patient reported outcomes (PROs) to monitor patients outside of the hospital, the risk of readmission can be lowered. Data from chronic disease patients and rehabilitation patients can be collected to ensure that patients are adhering to care plans and that alerts of patient deterioration can be surfaced to care teams before they become critical.
-
-This application template can be used to build solutions for both categories of Continuous Patient Monitoring. The benefits include:
-- Seamlessly connect different kinds of medical wearables to an IoT Central instance.
-- Monitor and manage the devices to ensure they remain healthy.
-- Create custom rules around device data to trigger appropriate alerts.
-- Export your patient health data to the Azure API for FHIR, a compliant data store.
-- Export the aggregated insights into existing or new business applications.
-
-
-## Next steps
-
-Now that you know what IoT Central application templates are, get started by [creating an IoT Central Application](quick-deploy-iot-central.md).
iot-central Concepts Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-best-practices.md
- Title: Device development best practices in Azure IoT Central | Microsoft Docs
-description: This article outlines best practices for device connectivity in Azure IoT Central
-- Previously updated : 12/22/2021-----
-# This article applies to device developers.
--
-# Best practices for device development
-
-These recommendations show how to implement devices to take advantage of the built-in disaster recovery and automatic scaling in IoT Central.
-
-The following list shows the high-level flow when a device connects to IoT Central:
-
-1. Use DPS to provision the device and get a device connection string.
-
-1. Use the connection string to connect to IoT Central's internal IoT Hub endpoint. Send data to and receive data from your IoT Central application.
-
-1. If the device gets connection failures, then depending on the error type, either retry the connection or reprovision the device.
-
-## Use DPS to provision the device
-
-To provision a device with DPS, use the scope ID, credentials, and device ID from your IoT Central application. To learn more about the credential types, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment) and [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment). To learn more about device IDs, see [Device registration](concepts-get-connected.md#device-registration).
-
-On success, DPS returns a connection string the device can use to connect to your IoT Central application. To troubleshoot provisioning errors, see [Check the provisioning status of your device](troubleshoot-connection.md#check-the-provisioning-status-of-your-device).
-
-The device can cache the connection string to use for later connections. However, the device must be prepared to [handle connection failures](#handle-connection-failures).
-
-## Connect to IoT Central
-
-Use the connection string to connect to IoT Central's internal IoT Hub endpoint. The connection lets you send telemetry to your IoT Central application, synchronize property values with your IoT Central application, and respond to commands sent by your IoT Central application.
-
-## Handle connection failures
-
-For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hub. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to the new IoT Hub endpoint.
-
-If the device gets any of the following errors when it connects, it should redo the provisioning step with DPS to get a new connection string. These errors mean the connection string the device is using is no longer valid:
-- Unreachable IoT Hub endpoint.
-- Expired security token.
-- Device disabled in IoT Hub.
-
-If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string the device is using is still valid, but transient conditions are stopping the device from connecting:
-- Operator blocked device.
-- Internal error 500 from the service.
-
-To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
-
-## Test failover capabilities
-
-The Azure CLI lets you test the failover capabilities of your device client code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. You can verify that the device failover worked by checking that the device is still sending telemetry and responding to commands in your IoT Central application.
-
-To run the failover test for your device, run the following command:
-
-```azurecli
-az iot central device manual-failover \
- --app-id {Application ID of your IoT Central application} \
- --device-id {Device ID of the device you're testing} \
- --ttl-minutes {How long to wait before moving the device back to its original IoT hub}
-```
-
-> [!TIP]
-> To find the **Application ID**, navigate to **Administration > Your application** in your IoT Central application.
-
-If the command succeeds, you see output that looks like the following:
-
-```output
-Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-{
- "hubIdentifier": "6bd4...bafa",
- "message": "Success! This device is now being failed over. You can check your device'ΓÇÖ's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
-}
-```
-
-To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
-
-You can now check to see that telemetry from the device is still reaching your IoT Central application.
-
-> [!TIP]
-> To see sample device code that handles failovers in various programing languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
-
-## Next steps
-
-Some suggested next steps are to:
-- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
-- Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md)
-- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
-- Learn how to [Define a new IoT device type in your Azure IoT Central application](./howto-set-up-template.md)
-- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-faq-apaas-paas.md
+
+ Title: Move from IoT Central to a PaaS solution | Microsoft Docs
+description: How do I move between aPaaS and PaaS solution approaches?
++ Last updated : 01/25/2022++++++
+# How do I move between aPaaS and PaaS solutions?
+
+When you begin your IoT journey, start with Azure IoT Central. IoT Central is the Microsoft application platform as a service (aPaaS) IoT offering. IoT Central is the fastest and easiest way to get started using Azure IoT. However, if you require a high level of customization, you can move from IoT Central and go lower in the stack to the Azure IoT platform as a service (PaaS) services. Use the *IoT Central migrator tool* to migrate devices seamlessly from IoT Central to a custom PaaS solution that uses the Device Provisioning Service (DPS) and IoT Hub service.
+
+## Move devices with the IoT Central migrator tool
+
+Use the migrator tool to move devices with no downtime from IoT Central to your own DPS instance. In a PaaS solution, you link a DPS instance to your IoT hub. The migrator tool disconnects devices from IoT Central and connects them to your PaaS solution. From this point forward, new devices are created in your IoT hub. Old device registrations remain in IoT Central so that you can fall back to IoT Central if something goes wrong.
+
+Download the [migrator tool from GitHub](https://github.com/Azure/iotc-migrator).
+
+## Minimize disruption
+
+To minimize disruption, you can migrate your devices in phases. The migrator tool uses device groups to move devices from IoT Central to your IoT hub. Divide your device fleet into device groups such as devices in Texas, devices in New York, and devices in the rest of the US. Then migrate each device group independently.
+
+Minimize business impact by following these steps:
+
+- Create the PaaS solution and run it in parallel with the IoT Central application.
+
+- Set up continuous data export in the IoT Central application and appropriate routes to the PaaS solution IoT hub. Transform both data channels and store the data in the same data lake.
+
+- Migrate the devices in phases and verify at each phase. If something doesn't go as planned, fail the devices back to IoT Central.
+
+- When you've migrated all the devices to the PaaS solution and fully exported your data from IoT Central, you can remove the devices from the IoT Central solution.
+
+After the migration, devices aren't automatically deleted from the IoT Central application so that you can fail them back from your PaaS solution. These devices continue to be billed, as IoT Central charges for all provisioned devices in the application. When you remove these devices from the IoT Central application, they're no longer billed. Eventually, remove the IoT Central application.
+
+## Firmware best practices
+
+So that you can seamlessly migrate devices from your IoT Central applications to a PaaS solution, follow these guidelines:
+
+- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. This simplifies the interoperability between an IoT PaaS solution and IoT Central.
+
+- The device must follow the [IoT Central data formats for telemetry, property, and commands](concepts-telemetry-properties-commands.md).
+
+- IoT Central uses DPS to provision the devices. The PaaS solution must also use DPS to provision the devices, as shown in the sketch after this list.
+
+- The updateable DPS pattern ensures that the device can move seamlessly between IoT Central applications and the PaaS solution without any downtime.
+
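+For example, a hedged CLI sketch of creating a DPS instance and linking it to the IoT hub used by the PaaS solution (resource names are illustrative, and flag names can vary between CLI versions):
+
+```azurecli
+az iot dps create --name my-dps --resource-group myResourceGroup --location eastus
+az iot dps linked-hub create --dps-name my-dps --resource-group myResourceGroup \
+  --connection-string "$(az iot hub connection-string show --hub-name my-hub -o tsv)" \
+  --location eastus
+```
+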
+## Move existing data out of IoT Central
+
+You can configure IoT Central to continuously export telemetry and property values. Export destinations are data stores such as Azure Data Lake, Event Hubs, and Webhooks. You can export device templates using either the IoT Central UI or the REST API. The REST API lets you export the users in an IoT Central application.
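+
+For example, a sketch of fetching a single device template with the data plane REST API (the app subdomain, device template ID, and API token are placeholders):
+
+```bash
+curl -H "Authorization: <api-token>" \
+  "https://myapp.azureiotcentral.com/api/deviceTemplates/dtmi:example:thermostat;1?api-version=1.0"
+```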
+
+## Next steps
+
+Now that you've learned about moving from aPaaS to PaaS solutions, a suggested next step is to explore the [IoT Central migrator tool](https://github.com/Azure/iotc-migrator).
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-faq-extend.md
+
+ Title: Extend IoT Central | Microsoft Docs
+description: How do I extend IoT Central if it's missing something I need?
++ Last updated : 01/05/2022++++++
+# How do I extend IoT Central if it's missing something I need?
+
+Use the following extension points to expand the built-in functionality of IoT Central:
+
+- Process your IoT data in other services or applications by using the IoT Central data export capabilities.
+- Trigger business flows and activities by using IoT Central rules.
+- Interact with IoT Central programmatically by using the IoT Central REST APIs.
+
+## Export data
+
+To extend IoT Central's built-in rules and analytics capabilities, use the data export capability to continuously stream data from your devices to other services for processing. The data export capability enables extension scenarios such as:
+
+- Enrich and transform your IoT data to generate advanced visualizations that provide insights.
+- Extract business metrics and use artificial intelligence and machine learning to derive business insights from your IoT data.
+- Monitor and diagnose millions of connected IoT devices.
+- Combine your IoT data with other business data to build dashboards and reports.
+
+To learn more, see [IoT Central data integration guide](overview-iot-central-solution-builder.md).
+
+## Rules
+
+You can create rules in IoT Central that trigger actions when specified conditions are met. Conditions are evaluated based on data from your connected IoT devices. Actions include sending messages to other cloud services or calling a webhook endpoint. Rules enable extension scenarios such as:
+
+- Notifying operators in other systems.
+- Starting business processes or flows.
+- Monitoring alerts on a custom dashboard.
+
+To learn more, see [Configure rules](howto-configure-rules.md).
+
+## REST API
+
+The *data plane* REST API lets you manage entities in your IoT Central application programmatically. Entities include devices, users, and roles. The preview data plane REST API lets you query the data from your connected devices and manage a wider selection of entities such as jobs and data exports.
+
+The *control plane* REST API lets you create and manage IoT Central applications.
+
+The REST APIs enable extension scenarios such as:
+
+- Programmatic management of your IoT Central applications.
+- Tight integration with other applications.
+
+To learn more, see [Manage an IoT Central application with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
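+
+For example, a minimal sketch that lists the devices in an application with the data plane API (the subdomain and API token are placeholders):
+
+```bash
+curl -H "Authorization: <api-token>" \
+  "https://myapp.azureiotcentral.com/api/devices?api-version=1.0"
+```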
+
+## Next steps
+
+Now that you've learned about the IoT Central extensibility points, the suggested next step is to review the [IoT Central data integration guide](overview-iot-central-solution-builder.md).
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-iot-central-application.md
The _subdomain_ you choose uniquely identifies your application. The subdomain i
The application template you choose determines the initial contents of your application, such as dashboards and device templates. For a custom application, use `iotc-pnp-preview` as the template ID.
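
For example, a sketch of creating a custom application with the Azure CLI (the resource group, names, and SKU are illustrative):

```azurecli
az iot central app create \
  --resource-group myResourceGroup \
  --name my-iotc-app \
  --subdomain my-iotc-app \
  --sku ST2 \
  --template iotc-pnp-preview \
  --display-name "My custom app"
```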
-To learn more about custom and industry-focused application templates, see [What are application templates?](concepts-app-templates.md).
-
### Billing information

If you choose one of the standard plans, you need to provide billing information:
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
# Define a new IoT device type in your Azure IoT Central application
-A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an [Azure IoT Central application](concepts-app-templates.md).
+A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an Azure IoT Central application.
This article describes how to create a device template in IoT Central. For example, you can create a device template for a sensor that sends telemetry, such as temperature, and properties, such as location. From this device template, an operator can create and connect real devices.
The following steps show how to use this feature:
1. IoT Central generates a template based on the data format shown on the **Data preview** page and assigns the device to it. You can make further changes to the device template, such as renaming it or adding capabilities, on the **Device templates** page:
- :::image type="content" source="media/howto-set-up-template/infer-model-3.png" alt-text="Screenshot that shows how to rename the auto-generated device template.":::
+ :::image type="content" source="media/howto-set-up-template/infer-model-3.png" alt-text="Screenshot that shows how to rename the autogenerated device template.":::
## Manage a device template
Select **+ Add capability** to add capability to an interface or component. For
#### Telemetry
-Telemetry is a stream of values sent from the device, typically from a sensor. For example, a sensor might report the ambient temperature as shown below:
+Telemetry is a stream of values sent from the device, typically from a sensor. For example, a sensor might report the ambient temperature as shown in the following screenshot:
:::image type="content" source="media/howto-set-up-template/telemetry.png" alt-text="How to add telemetry":::
The following table shows the configuration settings for a telemetry capability:
#### Properties

Properties represent point-in-time values. You can set writable properties from IoT Central.
-For example, a device can use a writable property to let an operator set the target temperature as shown below:
+For example, a device can use a writable property to let an operator set the target temperature as shown in the following screenshot:
:::image type="content" source="media/howto-set-up-template/property.png" alt-text="How to add property":::
The following table shows the configuration settings for a property capability:
#### Commands
-You can call device commands from IoT Central. Commands optionally pass parameters to the device and receive a response from the device. For example, you can call a command to reboot a device in 10 seconds as shown below:
+You can call device commands from IoT Central. Commands optionally pass parameters to the device and receive a response from the device. For example, you can call a command to reboot a device in 10 seconds as shown in the following screenshot:
:::image type="content" source="media/howto-set-up-template/command.png" alt-text="How to add commands":::
The following table shows the configuration settings for a cloud property:
## Customizations
-Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. For example, you can change the display name and units of a property as shown below:
+Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. For example, you can change the display name and units of a property as shown in the following screenshot:
:::image type="content" source="media/howto-set-up-template/customize.png" alt-text="How to do customizations":::
The following table shows the configuration settings for customizations:
|Display unit | Override from model. |
|Comment | Override from model. |
|Description | Override from model. |
-|Color | IoT Central specific option. |
-|Min value | Set minimum value - IoT Central specific option. |
-|Max value | Set maximum value - IoT Central specific option. |
-|Decimal places | IoT Central specific option. |
-|Initial value | Commands only IoT Central specific value - default parameter value. |
+|Color | IoT Central-specific option. |
+|Min value | Set minimum value - IoT Central-specific option. |
+|Max value | Set maximum value - IoT Central-specific option. |
+|Decimal places | IoT Central-specific option. |
+|Initial value | Commands only. IoT Central-specific option that sets the default parameter value. |
## Views
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central administrator guide
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the administrator role in IoT Central.
+ Title: Azure IoT Central application management guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to manage your IoT Central application. Application management includes users, organization, and security.
Previously updated : 12/22/2021 Last updated : 01/04/2022
# This article applies to administrators.
-# IoT Central administrator guide
+# IoT Central application management guide
An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for administrators who manage IoT Central applications.
-In IoT Central, an administrator:
+IoT Central application administration includes the following tasks:
-- Manages users and roles in the application.
-- Creates and manages organizations.
-- Manages security such as device authentication.
-- Configures application settings.
-- Upgrades applications.
-- Exports and shares applications.
-- Monitors application health.
+- Manage users and roles in the application.
+- Create and manage organizations.
+- Manage security such as device authentication.
+- Configure application settings.
+- Upgrade applications.
+- Export and share applications.
+- Monitor application health.
## Users and roles
To learn more, see [Manage users and roles in your IoT Central application](howt
## Organizations
-Organizations let you define a hierarchy that you use to manage which users can see which devices in your IoT Central application. The user's role determines their permissions over the devices they see, and the experiences they can access.
+To manage which users see which devices in your IoT Central application, use an _organization_ hierarchy. You define an organization in your application.
+The user's role in the application determines their permissions over the devices they can see.
To learn more, see [Create an IoT Central organization](howto-create-organizations.md).

## Application security
-Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. The administrator manages the group certificates or keys that the device credentials are derived from.
+Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. The administrator manages the group certificates or keys that these device credentials are derived from.
To learn more, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment), [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment), and [How to roll X.509 device certificates](how-to-connect-devices-x509.md).
To learn more, see [Monitor application health](howto-manage-iot-central-from-po
## Monitor connected IoT Edge devices
-To learn how to remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
+To learn how to monitor your IoT Edge fleet remotely using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
## Tools
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-api-tour.md
+
+ Title: Take a tour of the Azure IoT Central API | Microsoft Docs
+description: Become familiar with the key areas of the Azure IoT Central REST API. Use the API to create, manage, and use your IoT solution from client applications.
++ Last updated : 01/25/2022+++++
+# Take a tour of the Azure IoT Central API
+
+This article introduces you to Azure IoT Central REST API. Use the API to create client applications that can create, manage, and use an IoT Central application and its connected devices. The extensibility surface enabled by the IoT Central REST API lets you integrate IoT insights and device management capabilities into your existing dashboards and business applications.
+
+The REST API operations are grouped into:
+
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/1.0dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.1-previewdataplane/api-tokens) versions of the data plane API.
+- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal.
+
+## Data plane operations
+
+Version 1.0 of the data plane API lets you manage the following resources in your IoT Central application:
+
+- API tokens
+- Device templates
+- Devices
+- Roles
+- Users
+
+The devices API also lets you [query telemetry and property values from your devices](howto-query-with-rest-api.md).
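+
+For example, a sketch of a query request against the preview API (the subdomain, API token, and device template ID are placeholders):
+
+```bash
+curl -X POST -H "Authorization: <api-token>" -H "Content-Type: application/json" \
+  -d '{"query": "SELECT $id, temperature FROM dtmi:example:thermostat;1"}' \
+  "https://myapp.azureiotcentral.com/api/query?api-version=1.1-preview"
+```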
+
+To get started with the data plane APIs, see [Explore the IoT Central APIs](/learn/modules/manage-iot-central-apps-with-rest-api/).
+
+## Control plane operations
+
+Version 2021-06-01 of the control plane API lets you manage the IoT Central applications in your Azure subscription. To learn more, see the [Control plane overview](/rest/api/iotcentral/2021-06-01controlplane/apps).
+
+## Next steps
+
+Now that you have an overview of Azure IoT Central and are familiar with the capabilities of the IoT Central REST API, the suggested next step is to complete the [Explore the IoT Central APIs](/learn/modules/manage-iot-central-apps-with-rest-api/) Learn module.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
Title: Device development for Azure IoT Central | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of developing devices to connect to your IoT Central application. Devices use telemetry to send streaming data and properties to report device state. Iot Central can set device state using writable properties and call commands on a device.
+ Title: Azure IoT Central device connectivity guide | Microsoft Docs
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to connect IoT devices to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. IoT Central can set device state using writable properties and call commands on a device. This article outlines best practices for device connectivity.
Previously updated : 08/30/2021 Last updated : 01/28/2022
# This article applies to device developers.
-# IoT Central device development guide
+# IoT Central device connectivity guide
-An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is intended for device developers who implement code to run on devices that connect to IoT Central.
+An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for device developers who implement the code to run on devices that connect to IoT Central.
-Devices interact with an IoT Central application using the following primitives:
+Devices interact with an IoT Central application by using the following primitives:
- _Telemetry_ is data that a device sends to IoT Central. For example, a stream of temperature values from an onboard sensor.
- _Properties_ are state values that a device reports to IoT Central. For example, the current firmware version of the device. You can also have writable properties that IoT Central can update on the device, such as a target temperature.
- _Commands_ are called from IoT Central to control the behavior of a device. For example, your IoT Central application might call a command to reboot a device.
-A solution builder is responsible for configuring dashboards and device views in the IoT Central web UI to visualize telemetry, manage properties, and call commands.
-
## Types of device

The following sections describe the main types of device you can connect to an IoT Central application:
An IoT device is a standalone device that connects directly to IoT Central. An IoT devi
### IoT Edge device
-An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules can process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as downstream devices. Scenarios that use IoT Edge devices include:
+An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as downstream devices. Scenarios that use IoT Edge devices include:
- Aggregate or filter telemetry before it's sent to IoT Central. This approach can help to reduce the costs of sending data to IoT Central.
- Enable devices that can't connect directly to IoT Central to connect through the IoT Edge device. For example, a downstream device might use Bluetooth to connect to the IoT Edge device, which then connects over the internet to IoT Central.
To learn more, see [Get connected to Azure IoT Central](./concepts-get-connected
### Security
-The connection between a device and your IoT Central application is secured using either [shared access signatures](./concepts-get-connected.md#sas-group-enrollment) or industry-standard [X.509 certificates](./concepts-get-connected.md#x509-group-enrollment).
+The connection between a device and your IoT Central application is secured by using either [shared access signatures](./concepts-get-connected.md#sas-group-enrollment) or industry-standard [X.509 certificates](./concepts-get-connected.md#x509-group-enrollment).
### Communication protocols Communication protocols that a device can use to connect to IoT Central include MQTT, AMQP, and HTTPS. Internally, IoT Central uses an IoT hub to enable device connectivity. For more information about the communication protocols that IoT Hub supports for device connectivity, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
+## Connectivity patterns
+
+Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway. To learn more about the device connectivity options available to device developers, see:
+
+- [Get connected to Azure IoT Central](concepts-get-connected.md)
+- [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md)
+
+A solution design must take into account the required device connectivity pattern. These patterns fall into two broad categories. Both categories include devices sending telemetry to your IoT Central application:
+
+### Persistent connections
+
+Persistent connections are required when your solution needs *command and control* capabilities. In command and control scenarios, the IoT Central application sends commands to devices to control their behavior in near real time. Persistent connections maintain a network connection to the cloud and reconnect whenever there's a disruption. Use either the MQTT or the AMQP protocol for persistent device connections to IoT Central.
+
+The following options support persistent device connections:
+
+- Use the IoT device SDKs to connect devices and send telemetry:
+
+ The device SDKs enable both the MQTT and AMQP protocols for creating persistent connections to IoT Central. To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+
+- Connect devices over a local network to an IoT Edge device that forwards telemetry to IoT Central:
+
+ An IoT Edge device can make a persistent connection to IoT Central. For devices that can't connect to the internet or that require network isolation, use an IoT Edge device as a local gateway. The gateway forwards device telemetry to IoT Central. This option enables command and control of the downstream devices connected to the IoT Edge device.
+
+ To learn more, see [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md).
+
+- Use IoT Central Device Bridge to connect devices that use a custom protocol:
+
+ Some devices use a protocol or encoding, such as LWM2M or CoAP, that IoT Central doesn't currently support. IoT Central Device Bridge acts as a translator that forwards telemetry to IoT Central. Because the bridge maintains a persistent connection, this option enables command and control of the devices connected to the bridge.
+
+ To learn more, see the [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge) GitHub repository.
+
+### Ephemeral connections
+
+Ephemeral connections are brief connections for devices to send telemetry to your IoT Central application. After a device sends the telemetry, it drops the connection. The device reconnects when it has more telemetry to send. Ephemeral connections aren't suitable for command and control scenarios.
+
+The following options support ephemeral device connections:
+
+- Connect devices and send telemetry by using HTTP:
+
+ IoT Central supports device clients that use the HTTP API to send telemetry. A sketch of this option follows this list. To learn more, see the [Send Device Event](/rest/api/iothub/device/send-device-event) API documentation.
+
+ > [!NOTE]
+ > Use DPS to provision and register your device with IoT Central before you use the HTTP API to send telemetry.
+
+- Use IoT Central Device Bridge in stateless mode to connect devices:
+
+ Deploy IoT Central Device Bridge as an Azure Function. The function accepts incoming telemetry data as HTTP requests and forwards it to IoT Central. IoT Central Device Bridge integrates with DPS and automatically handles device provisioning for you.
+
+ To learn more, see the [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge) GitHub repository.
+
+- Use IoT Central Device Bridge in stateless mode to connect external clouds:
+
+ Use Azure IoT Central Device Bridge to forward messages to IoT Central from other IoT clouds, such as SigFox, Particle, and The Things Network.
+
+ To learn more, see the [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge) GitHub repository.
+
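+To make the HTTP option concrete, here's a hedged Python sketch that sends one telemetry message. It assumes the `requests` package; the hub hostname, device ID, and key are placeholder values, and the SAS token is built with the standard IoT Hub signing scheme:
+
+```python
+import base64
+import hashlib
+import hmac
+import time
+import urllib.parse
+
+import requests
+
+# Placeholder values for illustration only.
+HUB = "my-hub.azure-devices.net"
+DEVICE_ID = "my-device"
+DEVICE_KEY = "base64-device-symmetric-key"
+
+def make_sas_token(uri, key, ttl=3600):
+    # Sign "<url-encoded-uri>\n<expiry>" with the base64-decoded device key.
+    expiry = int(time.time()) + ttl
+    to_sign = f"{urllib.parse.quote_plus(uri)}\n{expiry}".encode("utf-8")
+    signature = base64.b64encode(
+        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
+    ).decode("utf-8")
+    return (f"SharedAccessSignature sr={urllib.parse.quote_plus(uri)}"
+            f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}")
+
+# POST one message, then drop the connection: an ephemeral pattern.
+response = requests.post(
+    f"https://{HUB}/devices/{DEVICE_ID}/messages/events?api-version=2020-03-13",
+    json={"temperature": 21.5},
+    headers={"Authorization": make_sas_token(f"{HUB}/devices/{DEVICE_ID}", DEVICE_KEY)},
+)
+response.raise_for_status()  # Expect 204 No Content on success.
+```
+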
+### Data transformation and custom computation on ingress
+
+Some scenarios require device telemetry augmented with data from external systems or stores. Augmenting telemetry before it reaches IoT Central enables features such as dashboards and rules to use the augmented data.
+
+Some scenarios require you to transform telemetry before it reaches IoT Central. For example, transforming telemetry from legacy formats.
+
+The following options are available for custom transformations or computations before IoT Central ingests the telemetry:
+
+- Use IoT Edge:
+
+ Use custom modules in IoT Edge for custom transformations and computations. Use IoT Edge when your devices use the Azure IoT device SDKs. A sketch of a transformation module follows this list.
+
+- Use IoT Central Device Bridge:
+
+ Use IoT Central Device Bridge adapters for custom transformations and computations.
+
+To learn more, see [Transform data for IoT Central](howto-transform-data.md).
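+
+As a sketch of the IoT Edge option, the following hypothetical custom module transforms messages before forwarding them upstream. It assumes the `azure-iot-device` package and an Edge deployment whose routes deliver device messages to the module's `input1` and forward its `output1` to the hub:
+
+```python
+import json
+import time
+
+from azure.iot.device import IoTHubModuleClient, Message
+
+client = IoTHubModuleClient.create_from_edge_environment()
+
+def message_handler(message):
+    # Example transformation: convert a hypothetical legacy "temp" field in
+    # Fahrenheit to a "temperature" field in Celsius before forwarding.
+    payload = json.loads(message.data)
+    transformed = {"temperature": (payload["temp"] - 32) * 5 / 9}
+    client.send_message_to_output(Message(json.dumps(transformed)), "output1")
+
+client.on_message_received = message_handler
+
+# Keep the module alive so the handler stays registered.
+while True:
+    time.sleep(60)
+```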
+## Implement the device
+
+An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
-To learn more about best practices you edit a model, see [Edit an existing device template](howto-edit-device-template.md).
+To learn more, see [Edit an existing device template](howto-edit-device-template.md).
> [!TIP]
> You can export the model from IoT Central as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
The [Azure IoT device SDKs](#languages-and-sdks) include support for the IoT Plu
### Device model
-A device model is defined using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). This language lets you define:
+A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double.
- The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.
A device model is defined using the [DTDL](https://github.com/Azure/opendigitalt
A DTDL model can be a _no-component_ or a _multi-component_ model:

- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.
-- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more additional nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
+- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
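
As an illustration, a minimal no-component model might look like the following sketch; the `dtmi:com:example:Thermostat;1` ID and its contents are hypothetical:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
    { "@type": "Command", "name": "restart" }
  ]
}
```
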
For some sample code, see [Create and connect a client application](./tutorial-c
For more information about the supported languages and SDKs, see [Understand and use Azure IoT Hub device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks).
+## Best practices
+
+These recommendations show how to implement devices to take advantage of the built-in disaster recovery and automatic scaling in IoT Central.
+
+The following steps show the high-level flow when a device connects to IoT Central:
+
+1. Use DPS to provision the device and get a device connection string.
+
+1. Use the connection string to connect to IoT Central's internal IoT Hub endpoint. Send data to and receive data from your IoT Central application.
+
+1. If the device gets connection failures, then depending on the error type, either retry the connection or reprovision the device.
+
+### Use DPS to provision the device
+
+To provision a device with DPS, use the scope ID, credentials, and device ID from your IoT Central application. To learn more about the credential types, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment) and [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment). To learn more about device IDs, see [Device registration](concepts-get-connected.md#device-registration).
+
+On success, DPS returns a connection string the device can use to connect to your IoT Central application. To troubleshoot provisioning errors, see [Check the provisioning status of your device](troubleshoot-connection.md#check-the-provisioning-status-of-your-device).
+
+The device can cache the connection string to use for later connections. However, the device must be prepared to [handle connection failures](#handle-connection-failures).
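+
+As an illustration, the following Python sketch provisions a device through DPS and then connects to the assigned hub. It assumes the `azure-iot-device` package; the ID scope, device ID, and key are placeholder values:
+
+```python
+from azure.iot.device import IoTHubDeviceClient, ProvisioningDeviceClient
+
+# Placeholder values from your IoT Central application.
+ID_SCOPE = "0ne00000000"
+DEVICE_ID = "my-device"
+DEVICE_KEY = "device-specific-symmetric-key"
+
+provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
+    provisioning_host="global.azure-devices-provisioning.net",
+    registration_id=DEVICE_ID,
+    id_scope=ID_SCOPE,
+    symmetric_key=DEVICE_KEY,
+)
+
+result = provisioning_client.register()
+if result.status == "assigned":
+    # Connect to the internal IoT hub that DPS assigned. Cache these
+    # details for later connections, but be ready to reprovision.
+    client = IoTHubDeviceClient.create_from_symmetric_key(
+        symmetric_key=DEVICE_KEY,
+        hostname=result.registration_state.assigned_hub,
+        device_id=DEVICE_ID,
+    )
+    client.connect()
+```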
+
+### Handle connection failures
+
+For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hub. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to the new IoT Hub endpoint.
+
+If the device gets any of the following errors when it connects, it should reprovision itself with DPS to get a new connection string. These errors mean the connection string is no longer valid:
+
+- Unreachable IoT Hub endpoint.
+- Expired security token.
+- Device disabled in IoT Hub.
+
+If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string is still valid, but transient conditions are stopping the device from connecting:
+
+- Operator blocked device.
+- Internal error 500 from the service.
+
+To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
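+
+The following Python sketch illustrates one way to implement this policy. The `provision` and `connect` callables are hypothetical placeholders, and the mapping of SDK exception types onto the two error categories above is an assumption:
+
+```python
+import time
+
+from azure.iot.device.exceptions import (
+    ClientError,
+    ConnectionFailedError,
+    CredentialError,
+)
+
+def connect_with_failover(provision, connect, max_delay=300):
+    delay = 1
+    while True:
+        try:
+            return connect()
+        except (ConnectionFailedError, CredentialError):
+            # Unreachable endpoint, expired token, or disabled device: the
+            # cached connection string is stale, so reprovision through DPS.
+            provision()
+            delay = 1
+        except ClientError:
+            # Transient conditions such as a blocked device or a 500 from
+            # the service: back off, then retry with the same credentials.
+            time.sleep(delay)
+            delay = min(delay * 2, max_delay)
+```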
+
+### Test failover capabilities
+
+The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
+
+To run the failover test for your device, run the following command:
+
+```azurecli
+az iot central device manual-failover \
+ --app-id {Application ID of your IoT Central application} \
+ --device-id {Device ID of the device you're testing} \
+ --ttl-minutes {How long to wait before moving the device back to its original IoT hub}
+```
+
+> [!TIP]
+> To find the **Application ID**, navigate to **Administration > Your application** in your IoT Central application.
+
+If the command succeeds, you see output that looks like the following:
+
+```output
+Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+{
+ "hubIdentifier": "6bd4...bafa",
+ "message": "Success! This device is now being failed over. You can check your device's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
+}
+```
+
+To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
+
+You can now check that telemetry from the device still reaches your IoT Central application.
+
+> [!TIP]
+> To see sample device code that handles failovers in various programming languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
+## Next steps
+
+If you're a device developer and want to dive into some code, the suggested next step is to [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md).
-If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
+To learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
Title: Azure IoT Central operator guide
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of the operator role in IoT Central.
+ Title: Azure IoT Central device management guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to manage the IoT devices connected to your IoT Central application.
Previously updated : 12/19/2021 Last updated : 01/04/2022
# This article applies to operators.
-# IoT Central operator guide
+# IoT Central device management guide
-An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for operators who use an IoT Central application to manage IoT devices.
+An IoT Central application lets you monitor and manage millions of devices throughout their life cycle.
-An operator:
+IoT Central lets you complete device management tasks such as:
-- Monitors and manages the devices connected to the application.-- Troubleshoots and remediates issues with devices.-- Provisions new devices.
+- Monitor and manage the devices connected to the application.
+- Troubleshoot and remediate issues with devices.
+- Provision new devices.
## Monitor and manage devices

:::image type="content" source="media/overview-iot-central-operator/simulated-telemetry.png" alt-text="Screenshot that shows a device view":::
-To monitor devices, an operator can use the device views defined by the solution builder as part of the device template. These views can show device telemetry and property values. An example is the **Overview** view shown on the previous screenshot.
+To monitor devices, use the custom device views defined by a solution builder. These views can show device telemetry and property values. An example is the **Overview** view shown in the previous screenshot.
-For more detailed information, an operator can use device groups and the built-in analytics features. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
+For more detailed information, use device groups and the built-in analytics features. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
-To manage individual devices, an operator can use device views to set device and cloud properties, and call device commands. Examples, include the **Manage device** and **Commands** views in the previous screenshot.
+To manage individual devices, use device views to set device and cloud properties, and call device commands. Examples include the **Manage device** and **Commands** views in the previous screenshot.
-To manage devices in bulk, an operator can create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
+To manage devices in bulk, create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
-If your IoT Central application uses *organizations*, an administrator controls which devices in the application you have access to.
+If your IoT Central application uses *organizations*, an administrator controls which devices you have access to.
## Troubleshoot and remediate issues
-The operator is responsible for the health of the application and its devices. The [troubleshooting guide](troubleshoot-connection.md) helps operators diagnose and remediate common issues. An operator can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved.
+The [troubleshooting guide](troubleshoot-connection.md) helps you to diagnose and remediate common issues. You can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved.
## Add and remove devices
-The operator can add and remove devices to your IoT Central application either individually or in bulk. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices-individually.md).
+You can add and remove devices in your IoT Central application either individually or in bulk. To learn more, see:
+
+- [Manage individual devices in your Azure IoT Central application](howto-manage-devices-individually.md).
+- [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
## Personalize
-Operators can create personal dashboards in an IoT Central application that contain links to the resources they use most often. To learn more, see [Manage dashboards](howto-manage-dashboards.md).
+Create personal dashboards in an IoT Central application that contain links to the resources you use most often. To learn more, see [Manage dashboards](howto-manage-dashboards.md).
## Next steps
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-solution-builder.md
Title: Solution building for Azure IoT Central | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This article provides an overview of building integrated solutions with IoT Central.
+ Title: Azure IoT Central data integration guide | Microsoft Docs
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to integrate your IoT Central application with other services to extend its capabilities.
Previously updated : 12/21/2021 Last updated : 01/04/2022
# This article applies to solution builders.
-# IoT Central solution builder guide
+# IoT Central data integration guide
-An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for solution builders who use IoT Central to build integrated solutions. An IoT Central application lets you manage devices, analyze device telemetry, and integrate with other back-end services.
+Azure IoT Central is an application platform that:
-A solution builder:
+- Includes rich functionality such as device monitoring and management at scale.
+- Provides many built-in features that help you to reduce the burden and cost of developing an IoT solution.
+- Has extensibility and integration points that let you use its features and capabilities in your wider solution.
-- Configures dashboards and views in the IoT Central web UI.-- Uses the built-in rules and analytics tools to derive business insights from the connected devices.-- Uses the data export and rules capabilities to integrate IoT Central with other back-end services.
+A typical IoT solution:
-## Configure dashboards and views
+- Enables IoT devices to connect to your solution and send it data.
+- Manages and secures the connected devices and their data.
+- Extracts business value from your device data.
+- Is composed of multiple services and applications.
-An IoT Central application can have one or more dashboards that operators use to view and interact with the application. As a solution builder, you can customize the default dashboard and create specialized dashboards:
-- To view some examples of customized dashboards, see [Industry focused templates](concepts-app-templates.md#industry-focused-templates).-- To learn more about dashboards, see [Create and manage multiple dashboards](howto-manage-dashboards.md) and [Configure the application dashboard](howto-manage-dashboards.md).
+When you use IoT Central to create an IoT solution, tasks include:
-When a device connects to an IoT Central, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. As a solution developer, you can create and customize the available views for a device type. To learn more, see [Add views](howto-set-up-template.md#views).
+- Configure data transformations to make it easier to extract business value from your data.
+- Configure dashboards and views in the IoT Central web UI.
+- Use the built-in rules and analytics tools to derive business insights from the connected devices.
+- Use the data export, rules capabilities, and APIs to integrate IoT Central with other services and applications.
-## Use built-in rules and analytics
+## Transform data at ingress
+
+Devices may send complex telemetry that needs to be simplified before it's used in IoT Central or exported. In some scenarios, you need to normalize the telemetry from different devices so that you can display and process the telemetry consistently. To learn more, see [Map telemetry on ingress to IoT Central](howto-map-data.md).
+
+## Extract business value
+
+IoT Central provides a rich platform to help you extract business value from your IoT data. IoT Central has many built-in features that you can use to gain insights and take action on your IoT data. However, some IoT solution scenarios need more specialized business processes outside of IoT Central to extract value from your IoT data.
+
+Built-in features of IoT Central you can use to extract business value include:
+
+- Configure dashboards and views:
+
+ An IoT Central application can have one or more dashboards that operators use to view and interact with the application. You can customize the default dashboard and create specialized dashboards:
+
+ - To view some examples of customized dashboards, see [Industry focused templates](../retail/tutorial-in-store-analytics-create-app.md).
+
+ - To learn more about dashboards, see [Create and manage multiple dashboards](howto-manage-dashboards.md) and [Configure the application dashboard](howto-manage-dashboards.md).
+
+ - When a device connects to an IoT Central application, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).
+
+- Use built-in rules and analytics:
+
+ You can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. To learn more about rules, see:
+
+ - [Tutorial: Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md)
+ - [Configure rules](howto-configure-rules.md)
+
+ IoT Central has built-in analytics capabilities that an operator can use to analyze the data flowing from the connected devices. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
+
+Scenarios that process IoT data outside of IoT Central to extract business value include:
-A solution developer can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. To learn more about rules, see:
+- Compute, enrich, and transform:
+
+ IoT Central lets you capture, transform, manage, and visualize IoT data. Sometimes, it's useful to enrich or transform your IoT data using external data sources. You can then feed the enriched data back into IoT Central.
-- [Tutorial: Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md)-- [Configure rules](howto-configure-rules.md)
+ For example, use the IoT Central continuous data export feature to trigger an Azure function. The function enriches captured device telemetry and pushes the enriched data back into IoT Central while preserving timestamps. A sketch of this pattern follows this list.
-IoT Central has built-in analytics capabilities that an operator can use to analyze the data flowing from the connected devices. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
+- Extract business metrics and use artificial intelligence (AI) and machine learning (ML):
+
+ Use IoT data to calculate common business metrics such as *overall equipment effectiveness* (OEE) and *overall process effectiveness* (OPE). You can also use IoT data to enrich your existing AI and ML assets. For example, IoT Central can help to capture the data you need to build, train, and deploy your models.
+
+ Use the IoT Central continuous data export feature to publish captured IoT data into an Azure data lake. Then use a connected Azure Databricks workspace to compute OEE and OPE. Pipe the same data to Azure ML or Azure Synapse to use their machine learning capabilities.
+
+- Streaming computation, monitoring, and diagnostics:
+
+ IoT Central provides a scalable and reliable infrastructure to capture streaming data from millions of connected devices. Sometimes, you need to run stream computations over the hot or warm data paths to meet business requirements. You can also merge IoT data with data in external stores such as Azure Data Explorer to provide enhanced diagnostics.
+
+- Analyze and visualize IoT data alongside business data:
+
+ IoT Central provides feature-rich dashboards and visualizations. However, business-specific reports may require you to merge IoT data with existing business data sourced from external systems. Use the IoT Central integration features to extract IoT data from IoT Central. Then merge the IoT data with existing business data to deliver a centralized solution for analyzing and visualizing your business processes.
+
+ For example, use the IoT Central continuous data export feature to continuously ingest your IoT data into an Azure Synapse store. Then use Azure Data Factory to bring data from external systems into the Azure Synapse store. Use the Azure Synapse store with Power BI to generate your business reports.
+
+To learn more, see [Transform data for IoT Central](howto-transform-data.md). For a complete, end-to-end sample, see the [IoT Central Compute](https://github.com/iot-for-all/iot-central-compute) GitHub repository.
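+
+As a sketch of the enrich-and-feed-back pattern in the first scenario above, the following hypothetical Azure Function receives a webhook data export batch, enriches each message from an external lookup, and forwards the result through a Device Bridge deployment. The payload shapes, the `lookup_site` helper, and the `DEVICE_BRIDGE_URL` setting are illustrative assumptions:
+
+```python
+import os
+
+import azure.functions as func
+import requests
+
+DEVICE_BRIDGE_URL = os.environ["DEVICE_BRIDGE_URL"]  # hypothetical app setting
+
+def lookup_site(device_id: str) -> str:
+    # Hypothetical enrichment from an external data source.
+    return "site-001"
+
+def main(req: func.HttpRequest) -> func.HttpResponse:
+    # An IoT Central webhook data export delivers batches of messages.
+    for message in req.get_json():
+        enriched = dict(message.get("telemetry", {}))
+        enriched["siteName"] = lookup_site(message["deviceId"])
+        # Push the enriched telemetry back through a Device Bridge
+        # deployment, preserving the original timestamp.
+        requests.post(DEVICE_BRIDGE_URL, json={
+            "device": {"deviceId": message["deviceId"]},
+            "measurements": enriched,
+            "timestamp": message["enqueuedTime"],
+        })
+    return func.HttpResponse(status_code=200)
+```
+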
## Integrate with other services
-As a solution builder, you can use the data export and rules capabilities in IoT Central to integrate with other service. To learn more, see:
+You can use the data export and rules capabilities in IoT Central to integrate with other services. To learn more, see:
- [Export IoT data to cloud destinations using data export](howto-export-data.md) - [Transform data for IoT Central](howto-transform-data.md)
As a solution builder, you can use the data export and rules capabilities in IoT
You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md). To learn more, see the [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub.
-## APIs
+## Integrate with companion applications
-IoT Central APIs let you build deep integrations with other services in your IoT solution. The available APIs are categorized as *data plane* or *control plane* APIs.
+IoT Central provides rich operator dashboards and visualizations. However, some IoT solutions must integrate with existing applications, or require new companion applications to expand their capabilities. To integrate with other applications, use IoT Central extensibility points such as the REST API and the continuous data export feature.
-You use data plane APIs to access the entities in and the capabilities of your IoT Central application. For example managing devices, device templates, users, and roles. The IoT Central REST API operations are *data plane* operations. To learn more, see [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md).
+You use data plane REST APIs to access the entities in and the capabilities of your IoT Central application. For example, managing devices, device templates, users, and roles. The IoT Central REST API operations are *data plane* operations. To learn more, see [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md).
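+
+As an illustration, the following Python sketch makes a data plane call to list devices. It assumes the `requests` package; the application subdomain and the API token are placeholder values:
+
+```python
+import requests
+
+# Placeholder values: create an API token under Administration > API tokens.
+APP_SUBDOMAIN = "my-iot-central-app"
+API_TOKEN = "SharedAccessSignature sr=..."
+
+response = requests.get(
+    f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/devices",
+    params={"api-version": "1.0"},
+    headers={"Authorization": API_TOKEN},
+)
+response.raise_for_status()
+for device in response.json()["value"]:
+    print(device["id"], device.get("displayName"))
+```
+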
You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the Azure CLI and Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Manage IoT Central from Azure CLI](howto-manage-iot-central-from-cli.md).
-## Transform data at ingress
-
-Devices may send complex telemetry that needs to be simplified before it's used in IoT Central or exported. In some scenarios you need to normalize the telemetry from different devices so that you can display and process the telemetry consistently. To learn more, see [Map telemetry on ingress to IoT Central](howto-map-data.md).
- ## Next steps If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
This article outlines, for IoT Central:
## Create your IoT Central application
-[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Start with a generic _application template_ or with one of the industry-focused application templates:
+[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
-- [Retail](concepts-app-templates.md).-- [Energy](concepts-app-templates.md).-- [Government](concepts-app-templates.md).-- [Healthcare](concepts-app-templates.md).
+Start with a generic _application template_ or with one of the industry-focused application templates:
+
+- [Retail](../retail/tutorial-in-store-analytics-create-app.md)
+- [Energy](../energy/tutorial-smart-meter-app.md)
+- [Government](../government/tutorial-connected-waste-management.md)
+- [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md)
See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-smart-meter-app.md
# Tutorial: Deploy and walk through the smart meter monitoring app template
+Smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The smart meter app template enables utilities and partners to monitor smart meter status and data, and to define alarms and notifications. It provides sample commands, such as disconnect meter and update software. You can set up the meter data to egress to other business applications and to develop custom solutions.
+
+The app's key functionality includes:
+
+- Meter sample device model
+- Meter info and live status
+- Meter readings such as energy, power, and voltages
+- Meter command samples
+- Built-in visualization and dashboards
+- Extensibility for custom solution development
+ Use the IoT Central *smart meter monitoring* application template and the guidance in this article to develop an end-to-end smart meter monitoring solution. :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-app-architecture.png" alt-text="smart meter architecture.":::
A smart meter is one of the most important devices among all the energy assets.
### IoT Central platform
-Azure IoT Central is a platform that simplifies building your IoT solution and helps reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your smart meters to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your smart meters to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
### Extensibility options to build with IoT Central
The following sections walk you through the key features of the application:
### Dashboard
-After you deploy the application template, it comes with sample smart meter device, device model, and a dashboard.
+After you deploy the application template, it comes with sample smart meter device, device model, and a dashboard.
-Adatum is a fictitious energy company, who monitors and manages smart meters. On the smart meter monitoring dashboard, you see smart meter properties, data, and sample commands. It enables operators and support teams to proactively perform the following activities before it turns into support incidents:
-* Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map
-* Proactively check the meter network and connection status
-* Monitor Min and Max voltage readings for network health
-* Review the energy, power, and voltage trends to catch any anomalous patterns
-* Track the total energy consumption for planning and billing purposes
-* Command and control operations such as reconnect meter and update firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
+Adatum is a fictitious energy company that monitors and manages smart meters. On the smart meter monitoring dashboard, you see smart meter properties, data, and sample commands. The dashboard enables operators and support teams to proactively perform the following activities before issues turn into support incidents:
+
+* Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map.
+* Proactively check the meter network and connection status.
+* Monitor Min and Max voltage readings for network health.
+* Review the energy, power, and voltage trends to catch any anomalous patterns.
+* Track the total energy consumption for planning and billing purposes.
+* Command and control operations such as reconnect meter and update firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
:::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-dashboard.png" alt-text="Smart meter monitoring dashboard.":::
Click on the **Device templates** tab to see the smart meter device model. The m
If you decide not to continue using this application, delete your application with the following steps:
-1. From the left pane, open Administration tab
-1. Select Application settings and click Delete button at the bottom of the page.
+1. From the left pane, open the **Administration** tab.
+1. Select **Application settings** and then the **Delete** button.
:::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-delete-app.png" alt-text="Delete application.":::

## Next steps

> [Tutorial: Deploy and walk through a Solar panel application template](tutorial-solar-panel-app.md)
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-solar-panel-app.md
-# Tutorial: Deploy and walk through the solar panel monitoring app template
+# Tutorial: Deploy and walk through the solar panel monitoring app template
+
+The solar panel monitoring app enables utilities and partners to monitor solar panel data, such as energy generation and connection status, in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware and other properties. You can set up the solar panel data to egress to other business applications and to develop custom solutions.
+
+The app's key functionality includes:
+
+- Solar panel sample device model
+- Solar panel info and live status
+- Solar energy generation and other readings
+- Command and control samples
+- Built-in visualization and dashboards
+- Extensibility for custom solution development
Use the IoT Central *solar panel monitoring* application template and the guidance in this article to develop an end-to-end solar panel monitoring solution.
This architecture consists of the following components. Some applications may no
### Solar panels and connectivity
-Solar panels are one of the significant sources of renewable energy. Typically, a solar panel uses a gateway to connect to an IoT Central application. You might need to build IoT Central device bridge to connect devices, which can't be connected directly. The IoT Central device bridge is an open-source solution and you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
+Solar panels are one of the significant sources of renewable energy. Typically, a solar panel uses a gateway to connect to an IoT Central application. You might need to build an IoT Central device bridge to connect devices that can't be connected directly. The IoT Central device bridge is an open-source solution; you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
### IoT Central platform
-Azure IoT Central is a platform that simplifies building your IoT solution and helps reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your solar panels to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your solar panels to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
### Extensibility options to build with IoT Central
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
# Tutorial: Deploy and walk through the connected waste management application template
+The Connected Waste Management app is an IoT Central app template that helps you kickstart IoT solution development, enabling smart cities to use remote monitoring to maximize the efficiency of waste collection.
+ Use the IoT Central *connected waste management* application template and the guidance in this article to develop an end-to-end connected waste management solution. :::image type="content" source="media/tutorial-connectedwastemanagement/concepts-connected-waste-management-architecture-1.png" alt-text="Connected waste management architecture.":::
To view the device template:
### Customize the device template
-Try to customize the following:
+Try to customize the following features:
1. From the device template menu, select **Customize**. 1. Find the **Odor meter** telemetry type.
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-consumption-monitoring.md
# Tutorial: Deploy and walk through the water consumption monitoring application
+Traditional water consumption tracking relies on water operators manually reading water consumption meters at the meter sites. More cities are replacing traditional meters with advanced smart meters that enable remote monitoring of consumption and remote control of valves to manage water flow. Water consumption monitoring, coupled with digital feedback messages to citizens, can increase awareness and reduce water consumption.
+
+The water consumption monitoring app is an IoT Central app template that helps you kickstart IoT solution development, enabling water utilities and cities to remotely monitor and control water flow to reduce consumption.
+ Use the IoT Central *water consumption monitoring* application template and the guidance in this article to develop an end-to-end water consumption monitoring solution. ![Water consumption monitoring architecture](./media/tutorial-waterconsumptionmonitoring/concepts-waterconsumptionmonitoring-architecture1.png)
Devices in smart water solutions may connect through low-power wide area network
### IoT Central
-Azure IoT Central is an IoT App platform that helps you quickly build and deploy an IoT solution. You can brand, customize, and integrate your solution with third-party services.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. You can brand, customize, and integrate your solution with third-party services.
When you connect your smart water devices to IoT Central, the application provides device command and control, monitoring and alerting, a user interface with built-in RBAC, configurable dashboards, and extensibility options.
To learn more, see [How to run a job](../core/howto-manage-devices-in-bulk.md).
## Customize your application
-As a administrator, you can change several settings to customize the user experience in your application.
+As an administrator, you can change several settings to customize the user experience in your application.
1. Select **Administration** > **Customize your application**. 1. To choose an image to upload as the **Application logo**, select the **Change** button.
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
Title: Tutorial - Azure IoT water quality monitoring | Microsoft Docs
-description: This tutorial shows you how to deploy and use the water quality monitoring application application template for IoT Central.
+description: This tutorial shows you how to deploy and use the water quality monitoring application template for IoT Central.
Last updated 12/23/2021
# Tutorial: Deploy and walk through the water quality monitoring application
-Use the IoT Central *water quality monitoring* application template and the guidance in this article to develop an end-to-end water quality monitoring solution.
+Traditional water quality monitoring relies on manual sampling techniques and field laboratory analysis, which is time consuming and costly. By remotely monitoring water quality in real time, water quality issues can be managed before citizens are affected. Moreover, with advanced analytics, water utilities and environmental agencies can act on early warnings of potential water quality issues and plan water treatment in advance.
+
+The water quality monitoring app is an IoT Central app template to help you kickstart your IoT solution development and enable water utilities to digitally monitor water quality in smart cities.
+Use the IoT Central *water quality monitoring* application template and the guidance in this article to develop an end-to-end water quality monitoring solution.
![Water quality monitoring architecture](./media/tutorial-waterqualitymonitoring/concepts-water-quality-monitoring-architecture1.png)
Devices in smart water solutions may connect through low-power wide area network
### IoT Central
-Azure IoT Central is an IoT App platform that helps you quickly build and deploy an IoT solution. You can brand, customize, and integrate your solution with third-party services.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. You can brand, customize, and integrate your solution with third-party services.
When you connect your smart water devices to IoT Central, the application provides device command and control, monitoring and alerting, a user interface with built-in RBAC, configurable dashboards, and extensibility options.
Practice customizing the following device template settings:
#### Add a cloud property 1. From the device template menu, select **Cloud properties**.
-1. To add a new cloud property, select **+ Add Cloud Property**. In Azure IoT Central, you can add a property that is relevant to a device but not expected to be sent by the device. One example of such a property is an alert threshold specific to installation area, asset information, or maintenance information.
+1. To add a new cloud property, select **+ Add Cloud Property**. In Azure IoT Central, you can add a property that is relevant to a device but that doesn't come from the device. One example of such a property is an alert threshold specific to installation area, asset information, or maintenance information.
1. Enter **Installation area** as the **Display name** and choose **String** as the **Schema**. 1. Select **Save**.
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
# Tutorial: Deploy and walkthrough the continuous patient monitoring app template
+In the healthcare IoT space, Continuous Patient Monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous Patient Monitoring can be split into two major categories:
+
+1. **In-patient monitoring**: Using medical wearables and other devices in the hospital, care teams can monitor patient vital signs and medical conditions without having to send a nurse to check up on a patient multiple times a day. Care teams can understand the moment that a patient needs critical attention through notifications and prioritize their time effectively.
+1. **Remote patient monitoring**: By using medical wearables and patient reported outcomes (PROs) to monitor patients outside of the hospital, the risk of readmission can be lowered. Data from chronic disease patients and rehabilitation patients can be collected to ensure that patients are adhering to care plans and that alerts of patient deterioration can be surfaced to care teams before they become critical.
+
+This application template can be used to build solutions for both categories of Continuous Patient Monitoring. The benefits include:
+
+- Seamlessly connect different kinds of medical wearables to an IoT Central instance.
+- Monitor and manage the devices to ensure they remain healthy.
+- Create custom rules around device data to trigger appropriate alerts.
+- Export your patient health data to the Azure API for FHIR, a compliant data store.
+- Export the aggregated insights into existing or new business applications.
+ :::image type="content" source="media/cpm-architecture.png" alt-text="Continuous patient monitoring architecture"::: ## Bluetooth Low Energy (BLE) medical devices
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 12/20/2021
# Tutorial: Deploy and walk through the in-store analytics application template
+For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
+
+You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+
+Use the application template to:
+
+- Connect different kinds of IoT sensors to an IoT Central application instance.
+- Monitor and manage the health of the sensor network and any gateway devices in the environment.
+- Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
+- Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
+- Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
+
+The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
+ Use the IoT Central *in-store analytics* application template and the guidance in this article to develop an end-to-end in-store analytics solution. :::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Azure IoT Central Store Analytics.":::
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Last updated 01/06/2022
# Tutorial: Deploy and walk through a connected logistics application template
+Global logistics spending is expected to reach $10.6 trillion in 2020. Transportation of goods accounts for most of this spending and shipping providers are under intense competitive pressure and constraints.
+
+You can use IoT sensors to collect and monitor ambient conditions such as temperature, humidity, tilt, shock, light, and the location of a shipment. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
+
+The benefits of a connected logistics solution include:
+
+- Shipment monitoring with real-time tracing and tracking.
+- Shipment integrity with real-time ambient condition monitoring.
+- Security from theft, loss, or damage of shipments.
+- Geo-fencing, route optimization, fleet management, and vehicle analytics.
+- Forecasting for predictable departure and arrival of shipments.
+ Use the application template and guidance in this article to develop an end-to-end *connected logistics solution*. :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-architecture.png" alt-text="Connected logistics dashboard." border="false":::
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
Last updated 01/06/2022
# Tutorial: Deploy and walk through the digital distribution center application template
+As manufacturers and retailers establish worldwide presences, their supply chains branch out and become more complex. Consumers now expect large selections of products to be available, and for those goods to arrive within one or two days of purchase. Distribution centers must adapt to these trends while overcoming existing inefficiencies.
+
+Today, reliance on manual labor means that picking and packing accounts for 55-65% of distribution center costs. Manual picking and packing are also typically slower than automated systems, and rapidly fluctuating staffing needs make it even harder to meet shipping volumes. This seasonal fluctuation results in high staff turnover and increases the likelihood of costly errors.
+
+Solutions based on IoT enabled cameras can deliver transformational benefits by enabling a digital feedback loop. Data from across the distribution center leads to actionable insights that, in turn, result in better data.
+
+The benefits of a digital distribution center include:
+
+- Cameras monitor goods as they arrive and move through the conveyor system.
+- Automatic identification of faulty goods.
+- Efficient order tracking.
+- Reduced costs, improved productivity, and optimized usage.
+ Use the IoT Central *digital distribution center* application template and the guidance in this article to develop an end-to-end digital distribution center solution. :::image type="content" source="media/tutorial-iot-central-ddc/digital-distribution-center-architecture.png" alt-text="digital distribution center.":::
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Data is routed to the desired Azure service for manipulation
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts
-5. Processed data is stored in hot storage for near real-time actions or cold storage for more insight enhancements that is based on ML or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications
+1. Set of IoT sensors sending telemetry data to a gateway device.
+2. Gateway devices sending telemetry and aggregated insights to IoT Central.
+3. Data is routed to the desired Azure service for manipulation.
+4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts.
+5. Processed data is stored in hot storage for near real-time actions or cold storage for more insight enhancements that is based on ML or batch analysis.
+6. Logic Apps can be used to power various business workflows in end-user business applications.
### Video cameras
The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edg
### Device Management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device & Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers & partners can build an end to end enterprise solutions to achieve a digital feedback loop in distribution centers.
+Azure IoT Central is a solution development platform that simplifies IoT device and Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in distribution centers.
### Business Insights and actions using data egress
The following sections walk you through the key features of the application:
The default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
-In this dashboard, you will see one gateway and one camera acting as an IoT device. Gateway is providing telemetry about packages such as valid, invalid, unidentified, and size along with associated device twin properties. All downstream commands are executed at IoT devices, such as a camera. This dashboard is pre-configured to showcase the critical distribution center device operations activity.
+In this dashboard, you'll see one gateway and one camera acting as an IoT device. The gateway provides telemetry about packages, such as valid, invalid, unidentified, and size, along with associated device twin properties. All downstream commands are executed at IoT devices, such as a camera. This dashboard is pre-configured to showcase the critical distribution center device operations activity.
-The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device.
+The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device. You can:
-* You can perform gateway command & control tasks
-* Manage all cameras that are part of the solution.
-* Manage all cameras that are part of the solution.
-* Manage all cameras that are part of the solution.
+* Complete gateway command and control tasks.
+* Manage all the cameras in the solution.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-dashboard.png" alt-text="Screenshot showing the digital distribution center dashboard.":::
### Device Template
-Click on the Device templates tab, and you will see the gateway capability model. A capability model is structured around two different interfaces **Camera** and **Digital Distribution Gateway**
+Click on the **Device templates** tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces: **Camera** and **Digital Distribution Gateway**.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-devicetemplate1.png" alt-text="Screenshot showing the digital distribution gateway device template in the application.":::
**Camera** - This interface organizes all the camera-specific command capabilities.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-camera.png" alt-text="Screenshot showing the camera interface in the digital distribution gateway device template.":::
**Digital Distribution Gateway** - This interface represents all the telemetry coming from the camera, cloud-defined device twin properties, and gateway info.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-devicetemplate1.png" alt-text="Screenshot showing the digital distribution gateway interface in the digital distribution gateway device template.":::
### Gateway Commands

This interface organizes all the gateway command capabilities.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-camera.png" alt-text="Screenshot showing the gateway commands interface in the digital distribution gateway device template.":::
### Rules

Select the rules tab to see two different rules that exist in this application template. These rules are configured to send email notifications to the operators for further investigation.
- **Too many invalid packages alert** - This rule is triggered when the camera detects a high number of invalid packages flowing through the conveyor system.
+**Too many invalid packages alert** - This rule is triggered when the camera detects a high number of invalid packages flowing through the conveyor system.
-**Large package** - This rule will trigger if the camera detects huge package that cannot be inspected for the quality.
+**Large package** - This rule will trigger if the camera detects a huge package that can't be inspected for quality.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-rules.png" alt-text="Screenshot showing the list of rules in the digital distribution center application.":::
## Clean up resources

If you're not going to continue to use this application, delete the application template by visiting **Administration** > **Application settings** and selecting **Delete**.
- :::image type="content" source="media/tutorial-iot-central-ddc/ddc-cleanup.png" alt-text="Screenshot showing how to delete the application when you're done with it.":::
## Next steps
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
Last updated 12/20/2021
# Tutorial: Deploy and walk through the smart inventory management application template
+Inventory is the stock of goods a retailer holds. Inventory management is critical to ensure the right product is in the right place at the right time. A retailer must balance the costs of storing too much inventory against the costs of not having sufficient items in stock to meet demand.
+
+IoT data generated from radio-frequency identification (RFID) tags, beacons, and cameras provide opportunities to improve inventory management processes. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
+
+The benefits of smart inventory management include:
+
+- Reducing the risk of items being out of stock and ensuring the desired customer service level.
+- In-depth analysis and insights into inventory accuracy in near real time.
+- Tools to help decide on the right amount of inventory to hold to meet customer orders.
+
+This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices.
+ Use the IoT Central *smart inventory management* application template and the guidance in this article to develop an end-to-end smart inventory management solution.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-architecture.png" alt-text="smart inventory management.":::
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Data is routed to the desired Azure service for manipulation
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts
-5. Processed data is stored in hot storage for near real-time actions or cold storage for additional insight enhancements that is based on ML or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications
+1. Set of IoT sensors sending telemetry data to a gateway device.
+2. Gateway devices sending telemetry and aggregated insights to IoT Central.
+3. Data is routed to the desired Azure service for manipulation.
+4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts.
+5. Processed data is stored in hot storage for near real-time actions or cold storage for additional insight enhancements that is based on ML or batch analysis.
+6. Logic Apps can be used to power various business workflows in end-user business applications.
### Details
RFID tags transmit data about an item through radio waves. RFID tags typically d
Energy beacon broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers or installed services on smartphones and then transmitted to the cloud.
-### RFID & BLE readers
+### RFID and BLE readers
An RFID reader converts the radio waves to a more usable form of data. Information collected from the tags is then stored in a local edge server or sent to the cloud using JSON-RPC 2.0 over MQTT.
-BLE reader also known as Access Points (AP) are similar to RFID reader. It is used to detect nearby Bluetooth signals and relay its message to local Azure IoT Edge or cloud using JSON-RPC 2.0 over MQTT.
+A BLE reader, also known as an Access Point (AP), is similar to an RFID reader. It's used to detect nearby Bluetooth signals and relay their messages to a local Azure IoT Edge device or the cloud using JSON-RPC 2.0 over MQTT.
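To make the transport concrete, the following is a minimal sketch of a reader relaying one tag read as a JSON-RPC 2.0 message over MQTT, using the standard `mosquitto_pub` client. The broker host, topic, and payload fields are illustrative assumptions, not values defined by this template:

```bash
# A hypothetical tag-read event relayed as a JSON-RPC 2.0 notification over MQTT.
# Broker, topic, and payload fields are placeholders for illustration only.
mosquitto_pub \
  -h example-broker.contoso.local \
  -t readers/reader-01/events \
  -m '{"jsonrpc": "2.0", "method": "tag_read", "params": {"tagId": "E200-1234-5678", "rssi": -52, "ts": "2022-01-31T12:00:00Z"}}'
```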
Many readers are capable of reading RFID and beacon signals, and providing additional sensor capabilities such as temperature, humidity, accelerometer, and gyroscope readings.

### Azure IoT Edge gateway
Azure IoT Edge server provides a place to preprocess that data locally before se
### Device management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers & partners can build an end to end enterprise solutions to achieve a digital feedback loop in inventory management.
+Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in inventory management.
-### Business insights & actions using data egress
+### Business insights and actions using data egress
-IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. It can be achieved using webhook, service bus, event hub, or blob storage to build, train, and deploy machine learning models & further enrich insights.
+The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. This can be achieved using webhooks, Service Bus, Event Hubs, or Blob Storage to build, train, and deploy machine learning models and further enrich insights.
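As one concrete egress path, you could provision an event hub for IoT Central to export telemetry into, and let downstream ML or line-of-business systems read from it. A minimal Azure CLI sketch, with placeholder resource names and region:

```azurecli-interactive
# Hypothetical sketch: create an Event Hubs namespace and hub to serve as an
# IoT Central export destination. All names and the region are placeholders.
az eventhubs namespace create --resource-group myRG --name my-iotc-export-ns --location eastus
az eventhubs eventhub create --resource-group myRG --namespace-name my-iotc-export-ns --name iotc-telemetry-export
```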
In this tutorial, you learn how to:
## Prerequisites
-* No specific pre-requisites required to deploy this app
-* Recommended to have Azure subscription, but you can even try without it
+* No specific prerequisites are required to deploy this app.
+* An Azure subscription is recommended, but you can try the app without one.
## Create smart inventory management application

Create the application using the following steps:

1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app from the smart inventory management application template":::
+
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app from the smart inventory management application template":::
1. Select **Create app** under **smart inventory management**.
The following sections walk you through the key features of the application:
### Dashboard
-After successfully deploying the app template, your default dashboard is a smart inventory management operator focused portal. Northwind Trader is a fictitious smart inventory provider managing warehouse with Bluetooth low energy (BLE) and retail store with Radio-frequency identification (RFID). In this dashboard, you will see two different gateways providing telemetry about inventory along with associated commands, jobs, and actions that you can perform.
+After successfully deploying the app template, your default dashboard is a smart inventory management operator-focused portal. Northwind Trader is a fictitious smart inventory provider managing a warehouse with Bluetooth low energy (BLE) and a retail store with radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory, along with associated commands, jobs, and actions that you can perform.
This dashboard is pre-configured to showcase the critical smart inventory management device operations activity.
-The dashboard is logically divided between two different gateway device management operations,
- * The warehouse is deployed with a fixed BLE gateway & BLE tags on pallets to track & trace inventory at a larger facility
- * Retail store is implemented with a fixed RFID gateway & RFID tags at individual an item level to track and trace the stock in a store outlet
- * View the gateway [location](../core/howto-use-location-data.md), status & related details
+The dashboard is logically divided between two different gateway device management operations:
+
+* The warehouse is deployed with a fixed BLE gateway and BLE tags on pallets to track and trace inventory at a larger facility.
+* The retail store is implemented with a fixed RFID gateway and RFID tags at an individual item level to track and trace the stock in a store outlet.
+* View the gateway [location](../core/howto-use-location-data.md), status, and related details.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-1.png" alt-text="Screenshot showing the top half of the smart inventory management dashboard.":::
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-1.png" alt-text="Screenshot showing the top half of the smart inventory management dashboard.":::
- * You can easily track the total number of gateways, active, and unknown tags.
- * You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals & update device service contracts
- * Gateway devices can perform on-demand inventory management with a complete or incremental scan.
+* You can easily track the total number of gateways, active, and unknown tags.
+* You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals, and update device service contracts.
+* Gateway devices can perform on-demand inventory management with a complete or incremental scan.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-2.png" alt-text="Screenshot showing the bottom half of the smart inventory management dashboard.":::
+ :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-2.png" alt-text="Screenshot showing the bottom half of the smart inventory management dashboard.":::
### Device Template
-Click on the Device templates tab, and you will see the gateway capability model. A capability model is structured around two different interfaces **Gateway Telemetry & Property** and **Gateway Commands**
+Click on the Device templates tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces: **Gateway Telemetry and Property** and **Gateway Commands**.
-**Gateway Telemetry & Property** - This interface represents all the telemetry related to sensors, location, device info, and device twin property capability such as gateway thresholds and update intervals.
+**Gateway Telemetry and Property** - This interface represents all the telemetry related to sensors, location, device info, and device twin property capability such as gateway thresholds and update intervals.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-device-template-1.png" alt-text="Screenshot showing the inventory gateway device template in the application.":::
**Gateway Commands** - This interface organizes all the gateway command capabilities
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-device-template-2.png" alt-text="Screenshot showing the gateway commands interface in the inventory gateway device template.":::
### Rules
Select the rules tab to see two different rules that exist in this application t
**Gateway offline**: This rule will trigger if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery mode, loss of connectivity, or poor device health.
-**Unknown tags**: It's critical to track every RFID & BLE tags associated with an asset. If the gateway is detecting too many unknown tags, it's an indication of synchronization challenges with tag sourcing applications.
+**Unknown tags**: It's critical to track every RFID and BLE tag associated with an asset. If the gateway detects too many unknown tags, it's an indication of synchronization challenges with tag-sourcing applications.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-rules.png" alt-text="Screenshot showing the list of rules in the smart inventory management application.":::
## Clean up resources

If you're not going to continue to use this application, delete the application template by visiting **Administration** > **Application settings** and selecting **Delete**.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-cleanup.png" alt-text="Screenshot showing how to delete the application when you're done with it.":::
## Next steps
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
Last updated 12/21/2021
# Tutorial: Deploy and walk through the micro-fulfillment center application template
+In the increasingly competitive retail landscape, retailers constantly face pressure to close the gap between demand and fulfillment. A new trend that has emerged to address the growing consumer demand is to house inventory near the end customers and the stores they visit.
+
+The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights, allowing retailers to reduce their operating costs and create experiences for their customers.
+
+The application template enables you to:
+
+- Seamlessly connect different kinds of IoT sensors such as robots or condition monitoring sensors to an IoT Central application instance.
+- Monitor and manage the health of the sensor network, and any gateway devices in the environment.
+- Create custom rules around the environmental conditions within a fulfillment center to trigger appropriate alerts.
+- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use.
+- Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.
+ Use the IoT Central *micro-fulfillment center* application template and the guidance in this article to develop an end-to-end micro-fulfillment center solution.

![Azure IoT Central Store Analytics](./media/tutorial-micro-fulfillment-center-app/micro-fulfillment-center-architecture-frame.png)
In this tutorial, you learn:
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the navigation bar and then select the **Retail** tab:
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/iotc-retail-homepage-mfc.png" alt-text="Screenshot showing how to create an app.":::
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/iotc-retail-homepage-mfc.png" alt-text="Screenshot showing how to create an app.":::
1. Select **Create app** under **micro-fulfillment center**.
The following sections walk you through the key features of the application:
After successfully deploying the app template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. On the left, you can monitor the environmental conditions within the fulfillment structure, and on the right, you can monitor the health of a robotic carrier within the facility. From the dashboard, you can:
- * See device telemetry, such as the number of picks, the number of orders processed, and properties, such as the structure system status.
- * View the floor plan and location of the robotic carriers within the fulfillment structure.
- * Trigger commands, such as resetting the control system, updating the carrier's firmware, and reconfiguring the network.
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-1.png" alt-text="Screenshot of the top half of the Northwind Traders micro-fulfillment center dashboard.":::
- * See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center.
- * Monitor the health of the payloads that are running on the gateway device within the fulfillment center.
+* See device telemetry, such as the number of picks, the number of orders processed, and properties, such as the structure system status.
+* View the floor plan and location of the robotic carriers within the fulfillment structure.
+* Trigger commands, such as resetting the control system, updating the carrier's firmware, and reconfiguring the network.
+
+ :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-1.png" alt-text="Screenshot of the top half of the Northwind Traders micro-fulfillment center dashboard.":::
+
+* See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center.
+* Monitor the health of the payloads that are running on the gateway device within the fulfillment center.
:::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-2.png" alt-text="Screenshot of the bottom half of the Northwind Traders micro-fulfillment center dashboard."::: ### Device template
-If you select the device templates tab, you see that there are two different device types that are part of the template:
- * **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status.
- * **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environment condition, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment.
+If you select the device templates tab, you see that there are two different device types that are part of the template:
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/device-templates.png" alt-text="Micro-fulfillment Center Device Templates.":::
+* **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status.
+* **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environmental conditions, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment.
If you select the device groups tab, you also see that these device templates automatically have device groups created for them.
On the **Rules** tab, you see a sample rule that exists in the application templ
Use the sample rule as inspiration to define rules that are more appropriate for your business functions.
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/rules.png" alt-text="Screenshot of the Rules tab.":::
### Clean up resources

If you're not going to continue to use this application, delete the application template. Go to **Administration** > **Application settings**, and select **Delete**.
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/delete.png" alt-text="Screenshot of Micro-fulfillment center Application settings page.":::
## Next steps
iot-edge How To Access Built In Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-built-in-metrics.md
The **edgeAgent** module produces the following metrics:
| `edgeAgent_total_network_out_bytes` | `module_name` | Type: gauge<br> The number of bytes sent to network |
| `edgeAgent_total_disk_read_bytes` | `module_name` | Type: gauge<br> The number of bytes read from the disk |
| `edgeAgent_total_disk_write_bytes` | `module_name` | Type: gauge<br> The number of bytes written to disk |
-| `edgeAgent_metadata` | `edge_agent_version`, `experimental_features`, `host_information` | Type: gauge<br> General metadata about the device. The value is always 0, information is encoded in the tags. Note `experimental_features` and `host_information` are json objects. `host_information` looks like ```{"OperatingSystemType": "linux", "Architecture": "x86_64", "Version": "1.0.10~dev20200803.4", "Provisioning": {"Type": "dps.tpm", "DynamicReprovisioning": false, "AlwaysReprovisionOnStartup": true}, "ServerVersion": "19.03.6", "KernelVersion": "5.0.0-25-generic", "OperatingSystem": "Ubuntu 18.04.4 LTS", "NumCpus": 6, "Virtualized": "yes"}```. Note `ServerVersion` is the Docker version and `Version` is the IoT Edge security daemon version. |
+| `edgeAgent_metadata` | `edge_agent_version`, `experimental_features`, `host_information` | Type: gauge<br> General metadata about the device. The value is always 0, information is encoded in the tags. Note `experimental_features` and `host_information` are json objects. `host_information` looks like ```{"OperatingSystemType": "linux", "Architecture": "x86_64", "Version": "1.2.7", "Provisioning": {"Type": "dps.tpm", "DynamicReprovisioning": false, "AlwaysReprovisionOnStartup": false}, "ServerVersion": "20.10.11+azure-3", "KernelVersion": "5.11.0-1027-azure", "OperatingSystem": "Ubuntu 20.04.4 LTS", "NumCpus": 2, "Virtualized": "yes"}```. Note `ServerVersion` is the Docker version and `Version` is the IoT Edge security daemon version. |
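These metrics are emitted in Prometheus exposition format, so they can also be inspected directly; a minimal sketch, assuming the documented default endpoint on port 9600 is reachable from where you run it:

```bash
# Pull edgeAgent's Prometheus-format metrics and filter one counter.
# Assumes port 9600 is reachable (for example, mapped to the host).
curl http://localhost:9600/metrics | grep edgeAgent_total_network_out_bytes
```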
## Next steps
iot-edge How To Add Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-add-custom-metrics.md
-# Add custom metrics (Preview)
+# Add custom metrics
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-collect-and-transport-metrics.md
-# Collect and transport metrics (Preview)
+# Collect and transport metrics
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
iot-edge How To Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-alerts.md
-# Get notified about issues using alerts (Preview)
+# Get notified about issues using alerts
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
Click the alert rule name to see more context about the alert. Clicking the devi
## Next steps
-Enhance your monitoring solution with [metrics from custom modules](how-to-add-custom-metrics.md).
+Enhance your monitoring solution with [metrics from custom modules](how-to-add-custom-metrics.md).
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-create-iot-edge-device.md
For the latest information about which operating systems are currently supported
For Linux devices, the IoT Edge runtime is installed directly on the host device.
-IoT Edge supports X64, ARM32, and ARM64 Linux devices. Microsoft provides installation packages for Ubuntu Server 18.04 and Raspberry Pi OS Stretch operating systems.
-
-Support for ARM64 devices is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+IoT Edge supports X64, ARM32, and ARM64 Linux devices. Microsoft provides official installation packages for Ubuntu and Raspberry Pi OS Stretch operating systems.
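For orientation, a typical package-based install on one of those distributions looks like the following sketch (shown for Ubuntu 20.04 AMD64; the feed URL follows Microsoft's published `packages.microsoft.com` pattern):

```bash
# A minimal sketch for Ubuntu 20.04: register the Microsoft package feed,
# then install a container engine and the IoT Edge runtime.
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo apt install ./packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install moby-engine
sudo apt-get install aziot-edge
```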
### Linux containers on Windows
iot-edge How To Explore Curated Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-explore-curated-visualizations.md
-# Explore curated visualizations (Preview)
+# Explore curated visualizations
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
Save your changes as a new workbook. You can [share](../azure-monitor/visualize/
## Next steps
-Customize your monitoring solution with [alert rules](how-to-create-alerts.md) and [metrics from custom modules](how-to-add-custom-metrics.md).
+Customize your monitoring solution with [alert rules](how-to-create-alerts.md) and [metrics from custom modules](how-to-add-custom-metrics.md).
iot-edge How To Install Iot Edge Ubuntuvm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-ubuntuvm.md
Title: Run Azure IoT Edge on Ubuntu Virtual Machines | Microsoft Docs
-description: Azure IoT Edge setup instructions for Ubuntu 18.04 LTS Virtual Machines
+description: Azure IoT Edge setup instructions for Ubuntu LTS Virtual Machines
# this is the PM responsible
The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The r
To learn more about how the IoT Edge runtime works and what components are included, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md).
-This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) project repository.
+This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/master) project repository.
+This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.2) project repository.
-On first boot, the Ubuntu 18.04 LTS virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/master/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
+On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/master/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
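The same template can also be deployed from the Azure CLI instead of the portal button; a minimal sketch, where the resource group, DNS label, credentials, and connection string are placeholders:

```azurecli-interactive
# Hypothetical sketch: deploy the iotedge-vm-deploy template from the CLI.
az deployment group create \
  --resource-group IoTEdgeResources \
  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2/edgeDeploy.json" \
  --parameters dnsLabelPrefix='my-edge-vm-1' \
  --parameters adminUsername='azureuser' \
  --parameters deviceConnectionString='<device-connection-string>' \
  --parameters authenticationType='password' \
  --parameters adminPasswordOrKey='<password>'
```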
## Deploy using Deploy to Azure Button
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
:::moniker range="iotedge-2018-06" [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json) :::moniker-end
- :::moniker range="iotedge-2020-11"
+ :::moniker range=">=iotedge-2020-11"
[![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.2%2FedgeDeploy.json)
:::moniker-end

1. On the newly launched window, fill in the available form fields:

   > [!div class="mx-imgBorder"]
- > [![Screenshot showing the iotedge-vm-deploy template](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
-
- **Subscription**: The active Azure subscription to deploy the virtual machine into.
-
- **Resource group**: An existing or newly created Resource Group to contain the virtual machine and it's associated resources.
-
- **DNS Label Prefix**: A required value of your choosing that is used to prefix the hostname of the virtual machine.
-
- **Admin Username**: A username, which will be provided root privileges on deployment.
-
- **Device Connection String**: A [device connection string](./how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT Hub](../iot-hub/about-iot-hub.md).
-
- **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed
- **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine.
-
- **Location**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into, this value defaults to the location of the selected Resource Group.
-
- **Authentication Type**: Choose **sshPublicKey** or **password** depending on your preference.
-
- **Admin Password or Key**: The value of the SSH Public Key or the value of the password depending on the choice of Authentication Type.
+ :::moniker range="iotedge-2018-06"
+ > [![Screenshot showing the iotedge-vm-deploy template](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
+ :::moniker-end
+ :::moniker range=">=iotedge-2020-11"
+ > [![Screenshot showing the iotedge-vm-deploy template](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy-ubuntu2004.png)](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy-ubuntu2004.png)
+ :::moniker-end
- When all fields have been filled in, select the checkbox at the bottom of the page to accept the terms and select **Purchase** to begin the deployment.
+ | Field | Description |
+ | -- | -- |
+ | **Subscription** | The active Azure subscription to deploy the virtual machine into. |
+ | **Resource group** | An existing or newly created Resource Group to contain the virtual machine and its associated resources. |
+ | **Region** | The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected Resource Group. |
+ | **DNS Label Prefix** | A required value of your choosing that is used to prefix the hostname of the virtual machine. |
+ | **Admin Username** | A username that will be granted root privileges on deployment. |
+ | **Device Connection String** | A [device connection string](./how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT Hub](../iot-hub/about-iot-hub.md). |
+ | **VM Size** | The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed. |
+ | **Ubuntu OS Version** | The version of the Ubuntu OS to be installed on the base virtual machine. |
+ | **Authentication Type** | Choose **sshPublicKey** or **password** depending on your preference. |
+ | **Admin Password or Key** | The value of the SSH Public Key or the value of the password depending on the choice of Authentication Type. |
+
+ When all fields have been filled in, select **Next : Review + create** at the bottom of the page, where you can review the terms and select **Create** to begin the deployment.
1. Verify that the deployment has completed successfully. A virtual machine resource should have been deployed into the selected resource group. Take note of the machine name, which should be in the format `vm-0000000000000`. Also, take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
--parameters adminPasswordOrKey="$(< ~/.ssh/iotedge-vm-key.pub)"
```
:::moniker-end
- :::moniker range="iotedge-2020-11"
+ :::moniker range=">=iotedge-2020-11"
To use an **authenticationType** of `password`, see the example below:

```azurecli-interactive
If you are having problems with the IoT Edge runtime installing properly, check
To update an existing installation to the newest version of IoT Edge, see [Update the IoT Edge security daemon and runtime](how-to-update-iot-edge.md).
-If you'd like to open up ports to access the VM through SSH or other inbound connections, refer to the Azure Virtual Machines documentation on [opening up ports and endpoints to a Linux VM](../virtual-machines/linux/nsg-quickstart.md)
+If you'd like to open up ports to access the VM through SSH or other inbound connections, refer to the Azure Virtual Machines documentation on [opening up ports and endpoints to a Linux VM](../virtual-machines/linux/nsg-quickstart.md)
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
A virtual switch enables your VM to connect to a physical network.
Create a new VM from a bootable image file.
-1. Download a disk image file to use for your VM and save it locally. For example, [Ubuntu Server 18.04](http://releases.ubuntu.com/18.04/). For information about supported operating systems for IoT Edge devices, see [Azure IoT Edge supported systems](./support.md).
+1. Download a disk image file to use for your VM and save it locally. For example, [Ubuntu Server 20.04](http://releases.ubuntu.com/20.04/). For information about supported operating systems for IoT Edge devices, see [Azure IoT Edge supported systems](./support.md).
1. In Hyper-V Manager, select **Action** > **New** > **Virtual Machine** on the **Actions** menu.
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
This article covers registering your IoT Edge device and installing IoT Edge on
<!-- Device registration prerequisites H3 and content --> [!INCLUDE [iot-edge-prerequisites-register-device.md](../../includes/iot-edge-prerequisites-register-device.md)]
-### Device requirements
-
-An X64, ARM32, or ARM64 Linux device.
-
-Microsoft provides installation packages for Ubuntu Server 18.04 and Raspberry Pi OS Stretch operating systems.
-
-For the latest information about which operating systems are currently supported for production scenarios, see [Azure IoT Edge supported systems](support.md#operating-systems).
-
->[!NOTE]
->Support for ARM64 devices is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+<!-- Device requirements H3 and content -->
<!-- Register your device and View provisioning information H2s and content --> [!INCLUDE [iot-edge-register-device-symmetric.md](../../includes/iot-edge-register-device-symmetric.md)]
iot-edge How To Provision Single Device Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-provision-single-device-linux-x509.md
This article covers registering your IoT Edge device and installing IoT Edge on
<!-- Device registration prerequisites H3 and content --> [!INCLUDE [iot-edge-prerequisites-register-device.md](../../includes/iot-edge-prerequisites-register-device.md)]
-### Device requirements
-
-An X64, ARM32, or ARM64 Linux device.
-
-Microsoft provides installation packages for Ubuntu Server 18.04 and Raspberry Pi OS Stretch operating systems.
-
-For the latest information about which operating systems are currently supported for production scenarios, see [Azure IoT Edge supported systems](support.md#operating-systems).
-
->[!NOTE]
->Support for ARM64 devices is in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+<!-- Device requirements prerequisites H3 and content -->
<!-- Generate device identity certificates H2 and content --> [!INCLUDE [iot-edge-generate-device-identity-certs.md](../../includes/iot-edge-generate-device-identity-certs.md)]
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
Check the version of the security daemon running on your device by using the com
On Linux x64 devices, use apt-get or your appropriate package manager to update the security daemon to the latest version.
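As a quick sanity check before and after updating, you can compare the running version against what the configured package feed offers; a minimal sketch:

```bash
# Show the currently running IoT Edge version, then list the versions
# available from the configured apt feed.
iotedge version
apt list -a aziot-edge
```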
-Get the latest repository configuration from Microsoft:
-
-* **Ubuntu Server 18.04**:
-
- ```bash
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
- ```
-
-* **Raspberry Pi OS Stretch**:
-
- ```bash
- curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
- ```
-
-Copy the generated list.
-
- ```bash
- sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
- ```
-
-Install Microsoft GPG public key.
-
- ```bash
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
- ```
- Update apt. ```bash sudo apt-get update ```
+ > [!NOTE]
+ > For instructions to get the latest repository configuration from Microsoft, see the preliminary steps to [Install IoT Edge](how-to-provision-single-device-linux-symmetric.md#install-iot-edge).
+ <!-- 1.1 --> :::moniker range="iotedge-2018-06"
Before automating any update processes, validate that it works on test machines.
When you're ready, follow these steps to update IoT Edge on your devices:
-1. Get the latest repository configuration from Microsoft:
-
- * **Ubuntu Server 18.04**:
-
- ```bash
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
- ```
-
- * **Raspberry Pi OS Stretch**:
-
- ```bash
- curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
- ```
-
-2. Copy the generated list.
-
- ```bash
- sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
- ```
-
-3. Install Microsoft GPG public key.
-
- ```bash
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
- ```
-
-4. Update apt.
+1. Update apt.
   ```bash
   sudo apt-get update
   ```
-5. Uninstall the previous version of IoT Edge, leaving your configuration files in place.
+1. Uninstall the previous version of IoT Edge, leaving your configuration files in place.
   ```bash
   sudo apt-get remove iotedge
   ```
-6. Install the most recent version of IoT Edge, along with the IoT identity service.
+1. Install the most recent version of IoT Edge, along with the IoT identity service.
   ```bash
   sudo apt-get install aziot-edge
   ```
-7. Import your old config.yaml file into its new format, and apply the configuration info.
+1. Import your old config.yaml file into its new format, and apply the configuration info.
   ```bash
   sudo iotedge config import
   ```
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
-| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | Public preview |
+| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
| Windows 10 Pro | ![Windows 10 Pro + AMD64](./media/support/green-check.png) | | |
| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | |
| Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | |
All Windows operating systems must be version 1809 (build 17763) or later.
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
-| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | Public preview |
+| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
+| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
:::moniker-end <!-- end 1.2 -->
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) |
-| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFoss64](./media/support/green-check.png) | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | ![Ubuntu 20.04 + ARM64](./media/support/green-check.png) |
| [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/support/green-check.png) | ![Debian 9 + ARM32v7](./media/support/green-check.png) | ![Debian 9 + ARM64](./media/support/green-check.png) |
| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | ![Debian 11 + ARM32v7](./media/support/green-check.png) | ![Debian 11 + ARM64](./media/support/green-check.png) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) |
| [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) |
| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) |
-| [Ubuntu 18.04](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | ![Ubuntu 18.04 + AMD64](./media/support/green-check.png) | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | ![Ubuntu 18.04 + ARM64](./media/support/green-check.png) |
+| [Ubuntu 18.04 <sup>1</sup>](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | |
+| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | |
| [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) |
| Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) |
-<sup>1</sup> The Ubuntu Server 18.04 installation steps in [Install or uninstall Azure IoT Edge for Linux](how-to-provision-single-device-linux-symmetric.md) should work without any changes on Ubuntu 20.04.
+<sup>1</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
## Releases
The following table lists the components included in each release starting with
| Release | aziot-edge | edgeHub<br>edgeAgent | aziot-identity-service |
| - | - | -- | - |
-| **1.2** | 1.2.0<br>1.2.1<br>1.2.2<br>1.2.3 | 1.2.0<br>1.2.1<br>1.2.2<br> 1.2.3 | 1.2.0<br>1.2.1<br>1.2.2<br><br> |
+| **1.2** | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br><br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br>1.2.6<br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br> |
The following table lists the components included in each release up to the 1.1 LTS release. The components listed in this table can be installed or updated individually, and are backwards compatible with older versions.
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
tags: azure-resource-manager
Previously updated : 06/02/2020- Last updated : 01/24/2022+
-# Integrating Key Vault with DigiCert certificate authority
+# Integrating Key Vault with Integrated Certificate Authorities
Azure Key Vault allows you to easily provision, manage, and deploy digital certificates for your network and to enable secure communications for applications. A digital certificate is an electronic credential that establishes proof of identity in an electronic transaction.
-Azure Key Vault users can generate DigiCert certificates directly from their key vaults. Key Vault has a trusted partnership with DigiCert certificate authority. This partnership ensures end-to-end certificate lifecycle management for certificates issued by DigiCert.
+Azure Key Vault has a trusted partnership with the following Certificate Authorities:
+- [DigiCert](https://www.digicert.com/)
+- [GlobalSign](https://www.globalsign.com/en)
+
+Azure Key Vault users can generate DigiCert and GlobalSign certificates directly from their key vaults. Key Vault's partnerships ensure end-to-end certificate lifecycle management for certificates issued by these certificate authorities.
For more general information about certificates, see [Azure Key Vault certificates](./about-certificates.md).
To complete the procedures in this article, you need to have:
### Before you begin
+#### DigiCert
+ Make sure you have the following information from your DigiCert CertCentral account:

- CertCentral account ID
- Organization ID
- API key
+- Account ID
+- Account Password
+
+#### GlobalSign
+
+Make sure you have the following information from your GlobalSign account:
+
+- Account ID
+- Account Password
+- First Name of Administrator
+- Last Name of Administrator
+- E-mail of Administrator
+- Phone Number of Administrator
+
## Add the certificate authority in Key Vault

After you gather the preceding information from your DigiCert CertCentral account, you can add DigiCert to the certificate authority list in the key vault.
-### Azure portal
+### Azure portal (DigiCert)
1. To add DigiCert certificate authority, go to the key vault you want to add it to.
2. On the Key Vault property page, select **Certificates**.
After you gather the preceding information from your DigiCert CertCentral accoun
DigicertCA is now in the certificate authority list.
+### Azure portal (GlobalSign)
+
+1. To add the GlobalSign certificate authority, go to the key vault you want to add it to.
+2. On the Key Vault property page, select **Certificates**.
+3. Select the **Certificate Authorities** tab:
+4. Select **Add**:
+5. Under **Create a certificate authority**, enter these values:
+ - **Name**: An identifiable issuer name. For example, **GlobalSignCA**.
+ - **Provider**: **GlobalSign**.
+ - **Account ID**: Your GlobalSign account ID.
+ - **Account Password**: Your GlobalSign account password.
+ - **First Name of Administrator**: The first name of the administrator of the GlobalSign account.
+ - **Last Name of Administrator**: The last name of the administrator of the GlobalSign account.
+ - **E-mail of Administrator**: The email of the administrator of the GlobalSign account.
+ - **Phone number of Administrator**: The phone number of the administrator of the GlobalSign account.
+
+6. Select **Create**.
+
+GlobalSignCA is now in the certificate authority list.
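If you prefer to script this step, the issuer can also be registered with the Azure CLI; a minimal sketch for the DigiCert case, where the vault name and account values are placeholders (GlobalSign would use `--provider GlobalSign` with its own account details):

```azurecli-interactive
# Hypothetical sketch: register DigiCert as a certificate issuer in a vault.
az keyvault certificate issuer create \
  --vault-name myvault \
  --issuer-name DigicertCA \
  --provider DigiCert \
  --account-id "<CertCentral-account-ID>" \
  --password "<API-key>" \
  --organization-id "<organization-ID>"
```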
+ ### Azure PowerShell
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
Outbound connectivity to the internet can be enabled in the following ways in Az
| # | Method | Type of port allocation | Production-grade? | Rating |
| - | - | - | - | - |
| 1 | Using the frontend IP address(es) of a Load Balancer for outbound via Outbound rules | Static, explicit | Yes, but not at scale | OK |
-| 2 | Associating a NAT gateway to the subnet | Static, explicit | Yes | Best |
+| 2 | Associating a NAT gateway to the subnet | Dynamic, explicit | Yes | Best |
| 3 | Assigning a Public IP to the Virtual Machine | Static, explicit | Yes | OK |
| 4 | Using [default outbound access](../virtual-network/ip-services/default-outbound-access.md) | Implicit | No | Worst |
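For method 2 in the table, associating a NAT gateway is a one-time subnet change; a minimal Azure CLI sketch, with placeholder names:

```azurecli-interactive
# Hypothetical sketch: create a NAT gateway with a Standard public IP and
# attach it to a subnet so outbound flows use it. All names are placeholders.
az network public-ip create --resource-group myRG --name myNatIp --sku Standard
az network nat gateway create --resource-group myRG --name myNatGateway --public-ip-addresses myNatIp
az network vnet subnet update --resource-group myRG --vnet-name myVnet --name mySubnet --nat-gateway myNatGateway
```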
machine-learning Deploy With Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
Connect-AzAccount
```

This step needs to be repeated for each session. Once authenticated, your subscription information should be displayed.
-![Azure Account](/articles/marketplace/media/test-drive/azure-subscriptions.png)
+![Azure Account](/azure/marketplace/media/test-drive/azure-subscriptions.png)
Now that we have access to Azure, we can create the resource group.
Another way to retrieve tokens of existing workspace is to use the Invoke-AzReso
# List the primary and secondary tokens of all workspaces
Get-AzResource |? { $_.ResourceType -Like "*MachineLearning/workspaces*" } | ForEach-Object { Invoke-AzResourceAction -ResourceId $_.ResourceId -Action listworkspacekeys -Force }
```
-After the workspace is provisioned, you can also automate many Machine Learning Studio (classic) tasks using the [PowerShell Module for Machine Learning Studio (classic)](https://aka.ms/amlps).
+After the workspace is provisioned, you can also automate many Machine Learning Studio (classic) tasks using the [PowerShell Module for Machine Learning Studio (classic)](/previous-versions/azure/machine-learning/classic/powershell-module).
## Next steps
After the workspace is provisioned, you can also automate many Machine Learning
* Have a look at the [Azure Quickstart Templates Repository](https://github.com/Azure/azure-quickstart-templates). * See the [Resource Manager template reference help](/azure/templates/microsoft.machinelearning/allversions)
-<!--Link references-->
+<!--Link references-->
migrate Hyper V Migration Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/hyper-v-migration-architecture.md
You can limit the amount of bandwidth used to upload data to Azure on each Hyper
2. Run **C:\Program Files\Microsoft Azure Recovery Services Agent\bin\wabadmin.msc** to open the Windows Azure Backup MMC snap-in.
3. In the snap-in, select **Change Properties**.
4. In **Throttling**, select **Enable internet bandwidth usage throttling for backup operations**. Set the limits for work and non-work hours. Valid ranges are from 512 Kbps to 1,023 Mbps.
### Influence upload efficiency
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-data-in-replication.md
Data-in replication allows you to synchronize data from an external MySQL server
> [!Note] > GTID-based replication is currently not supported for Azure Database for MySQL Flexible Servers.<br>
-> Configuring Data-in replication for zone redundant high availability servers is not supported.
+> Configuring Data-in replication for zone-redundant high availability servers is not supported.
## When to use Data-in replication
For migration scenarios, use the [Azure Database Migration Service](https://azur
The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html).

### Data-in replication not supported on HA enabled servers
-Configuring Data-in replication for zone redundant high availability servers is not supported. On servers were HA is enabled the stored procedures for replication `mysql.az_replication_*` will not be available.
+Configuring Data-in replication for zone-redundant high availability servers isn't supported. On servers where HA is enabled, the stored procedures for replication `mysql.az_replication_*` won't be available. You can't use HA servers as the source server when you use binary log file position-based replication.
### Filtering
-Modifying the parameter `replicate_wild_ignore_table` which was used to create replication filter for tables, is currently not supported for Azure Database for MySQL -Flexible server.
+Modifying the parameter `replicate_wild_ignore_table`, which is used to create replication filters for tables, is currently not supported for Azure Database for MySQL - Flexible Server.
### Requirements
Modifying the parameter `replicate_wild_ignore_table` which was used to create r
- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might face slowness in replication. - The source server should use the MySQL InnoDB engine. - The user must have permissions to configure binary logging and create new users on the source server.
+- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, refer to how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds) (see the sketch after this list).
- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter. - Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306. - Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN). - In case of public access, ensure that the source server has a public IP address, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).-- In case of private access ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running.For more details see , [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)
+- In case of private access, ensure that the source server name can be resolved and is accessible from the VNet where the Azure Database for MySQL instance is running. For more details, see [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md).
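As a rough illustration of the binary-log retention requirement above, the following sketch sets `binlog_expire_logs_seconds` on a Flexible Server. It assumes the Az.MySql module's `Update-AzMySqlFlexibleServerConfiguration` cmdlet; the server and resource group names are placeholders.
```powershell
# Sketch: keep binary logs for 24 hours so a replica has time to apply them.
# Assumes Az.MySql is installed; names are hypothetical.
Update-AzMySqlFlexibleServerConfiguration `
    -Name binlog_expire_logs_seconds `
    -ResourceGroupName myresourcegroup `
    -ServerName mydemoserver `
    -Value 86400
```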
## Next steps
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
Last updated 08/10/2021
-# Azure Database for MySQL - Flexible Server
+# Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
In this article, we'll provide an overview and introduction to core concepts of
## Overview
-Azure Database for MySQL Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. Flexible servers provides better cost optimization controls with the ability to stop/start server and burstable compute tier, ideal for workloads that do not need full compute capacity continuously. Flexible Server also supports reserved instances allowing you to save up to 63% cost, ideal for production workloads with predictable compute capacity requirements. The service supports community version of MySQL 5.7 and 8.0. The service is generally available today in wide variety of [Azure regions](overview.md#azure-regions).
+Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% in cost, ideal for production workloads with predictable compute capacity requirements. The service supports the community versions of MySQL 5.7 and 8.0, and is generally available today in a wide variety of [Azure regions](overview.md#azure-regions).
The Flexible Server deployment option offers three compute tiers: Burstable, General Purpose, and Memory Optimized. Each tier offers different compute and memory capacity to support your database workloads. You can build your first app on a burstable tier for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Compute and Storage](concepts-compute-storage.md) for details. Flexible servers are best suited for-- Ease of deployments, simplified scaling and low database management overhead for functions like backups, high availability, security and monitoring
+- Ease of deployments, simplified scaling, and low database management overhead for functions like backups, high availability, security, and monitoring
- Application developments requiring community version of MySQL with better control and customizations-- Production workloads with same-zone, zone redundant high availability and managed maintenance windows
+- Production workloads with same-zone, zone-redundant high availability and managed maintenance windows
- Simplified development experience -- Enterprise grade security, compliance and privacy
+- Enterprise-grade security, compliance, and privacy
For latest updates on Flexible Server, refer to [What's new in Azure Database for MySQL - Flexible Server](whats-new.md).
You can take advantage of this offer to develop and deploy applications that use
## High availability within and across availability zones
-Azure Database for MySQL Flexible Server allows configuring high availability with automatic failover. The high availability solution is designed to ensure that committed data is never lost due to failures, and improve overall uptime for your application. When high availability is configured, flexible server automatically provisions and manages a standby replica. There are two high availability architectural models:
+Azure Database for MySQL Flexible Server allows configuring high availability with automatic failover. The high availability solution is designed to ensure that committed data is never lost due to failures, and improve overall uptime for your application. When high availability is configured, flexible server automatically provisions and manages a standby replica. There are two high availability architectural models:
- **Zone Redundant High Availability (HA):** This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and where latency across the availability zone is acceptable. Zone redundant HA is available in a [subset of Azure regions](overview.md#azure-regions) where the region supports multiple Availability Zones and Zone redundant Premium file shares are available.
See [Networking concepts](concepts-networking.md) to learn more.
## Adjust performance and scale within seconds
-The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full compute capacity continuously. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Flexible Server enables you to provision additional IOPS up to 20K IOPs above the complimentary IOPS limit independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
+The flexible server service is available in three SKU tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full compute capacity continuously. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Flexible Server enables you to provision additional IOPS up to 20K IOPS above the complimentary IOPS limit, independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
See [Compute and Storage concepts](concepts-compute-storage.md) to learn more.
For more information, see [Data-in replication concepts](concepts-data-in-replic
## Stop/Start server to optimize cost
-The flexible server service allows you to stop and start server on-demand to optimize cost. The compute tier billing is stopped immediately when the server is stopped. This can allow you to have significant cost savings during development, testing and for time-bound predictable production workloads. The server remains in stopped state for thirty days unless re-started sooner.
+The flexible server service allows you to stop and start the server on demand to optimize cost. Compute tier billing stops immediately when the server is stopped, which can yield significant cost savings during development and testing, and for time-bound, predictable production workloads. The server remains in a stopped state for 30 days unless restarted sooner.
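A minimal stop/start sketch, assuming the Az.MySql module's `Stop-AzMySqlFlexibleServer` and `Start-AzMySqlFlexibleServer` cmdlets; the names are placeholders:
```powershell
# Sketch: stop a flexible server while idle, then start it again later.
# Assumes Az.MySql; the server and resource group names are hypothetical.
Stop-AzMySqlFlexibleServer -Name mydemoserver -ResourceGroupName myresourcegroup

# ...later, when you're ready to resume development:
Start-AzMySqlFlexibleServer -Name mydemoserver -ResourceGroupName myresourcegroup
```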
For more information, see [Server concepts](concept-servers.md).
-## Enterprise grade security, compliance and privacy
+## Enterprise-grade security, compliance, and privacy
The flexible server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default).
The service encrypts data in-motion with transport layer security enforced by de
For more information, see [how to use encrypted connections to flexible servers](how-to-connect-tls-ssl.md).
-Flexible Server allows full private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (VNet) integration. Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers cannot be reached using public endpoints.
+Flexible Server allows full private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (VNet) integration. Servers in an Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers can't be reached using public endpoints.
For more information, see [Networking concepts](concepts-networking.md).
The flexible server service is equipped with built-in performance monitoring and
* The query details: view the query text as well as the history of execution with minimum, maximum, average, and standard deviation query time. * The resource utilizations (CPU, memory, and storage).
-In addition, you can use and integrate with community monitoring tools like [Percona Monitoring and Management with your MySQL Flexible Server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/monitor-azure-database-for-mysql-using-percona-monitoring-and/ba-p/2568545).
+In addition, you can use and integrate with community monitoring tools like [Percona Monitoring and Management with your MySQL Flexible Server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/monitor-azure-database-for-mysql-using-percona-monitoring-and/ba-p/2568545).
For more information, see [Monitoring concepts](concepts-monitoring.md).
For more information, see [Monitoring concepts](concepts-monitoring.md).
The service runs the community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing applications developed on the MySQL engine to Flexible Server. Migration to Flexible Server can be performed using the following options: ### Offline Migrations
-* Using Azure Data Migration Service when network bandwidth between source and Azure is good (for example: High speed ExpressRoute). Learn more with step by step instructions - [Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)
-* Use mydumper/myloader to take advantage of compression settings to efficiently move data over low speed networks (such as public internet). Learn more with step by step instructions [Migrate large databases to Azure Database for MySQL using mydumper/myloader](../../mysql/concepts-migrate-mydumper-myloader.md)
+* Using Azure Database Migration Service when network bandwidth between the source and Azure is good (for example, high-speed ExpressRoute). Learn more with step-by-step instructions - [Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)
+* Use mydumper/myloader to take advantage of compression settings to efficiently move data over low-speed networks (such as the public internet). Learn more with step-by-step instructions in [Migrate large databases to Azure Database for MySQL using mydumper/myloader](../../mysql/concepts-migrate-mydumper-myloader.md)
### Online or Minimal downtime migrations
-Use data-in replication with mydumper/myloader consistent backup/restore for initial seeding. Learn more with step by step instructions - [Tutorial: Minimal Downtime Migration of Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server](../../mysql/howto-migrate-single-flexible-minimum-downtime.md)
+Use data-in replication with mydumper/myloader consistent backup/restore for initial seeding. Learn more with step-by-step instructions - [Tutorial: Minimal Downtime Migration of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server](../../mysql/howto-migrate-single-flexible-minimum-downtime.md)
-To migrate from Azure Database for MySQL - Single Server to Flexible Server in 5 easy steps, refer to [this blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057).
+To migrate from Azure Database for MySQL - Single Server to Flexible Server in five easy steps, refer to [this blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057).
For more information, see [Select the right tools for migration to Azure Database for MySQL](../../mysql/how-to-decide-on-right-migration-tools.md)
One advantage of running your workload in Azure is its global reach. The flexibl
| Canada East | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
In addition, consider the following points of contact as appropriate:
## Next steps
-Now that you've read an introduction to Azure Database for MySQL - Single Server deployment mode, you're ready to:
+Now that you've read an introduction to the Azure Database for MySQL - Flexible Server deployment mode, you're ready to:
- Create your first server. - [Create an Azure Database for MySQL flexible server using Azure portal](quickstart-create-server-portal.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
Last updated 10/12/2021
-# What's new in Azure Database for MySQL - Flexible Server ?
+# What's new in Azure Database for MySQL - Flexible Server?
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] [Azure Database for MySQL - Flexible Server](./overview.md) is a deployment mode that's designed to provide more granular control and flexibility over database management functions and configuration settings than does the Single Server deployment mode. The service currently supports community version of MySQL 5.7 and 8.0. This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.+ ## January 2022+
+This release of Azure Database for MySQL - Flexible Server includes the following updates.
+ - **All Operations are disabled on Stopped Azure Database for MySQL - Flexible Server**
- Operations on servers that are in a [Stop](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server) state are disabled and show as inactive in the Azure portal. Operations that are not supported on stopped servers include changing the pricing tier, number of vCores, storage size or IOPS, backup retention day, server tag, the server password, server parameters, storage auto-grow, GEO backup, HA, and user identity.
-
+ Operations on servers that are in a [Stop](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server) state are disabled and show as inactive in the Azure portal. Operations that aren't supported on stopped servers include changing the pricing tier, number of vCores, storage size or IOPS, backup retention day, server tag, the server password, server parameters, storage autogrow, GEO backup, HA, and user identity.
+
+- **Availability in three additional Azure regions**
+
+ The public preview of Azure Database for MySQL - Flexible Server is now available in the following Azure regions:
+ - China East 2
+ - China North 2
+ - **Bug fixes**
-
+ The issue where the restart workflow got stuck on servers with HA and the geo-redundant backup option enabled is fixed. - **Known issues**
-
- When you are using ARM templates for provisioning or configuration changes for HA enabled servers, if a single deployment is made to enable/disable HA and along with other server properties like backup redundancy, storage etc. then deployment would fail. You can mitigate it by submit the deployment request separately for to enable\disable and configuration changes. You would not have issue with Portal or Azure cli as these are request already separated.
+
+ When you use ARM templates for provisioning or configuration changes on HA-enabled servers, the deployment fails if a single deployment both enables/disables HA and changes other server properties such as backup redundancy or storage. You can mitigate this by submitting separate deployment requests for the HA change and the other configuration changes. You won't have this issue with the Azure portal or the Azure CLI, because those requests are already separated.
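A hedged sketch of that workaround, splitting the change into two `New-AzResourceGroupDeployment` calls; the template file names are hypothetical:
```powershell
# Sketch of the workaround: submit the HA change and the other property
# changes as two separate deployments. Template names are hypothetical.
New-AzResourceGroupDeployment -ResourceGroupName myresourcegroup `
    -TemplateFile .\server-enable-ha.json        # only toggles HA

New-AzResourceGroupDeployment -ResourceGroupName myresourcegroup `
    -TemplateFile .\server-other-properties.json # storage, backup redundancy, etc.
```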
## November 2021+ - **General Availability of Azure Database for MySQL - Flexible Server**
-
+ Azure Database for MySQL - Flexible Server is now **generally available** in more than [30 Azure regions](overview.md) worldwide. - **View available full backups in Azure portal**
- A dedicated Backup and Restore blade is now available in the Azure portal. This blade lists the backups available within the server's retention period, effectively providing you with single pane view for managing a server's backups and consequent restores. You can use this blade to
- 1) View the completion timestamps for all available full backups within the serverΓÇÖs retention period
+ A dedicated Backup and Restore blade is now available in the Azure portal. This blade lists the backups available within the server's retention period, effectively providing you with a single-pane view for managing a server's backups and consequent restores. You can use this blade to
+ 1) View the completion timestamps for all available full backups within the server's retention period
2) Perform restore operations using these full backups - **Fastest restore points**
- With the fastest restore point option, you can restore a Flexible Server instance in the fastest time possible on a given day within the server's retention period. This restore operation will simply restore the full snapshot backup without requiring restore or recovery of logs. With fastest restore point, customers will see 3 options while performing point in time restores from Azure portal viz latest restore point, custom restore point and fastest restore point. [Learn more](concepts-backup-restore.md#point-in-time-restore)
+ With the fastest restore point option, you can restore a Flexible Server instance in the fastest time possible on a given day within the server's retention period. This restore operation simply restores the full snapshot backup without requiring restore or recovery of logs. With fastest restore point, customers see three options while performing point-in-time restores from the Azure portal: latest restore point, custom restore point, and fastest restore point. [Learn more](concepts-backup-restore.md#point-in-time-restore)
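A rough sketch of a point-in-time restore to a new server, assuming your installed Az.MySql version exposes `Restore-AzMySqlFlexibleServer` with pipeline input and a `-RestorePointInTime` parameter; all names and the timestamp are placeholders:
```powershell
# Sketch: restore a flexible server to a new server at a chosen point in time.
# Assumes Az.MySql; server names, resource group, and timestamp are hypothetical.
Get-AzMySqlFlexibleServer -Name mydemoserver -ResourceGroupName myresourcegroup |
    Restore-AzMySqlFlexibleServer `
        -Name mydemoserver-restored `
        -ResourceGroupName myresourcegroup `
        -RestorePointInTime (Get-Date).AddHours(-2)
```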
- **FAQ blade in Azure portal** The Backup and Restore blade will also include a section dedicated to listing your most frequently asked questions, together with answers. This should provide you with answers to most questions about backup directly within the Azure portal. In addition, selecting the question mark icon for FAQs on the top menu provides access to even more related detail. - **Restore a deleted Flexible server**
-
- The service now allows you to recover a deleted MySQL flexible server resource within 5 days from the time of server deletion. For a detailed guide on how to restore a deleted server, [refer documented steps](../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, we recommend administrators to leverage [management locks](../../azure-resource-manager/management/lock-resources.md).
+
+ The service now allows you to recover a deleted MySQL flexible server resource within five days from the time of server deletion. For a detailed guide on how to restore a deleted server, [refer to the documented steps](../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, we recommend that administrators use [management locks](../../azure-resource-manager/management/lock-resources.md).
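A minimal sketch of such a lock, assuming the Az.Resources module; the server and resource group names are placeholders:
```powershell
# Sketch: add a CanNotDelete lock so the server can't be deleted accidentally.
# Names are hypothetical placeholders.
New-AzResourceLock -LockName PreventMySqlDelete `
    -LockLevel CanNotDelete `
    -ResourceName mydemoserver `
    -ResourceType "Microsoft.DBforMySQL/flexibleServers" `
    -ResourceGroupName myresourcegroup
```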
- **Known issues**
- On servers where we have HA and Geo-redundant backup option enabled, we found an rare issue encountered by a race condition which blocks the restart of the standby server to finish. As a result of this issue, when you failover the HA enabled Azure database for MySQL - Flexible server MySQL Instance may get stuck in restarting state for a long time. The fix will be deployed to the production in the next deployment cycle.
+ On servers that have HA and the geo-redundant backup option enabled, we found a rare issue caused by a race condition, which blocks the restart of the standby server from finishing. As a result of this issue, when you fail over an HA-enabled Azure Database for MySQL - Flexible Server instance, it may get stuck in a restarting state for a long time. The fix will be deployed to production in the next deployment cycle.
## October 2021 - **Thread pools are now available for Azure Database for MySQL – Flexible Server**
-
+ Thread pools enhance the scalability of Azure Database for MySQL – Flexible Server. By using a thread pool, users can now optimize performance, achieve better throughput, and lower latency for highly concurrent workloads. [Learn more](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/achieve-up-to-a-50-performance-boost-in-azure-database-for-mysql/ba-p/2909691). - **Geo-redundant backup restore to geo-paired region for DR scenarios**
This article summarizes new releases and features in Azure Database for MySQL -
- **Availability Zones Selection when creating Read replicas**
- When creating Read replica you have an option to select the Availability Zones location of your choice. An Availability Zone is a high availability offering that protects your applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region. [Learn more](../flexible-server/concepts-read-replicas.md).
+ When creating a read replica, you have the option to select the availability zone location of your choice. An Availability Zone is a high availability offering that protects your applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region. [Learn more](../flexible-server/concepts-read-replicas.md).
- **Read replicas in Azure Database for MySQL - Flexible servers will no longer be available on Burstable SKUs**
-
- You will not be able to create new or maintain existing read replicas on the Burstable tier server. In the interest of providing a good query and development experience for Burstable SKU tiers, the support for creating and maintaining read replica for servers in the Burstable pricing tier will be discontinued.
- If you have an existing Azure Database for MySQL - Flexible Server with read replica enabled, you will have to scale up your server to either General Purpose or Memory Optimized pricing tiers or delete the read replica within 60 days. After the 60-day period, while you can continue to use the primary server for your read-write operations, replication to read replica servers will be stopped. For newly created servers, read replica option will be available only for the General Purpose and Memory Optimized pricing tiers.
+ You won't be able to create new, or maintain existing, read replicas on Burstable tier servers. In the interest of providing a good query and development experience for Burstable SKU tiers, the support for creating and maintaining read replicas for servers in the Burstable pricing tier will be discontinued.
+
+ If you have an existing Azure Database for MySQL - Flexible Server with read replica enabled, you'll have to scale up your server to either the General Purpose or Memory Optimized pricing tier, or delete the read replica, within 60 days. After the 60-day period, while you can continue to use the primary server for your read-write operations, replication to read replica servers will be stopped. For newly created servers, the read replica option will be available only for the General Purpose and Memory Optimized pricing tiers.
- **Monitoring Azure Database for MySQL - Flexible Server with Azure Monitor Workbooks**
-
+ Azure Database for MySQL - Flexible Server is now integrated with Azure Monitor Workbooks. Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. With this integration, the server links to workbooks and a few sample templates, which help you monitor the service at scale. These templates can be edited, customized to customer requirements, and pinned to the dashboard to create a focused and organized view of Azure resources. [Query Performance Insights](./tutorial-query-performance-insights.md), [Auditing](./tutorial-configure-audit.md), and Instance Overview templates are currently available. [Learn more](./concepts-workbooks.md). - **Prepay for Azure Database for MySQL compute resources with reserved instances**
This article summarizes new releases and features in Azure Database for MySQL -
Azure Database for MySQL - Flexible Server now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one- or three-year period to get a significant discount on the compute costs. You can also exchange a reservation from Azure Database for MySQL - Single Server with Flexible Server. [Learn more](../concept-reserved-pricing.md). - **Stopping the server for up to 30 days while the server is not in use**
-
- Azure Database for MySQL Flexible Server now gives you the ability to Stop the server for up to 30 days when not in use and Start the server within this time when you are ready to resume your development. This enables you to develop at your own pace and save development costs on the database servers by paying for the resources only when they are in use. This is important for dev-test workloads and when you are only using the server for part of the day. When you stop the server, all active connections will be dropped. When the server is in the Stopped state, the server's compute is not billed. However, storage continues to to be billed as the server's storage remains to ensure that data files are available when the server is started again. [Learn more](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server)
+
+ Azure Database for MySQL Flexible Server now gives you the ability to stop the server for up to 30 days when not in use and start the server within this time when you're ready to resume your development. This enables you to develop at your own pace and save development costs on the database servers by paying for the resources only when they are in use. This is important for dev-test workloads and when you're only using the server for part of the day. When you stop the server, all active connections are dropped. When the server is in the Stopped state, the server's compute isn't billed. However, storage continues to be billed because the server's storage remains, to ensure that data files are available when the server is started again. [Learn more](concept-servers.md#stopstart-an-azure-database-for-mysql-flexible-server)
- **Terraform Support for MySQL Flexible Server**
-
+ Terraform support for MySQL Flexible Server is now released with the [latest v2.81.0 release of azurerm](https://github.com/hashicorp/terraform-provider-azurerm/blob/v2.81.0/CHANGELOG.md). The detailed reference document for provisioning and managing a MySQL Flexible Server using Terraform can be found [here](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_flexible_server). Any bugs or known issues can be found or reported [here](https://github.com/hashicorp/terraform-provider-azurerm/issues). - **Static Parameter innodb_log_file_size is now Configurable**
- - [innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) can now be configured to any of these values: 256MB, 512MB, 1GB, or 2GB. Because it's a static parameter, it will require a server restart. If you have changed the parameter innodb_log_file_size from default, check if the value of "show global status like 'innodb_buffer_pool_pages_dirty'" stays at 0 for 30 seconds to avoid restart delay. See [Server parameters in Azure Database for MySQL](./concepts-server-parameters.md) to learn more.
+ - [innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) can now be configured to any of these values: 256 MB, 512 MB, 1 GB, or 2 GB. Because it's a static parameter, it requires a server restart. If you've changed the parameter innodb_log_file_size from its default, check that the value of "show global status like 'innodb_buffer_pool_pages_dirty'" stays at 0 for 30 seconds to avoid restart delay. See [Server parameters in Azure Database for MySQL](./concepts-server-parameters.md) to learn more.
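A hedged sketch of setting this static parameter and then restarting, assuming the Az.MySql cmdlets; names are placeholders, and the value is the byte count for 1 GB:
```powershell
# Sketch: set the static parameter, then restart for it to take effect.
# Assumes Az.MySql; names are hypothetical. Value is in bytes (1 GB here).
Update-AzMySqlFlexibleServerConfiguration `
    -Name innodb_log_file_size `
    -ResourceGroupName myresourcegroup `
    -ServerName mydemoserver `
    -Value 1073741824

Restart-AzMySqlFlexibleServer -Name mydemoserver -ResourceGroupName myresourcegroup
```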
- **Availability in two additional Azure regions** Azure Database for MySQL - Flexible Server is now available in the following Azure regions:
- - US West 3
- - North Central US
+ - US West 3
+ - North Central US
[Learn more](overview.md#azure-regions). - **Known Issues**
- - When a primary Azure region is down, one cannot create geo-redundant servers in it's geo-paired region as storage cannot be provisioned in the primary Azure region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region.
-
+ - When a primary Azure region is down, one can't create geo-redundant servers in its geo-paired region as storage can't be provisioned in the primary Azure region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region.
+ ## September 2021
This release of Azure Database for MySQL - Flexible Server includes the followin
- Right after Zone-Redundant high availability server failover, clients fail to connect to the server if using SSL with ssl_mode VERIFY_IDENTITY. This issue can be mitigated by using ssl_mode as VERIFY_CA. - Unable to create Same-Zone High availability server in the following regions: Central India, East Asia, Korea Central, South Africa North, Switzerland North. - In a rare scenario and after HA failover, the primary server will be in read_only mode. Resolve the issue by updating the "read_only" value from the server parameters blade to OFF.
- - After successfully scaling Compute in the Compute+Storage blade, IOPS is reset to the SKU default. Customers can work around the issue by rescaling IOPs in the Compute+Storage blade to desired value (previously set) post the compute deployment and consequent IOPS reset.
+ - After successfully scaling Compute in the Compute+Storage blade, IOPS are reset to the SKU default. Customers can work around the issue by rescaling IOPS in the Compute+Storage blade to the desired value (previously set) after the compute deployment and consequent IOPS reset.
## July 2021
This release of Azure Database for MySQL - Flexible Server includes the followin
- **Zone redundant HA available in UK South and Japan East region**
- Azure Database for MySQL - Flexible Server now offers zone redundant high availability in two additional regions: UK South and Japan East. [Learn more](overview.md#azure-regions).
+ Azure Database for MySQL - Flexible Server now offers zone-redundant high availability in two additional regions: UK South and Japan East. [Learn more](overview.md#azure-regions).
- **Known issues**
This release of Azure Database for MySQL - Flexible Server includes the followin
- **Known issues**
- - SSL\TLS 1.2 is enforced and cannot be disabled. (No workarounds)
+ - SSL\TLS 1.2 is enforced and can't be disabled. (No workarounds)
- There are intermittent provisioning failures for servers provisioned in a VNet. The workaround is to retry the server provisioning until it succeeds. ## February 2021
This release of Azure Database for MySQL - Flexible Server includes the followin
- **Known issues**
- The performance of Azure Database for MySQL – Flexible Server degrades with private access virtual network isolation (No workaround).
+ The performance of Azure Database for MySQL – Flexible Server degrades with private access virtual network isolation (No workaround).
## January 2021
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-extensions.md
There is a tradeoff between the query execution information pg_stat_statements p
## dblink and postgres_fdw [dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. When using these extensions to connect between Azure Database for PostgreSQL servers, this can be done by setting "Allow access to Azure services" to ON. This is also needed if you want to use the extensions to loop back to the same server. The "Allow access to Azure services" setting can be found in the Azure portal page for the Postgres server, under Connection Security. Turning "Allow access to Azure services" ON puts all Azure IPs on the allow list.
-Currently, outbound connections from Azure Database for PostgreSQL are not supported, except for connections to other Azure Database for PostgreSQL servers in the same region.
+> [!NOTE]
+> Currently, outbound connections from Azure Database for PostgreSQL via foreign data wrapper extensions such as postgres_fdw are not supported, except for connections to other Azure Database for PostgreSQL servers in the same Azure region.
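The "Allow access to Azure services" toggle corresponds to a firewall rule whose start and end addresses are both 0.0.0.0. A hedged PowerShell sketch, assuming the Az.PostgreSql module and placeholder names:
```powershell
# Sketch: the "Allow access to Azure services" toggle is equivalent to a
# firewall rule spanning 0.0.0.0 - 0.0.0.0.
# Assumes Az.PostgreSql; server and resource group names are hypothetical.
New-AzPostgreSqlFirewallRule -Name AllowAllAzureIps `
    -ResourceGroupName myresourcegroup `
    -ServerName mydemoserver `
    -StartIPAddress 0.0.0.0 `
    -EndIPAddress 0.0.0.0
```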
## uuid If you are planning to use `uuid_generate_v4()` from the [uuid-ossp extension](https://www.postgresql.org/docs/current/uuid-ossp.html), consider comparing with `gen_random_uuid()` from the [pgcrypto extension](https://www.postgresql.org/docs/current/pgcrypto.html) for performance benefits.
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/connect-rust.md
Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server' description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.--++ ms.devlang: rust
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
Previously updated : 01/14/2021 Last updated : 01/25/2022 ++ # Azure Private Endpoint DNS configuration
The following diagram shows the DNS resolution for both networks, on-prem
:::image type="content" source="media/private-endpoint-dns/hybrid-scenario.png" alt-text="Hybrid scenario":::
+## Private DNS zone group
+
+If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group is a strong association between the private DNS zone and the private endpoint that helps auto-update the private DNS zone when there's an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated.
+
+Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it to the DNS zone). With the DNS zone group, there's no need to write any additional CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group will be deleted as well.
+
+A common scenario for DNS zone groups is a hub-and-spoke topology, where the private DNS zones are created only once in the hub and the spokes register to them, rather than creating different zones in each spoke.
+
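A hedged sketch of creating a zone group for an existing private endpoint, assuming the Az.Network and Az.PrivateDns modules; all names and the zone are hypothetical:
```powershell
# Sketch: associate a private endpoint with a private DNS zone via a zone group.
# Assumes Az.Network/Az.PrivateDns; all names are hypothetical.
$zone = Get-AzPrivateDnsZone -ResourceGroupName myresourcegroup `
    -Name "privatelink.database.windows.net"

$config = New-AzPrivateDnsZoneConfig -Name "privatelink-database-windows-net" `
    -PrivateDnsZoneId $zone.ResourceId

New-AzPrivateDnsZoneGroup -ResourceGroupName myresourcegroup `
    -PrivateEndpointName myprivateendpoint `
    -Name "default" `
    -PrivateDnsZoneConfig $config
```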
+> [!NOTE]
+> Each DNS zone group can support up to 5 DNS zones.
+
+> [!NOTE]
+> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
+ ## Next steps - [Learn about private endpoints](private-endpoint-overview.md)
security Ddos Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ddos-best-practices.md
-- Title: Designing resilient solutions with Azure DDoS Protection
-description: Learn about how you can use logging data to gain deep insights about your application.
-------- Previously updated : 10/18/2018---
-# Azure DDoS Protection - Designing resilient solutions
-
-This article is for IT decision makers and security personnel. It expects that you're familiar with Azure, networking, and security.
-DDoS is a type of attack that tries to exhaust application resources. The goal is to affect the application's availability and its ability to handle legitimate requests. Attacks are becoming more sophisticated and larger in size and impact. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet. Designing for distributed denial of service (DDoS) resiliency requires planning and designing for a variety of failure modes. Azure provides continuous protection against DDoS attacks. This protection is integrated into the Azure platform by default and at no extra cost.
-
-In addition to the core DDoS protection in the platform, [Azure DDoS Protection Standard](https://azure.microsoft.com/services/ddos-protection/) provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes.
-
-![The role of Azure DDoS Protection in protecting customers and a virtual network from an attacker](./media/ddos-best-practices/image1.png)
--
-## Fundamental best practices
-
-The following sections give prescriptive guidance to build DDoS-resilient services on Azure.
-
-### Design for security
-
-Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications can have bugs that allow a relatively low volume of requests to use an inordinate amount of resources, resulting in a service outage.
-
-To help protect a service running on Microsoft Azure, you should have a good understanding of your application architecture and focus on the [five pillars of software quality](/azure/architecture/guide/pillars).
-You should know typical traffic volumes, the connectivity model between the application and other applications, and the service endpoints that are exposed to the public internet.
-
-Ensuring that an application is resilient enough to handle a denial of service that's targeted at the application itself is most important. Security and privacy are built into the Azure platform, beginning with the [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/default.aspx). The SDL addresses security at every development phase and ensures that Azure is continually updated to make it even more secure.
-
-### Design for scalability
-
-Scalability is how well a system can handle increased load. Design your applications to [scale horizontally](/azure/architecture/guide/design-principles/scale-out) to meet the demand of an amplified load, specifically in the event of a DDoS attack. If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances makes your system more resilient and more scalable.
-
-For [Azure App Service](../../app-service/overview.md), select an [App Service plan](../../app-service/overview-hosting-plans.md) that offers multiple instances. For Azure Cloud Services, configure each of your roles to use [multiple instances](../../cloud-services/cloud-services-choose-me.md).
-For [Azure Virtual Machines](../../virtual-machines/index.yml), ensure that your virtual machine (VM) architecture includes more than one VM and that each VM is
-included in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md). We recommend using [virtual machine scale sets](../../virtual-machine-scale-sets/overview.md)
-for autoscaling capabilities.
-
-### Defense in depth
-
-The idea behind defense in depth is to manage risk by using diverse defensive strategies. Layering security defenses in an application reduces the chance of a successful attack. We recommend that you implement secure designs for your applications by using the built-in capabilities of the Azure platform.
-
-For example, the risk of attack increases with the size (*surface area*) of the application. You can reduce the surface area by using an approval list to close down the exposed IP address space and listening ports that are not needed on the load balancers ([Azure Load Balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and [Azure Application Gateway](../../application-gateway/application-gateway-create-probe-portal.md)). [Network security groups (NSGs)](../../virtual-network/network-security-groups-overview.md) are another way to reduce the attack surface.
-You can use [service tags](../../virtual-network/network-security-groups-overview.md#service-tags) and [application security groups](../../virtual-network/network-security-groups-overview.md#application-security-groups) to minimize complexity for creating security rules and configuring network security, as a natural extension of an applicationΓÇÖs structure.
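As a hedged illustration of reducing the surface area with an NSG rule built on a service tag (assuming the Az.Network module; rule names, priorities, and ports are placeholders):
```powershell
# Sketch: allow HTTPS from the internet only; other inbound traffic falls
# through to the default deny rules. Names and values are hypothetical.
$allowHttps = New-AzNetworkSecurityRuleConfig -Name "allow-https-inbound" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

New-AzNetworkSecurityGroup -Name "web-nsg" `
    -ResourceGroupName myresourcegroup -Location eastus `
    -SecurityRules $allowHttps
```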
-
-You should deploy Azure services in a [virtual network](../../virtual-network/virtual-networks-overview.md) whenever possible. This practice allows service resources to communicate through private IP addresses. Azure service traffic from a virtual network uses public IP addresses as source IP addresses by default. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) will switch service traffic to use virtual network private addresses as the source IP addresses when they're accessing the Azure service from a virtual network.
-
-We often see customers' on-premises resources getting attacked along with their resources in Azure. If you're connecting an on-premises environment to Azure, we recommend that you minimize exposure of on-premises resources to the public internet. You can use the scale and advanced DDoS protection capabilities of Azure by deploying your well-known public entities in Azure. Because these publicly accessible entities are often a target for DDoS attacks, putting them in Azure reduces the impact on your on-premises resources.
-
-## Azure offerings for DDoS protection
-
-Azure has two DDoS service offerings that provide protection from network attacks (Layer 3 and 4): DDoS Protection Basic and DDoS Protection Standard.
-
-### DDoS Protection Basic
-
-Basic protection is integrated into Azure by default at no additional cost. The scale and capacity of the globally deployed Azure network provides defense against common network-layer attacks through always-on traffic monitoring and real-time mitigation. DDoS Protection Basic requires no user configuration or application changes. DDoS Protection Basic helps protect all Azure services, including PaaS services like Azure DNS.
-
-![Map representation of the Azure network, with the text "Global DDoS mitigation presence" and "Leading DDoS mitigation capacity"](./media/ddos-best-practices/image3.png)
-
-Basic DDoS protection in Azure consists of both software and hardware components. A software control plane decides when, where, and what type of traffic should be steered through hardware appliances that analyze and remove attack traffic. The control plane makes this decision based on an infrastructure-wide DDoS Protection *policy*. This policy is statically set and universally applied to all Azure customers.
-
-For example, the DDoS Protection policy specifies at what traffic volume the protection should be *triggered.* (That is, the tenant's traffic should be routed through scrubbing appliances.) The policy then specifies how the scrubbing appliances should *mitigate* the attack.
-
-The Azure DDoS Protection Basic service is targeted at protection of the infrastructure and protection of the Azure platform. It mitigates traffic when it exceeds a rate that is so significant that it might affect multiple customers in a multitenant environment. It doesn't provide alerting or per-customer customized policies.
-
-### DDoS Protection Standard
-
-Standard protection provides enhanced DDoS mitigation features. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. It has several advantages over the basic service, including logging, alerting, and telemetry. The following sections outline the key features of the Azure DDoS Protection Standard service.
-
-#### Adaptive real time tuning
-
-The Azure DDoS Protection Basic service helps protect customers and prevent impacts to other customers. For example, if a service is provisioned for a typical volume of legitimate incoming traffic that's smaller than the *trigger rate* of the infrastructure-wide DDoS Protection policy, a DDoS attack on that customer's resources might go unnoticed. More generally, the complexity of recent attacks (for example, multi-vector DDoS) and the application-specific behaviors of tenants call for per-customer, customized protection policies. The service accomplishes this customization by using two insights:
--- Automatic learning of per-customer (per-IP) traffic patterns for Layer 3 and 4.--- Minimizing false positives, considering that the scale of Azure allows it to absorb a significant amount of traffic.-
-![Diagram of how DDoS Protection Standard works, with "Policy Generation" circled](./media/ddos-best-practices/image5.png)
-
-#### DDoS Protection telemetry, monitoring, and alerting
-
-DDoS Protection Standard exposes rich telemetry via [Azure Monitor](../../azure-monitor/overview.md) for the duration of a DDoS attack. You can configure alerts for any of the Azure Monitor metrics that DDoS Protection uses. You can integrate logging with Splunk (Azure Event Hubs), Azure Monitor logs, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
-
-##### DDoS mitigation policies
-
-In the Azure portal, select **Monitor** > **Metrics**. In the **Metrics** pane, select the resource group, select a resource type of **Public IP Address**, and select your Azure public IP address. DDoS metrics are visible in the **Available metrics** pane.
-
-DDoS Protection Standard applies three autotuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the metric **Inbound packets to trigger DDoS mitigation**.
-
-![Available metrics and metrics chart](./media/ddos-best-practices/image7.png)
-
-The policy thresholds are autoconfigured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
-
-##### Metric for an IP address under DDoS attack
-
-If the public IP address is under attack, the value for the metric **Under DDoS attack or not** changes to 1 as DDoS Protection performs mitigation on the attack traffic.
-
-!["Under DDoS attack or not" metric and chart](./media/ddos-best-practices/image8.png)
-
-We recommend configuring an alert on this metric. You'll then be notified when there's an active DDoS mitigation performed on your public IP address.
-
-For more information, see [Manage Azure DDoS Protection Standard using the Azure portal](../../ddos-protection/manage-ddos-protection.md).
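As a hedged sketch of such an alert, assuming the Az.Monitor cmdlets, that `IfUnderDDoSAttack` is the metric ID behind "Under DDoS attack or not", and placeholder resource IDs:
```powershell
# Sketch: alert when a public IP address is under active DDoS mitigation.
# Assumes Az.Monitor; resource IDs are hypothetical, and IfUnderDDoSAttack
# is the assumed metric ID for "Under DDoS attack or not".
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "IfUnderDDoSAttack" `
    -TimeAggregation Maximum -Operator GreaterThanOrEqual -Threshold 1

Add-AzMetricAlertRuleV2 -Name "UnderDDoSAttackAlert" `
    -ResourceGroupName myresourcegroup `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/publicIPAddresses/mypublicip" `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 5) `
    -Severity 1 `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/myresourcegroup/providers/microsoft.insights/actionGroups/myactiongroup"
```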
-
-#### Web application firewall for resource attacks
-
-Specific to resource attacks at the application layer, you should configure a web application firewall (WAF) to help secure web applications. A WAF inspects inbound web traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure provides [WAF as a feature of Application Gateway](../../web-application-firewall/ag/ag-overview.md) for centralized protection of your web applications from common exploits and vulnerabilities. There are other WAF offerings available from Azure partners that might be more suitable for your needs via the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WAF&page=1).
-
-Even web application firewalls are susceptible to volumetric and state exhaustion attacks. We strongly recommend enabling DDoS Protection Standard on the WAF virtual network to help protect from volumetric and protocol attacks. For more information, see the [DDoS Protection reference architectures](#ddos-protection-reference-architectures) section.
-
-### Protection planning
-
-Planning and preparation are crucial to understand how a system will perform during a DDoS attack. Designing an incident management response plan is part of this effort.
-
-If you have DDoS Protection Standard, make sure that it's enabled on the virtual network of internet-facing endpoints. Configuring DDoS alerts helps you constantly watch for any potential attacks on your infrastructure.
-
-Monitor your applications independently. Understand the normal behavior of an application. Prepare to act if the application is not behaving as expected during a DDoS attack.
-
-#### Testing through simulations
-
-It's a good practice to test your assumptions about how your services will respond to an attack by conducting periodic simulations. During testing, validate that your services or applications continue to function as expected and there's no disruption to the user experience. Identify gaps from both a technology and process standpoint and incorporate them in the DDoS response strategy. We recommend that you perform such tests in staging environments or during non-peak hours to minimize the impact to the production environment.
-
-We have partnered with [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud) to build an interface where Azure customers can generate traffic against DDoS Protection-enabled public endpoints for simulations. You can use the [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud) simulation to:
--- Validate how Azure DDoS Protection helps protect your Azure resources from DDoS attacks.--- Optimize your incident response process while under DDoS attack.--- Document DDoS compliance.--- Train your network security teams.-
-Cybersecurity requires constant innovation in defense. Azure DDoS Standard protection is a state-of-the-art offering with an effective solution to mitigate increasingly complex DDoS attacks.
-
-## Components of a DDoS response strategy
-
-A DDoS attack that targets Azure resources usually requires minimal intervention from a user standpoint. Still, incorporating DDoS mitigation as part of an incident response strategy helps minimize the impact to business continuity.
-
-### Microsoft threat intelligence
-
-Microsoft has an extensive threat intelligence network. This network uses the collective knowledge of an extended security community that supports Microsoft online services, Microsoft partners, and relationships within the internet security community.
-
-As a critical infrastructure provider, Microsoft receives early warnings about threats. Microsoft gathers threat intelligence from its online services and from its global customer base. Microsoft incorporates all of this threat intelligence back into the Azure DDoS Protection products.
-
-Also, the Microsoft Digital Crimes Unit (DCU) performs offensive strategies against botnets. Botnets are a common source of command and control for DDoS attacks.
-
-### Risk evaluation of your Azure resources
-
-It's imperative to understand the scope of your risk from a DDoS attack on an ongoing basis. Periodically ask yourself:
-
-- What new publicly available Azure resources need protection?
-- Is there a single point of failure in the service?
-- How can services be isolated to limit the impact of an attack while still making services available to valid customers?
-- Are there virtual networks where DDoS Protection Standard should be enabled but isn't?
-- Are my services active/active with failover across multiple regions?
-
-### Customer DDoS response team
-
-Creating a DDoS response team is a key step in responding to an attack quickly and effectively. Identify contacts in your organization who will oversee both planning and execution. This DDoS response team should thoroughly understand the Azure DDoS Protection Standard service. Make sure that the team can identify and mitigate an attack by coordinating with internal and external customers, including the Microsoft support team.
-
-For your DDoS response team, we recommend that you use simulation exercises as a normal part of your service availability and continuity planning. These exercises should include scale testing.
-
-### Alerts during an attack
-
-Azure DDoS Protection Standard identifies and mitigates DDoS attacks without any user intervention. To get notified when there's an active mitigation for a protected public IP, you can [configure an alert](../../ddos-protection/manage-ddos-protection.md) on the metric **Under DDoS attack or not**. You can choose to create alerts for the other DDoS metrics to understand the scale of the attack, traffic being dropped, and other details.
-
-#### When to contact Microsoft support
-
-- During a DDoS attack, you find that the performance of the protected resource is severely degraded, or the resource is not available.
-- You think the DDoS Protection service is not behaving as expected.
-
- The DDoS Protection service starts mitigation only if the metric value **Policy to trigger DDoS mitigation (TCP/TCP SYN/UDP)** is lower than the traffic received on the protected public IP resource.
-
-- You're planning a viral event that will significantly increase your network traffic.
-- An actor has threatened to launch a DDoS attack against your resources.
-- You need to allow list an IP or IP range from Azure DDoS Protection Standard. A common scenario is to allow list an IP if the traffic is routed from an external cloud WAF to Azure.
-
-For attacks that have a critical business impact, create a severity-A [support ticket](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-### Post-attack steps
-
-It's always a good strategy to do a postmortem after an attack and adjust the DDoS response strategy as needed. Things to consider:
-
-- Was there any disruption to the service or user experience due to lack of scalable architecture?
-- Which applications or services suffered the most?
-- How effective was the DDoS response strategy, and how can it be improved?
-
-If you suspect you're under a DDoS attack, escalate through your normal Azure Support channels.
-
-## DDoS Protection reference architectures
-
-DDoS Protection Standard is designed [for services that are deployed in a virtual network](../../virtual-network/virtual-network-for-azure-services.md). For other services, the default DDoS Protection Basic service applies. The following reference architectures are arranged by scenarios, with architecture patterns grouped together.
-
-### Virtual machine (Windows/Linux) workloads
-
-#### Application running on load-balanced VMs
-
-This reference architecture shows a set of proven practices for running multiple Windows VMs in a scale set behind a load balancer, to improve availability and scalability. This architecture can be used for any stateless workload, such as a web server.
-
-![Diagram of the reference architecture for an application running on load-balanced VMs](./media/ddos-best-practices/image9.png)
-
-In this architecture, a workload is distributed across multiple VM instances. There is a single public IP address, and internet traffic is distributed to the VMs through a load balancer. DDoS Protection Standard is enabled on the virtual network of the Azure (internet) load balancer that has the public IP associated with it.
-
-The load balancer distributes incoming internet requests to the VM instances. Virtual machine scale sets allow the number of VMs to be scaled in or out manually, or automatically based on predefined rules. This is important if the resource is under DDoS attack. For more information on this reference architecture, see
-[this article](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm).
-
-#### Application running on Windows N-tier
-
-There are many ways to implement an N-tier architecture. The following diagram shows a typical three-tier web application. This architecture builds on the article [Run load-balanced VMs for scalability and availability](/azure/architecture/reference-architectures/virtual-machines-windows/multi-vm). The web and business tiers use load-balanced VMs.
-
-![Diagram of the reference architecture for an application running on Windows N-tier](./media/ddos-best-practices/image10.png)
-
-In this architecture, DDoS Protection Standard is enabled on the virtual network. All public IPs in the virtual network get DDoS protection for Layer 3 and 4. For Layer 7 protection, deploy Application Gateway in the WAF SKU. For more information on this reference architecture, see
-[this article](/azure/architecture/reference-architectures/virtual-machines-windows/n-tier).
-
-#### PaaS web application
-
-This reference architecture shows an Azure App Service application running in a single region. It demonstrates a set of proven practices for a web application that uses [Azure App Service](https://azure.microsoft.com/documentation/services/app-service/) and [Azure SQL Database](https://azure.microsoft.com/documentation/services/sql-database/).
-A standby region is set up for failover scenarios.
-
-![Diagram of the reference architecture for a PaaS web application](./media/ddos-best-practices/image11.png)
-
-Azure Traffic Manager routes incoming requests to Application Gateway in one of the regions. During normal operations, it routes requests to Application Gateway in the active region. If that region becomes unavailable, Traffic Manager fails over to Application Gateway in the standby region.
-
-All traffic from the internet destined to the web application is routed to the [Application Gateway public IP address](../../application-gateway/application-gateway-web-app-overview.md) via Traffic Manager. In this scenario, the app service (web app) itself is not directly externally facing and is protected by Application Gateway.
-
-We recommend that you configure the Application Gateway WAF SKU (prevent mode) to help protect against Layer 7 (HTTP/HTTPS/WebSocket) attacks. Additionally, web apps are configured to [accept only traffic from the Application Gateway](https://azure.microsoft.com/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/) IP address.
-
-For more information about this reference architecture, see [this article](/azure/architecture/reference-architectures/app-service-web-app/multi-region).
-
-### Mitigation for non-web PaaS services
-
-#### HDInsight on Azure
-
-This reference architecture shows configuring DDoS Protection Standard for an [Azure HDInsight cluster](../../hdinsight/index.yml). Make sure that the HDInsight cluster is linked to a virtual network and that DDoS Protection is enabled on the virtual network.
-
-!["HDInsight" and "Advanced settings" panes, with virtual network settings](./media/ddos-best-practices/image12.png)
-
-![Selection for enabling DDoS Protection](./media/ddos-best-practices/image13.png)
-
-In this architecture, traffic destined to the HDInsight cluster from the internet is routed to the public IP associated with the HDInsight gateway load balancer. The gateway load balancer then sends the traffic to the head nodes or the worker nodes directly. Because DDoS Protection Standard is enabled on the HDInsight virtual network, all public IPs in the virtual network get DDoS protection for Layer 3 and 4. This reference architecture can be combined with the N-Tier and multi-region reference architectures.
-
-For more information on this reference architecture, see the [Extend Azure HDInsight using an Azure Virtual Network](../../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-documentation.
--
-> [!NOTE]
-> Neither Azure App Service Environment for Power Apps nor API management in a virtual network with a public IP is natively supported.
-
-## Next steps
-
-* [Shared responsibility in the cloud](shared-responsibility.md)
-* [Azure DDoS Protection product page](https://azure.microsoft.com/services/ddos-protection/)
-* [Azure DDoS Protection documentation](../../ddos-protection/ddos-protection-overview.md)
sentinel Configure Fusion Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/configure-fusion-rules.md
Title: Configure multistage attack detection (Fusion) rules in Microsoft Sentine
description: Create and configure attack detection rules based on Fusion technology in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/30/2022
This detection is enabled by default in Microsoft Sentinel. To check or change i
- Review **entity mapping** for these scheduled rules. Use the [entity mapping configuration section](map-data-fields-to-entities.md) to map parameters from your query results to Microsoft Sentinel-recognized entities. Because Fusion correlates alerts based on entities (such as *user account* or *IP address*), its ML algorithms cannot perform alert matching without the entity information.
- - Review the **tactics** in your analytics rule details. The Fusion ML algorithm uses [MITRE ATT&CK](https://attack.mitre.org/) tactic information for detecting multi-stage attacks, and the tactics you label the analytics rules with will show up in the resulting incidents. Fusion calculations may be affected if incoming alerts are missing tactic information.
+ - Review the **tactics and techniques** in your analytics rule details. The Fusion ML algorithm uses [MITRE ATT&CK](https://attack.mitre.org/) information for detecting multi-stage attacks, and the tactics and techniques you label the analytics rules with will show up in the resulting incidents. Fusion calculations may be affected if incoming alerts are missing tactic information.
1. Fusion can also detect scenario-based threats using rules based on the following **scheduled analytics rule templates**, which can be found in the **Rule templates** tab in the **Analytics** blade. To enable these detections, select the rule name in the templates gallery, and click **Create rule** in the details pane.
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-aws.md
Permissions policies that must be applied to the [Microsoft Sentinel role you cr
1. **Verify that messages are being read from the SQS queue.**
- Check the "Number of Messages Received" and "Number of Messages Deleted" widgets in the queue dashboard. If there are no notifications under messages deleted," then check health messages. It's possible that some permissions are missing. Check your IAM configurations.
+ Check the "Number of Messages Received" and "Number of Messages Deleted" widgets in the queue dashboard. If there are no notifications under "messages deleted," then check the health messages. It's possible that some permissions are missing. Check your IAM configurations.
+
+For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md).
+ # [CloudTrail connector (legacy)](#tab/ct)
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/create-codeless-connector.md
+
+ Title: Create a codeless connector for Microsoft Sentinel
+description: Learn how to create a codeless connector in Microsoft Sentinel using the Codeless Connector Platform (CCP).
+++ Last updated : 01/24/2022+
+# Create a codeless connector for Microsoft Sentinel (Public preview)
+
+> [!IMPORTANT]
+> The Codeless Connector Platform (CCP) is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+The Codeless Connector Platform (CCP) provides partners, advanced users, and developers with the ability to create custom connectors, connect them, and ingest data to Microsoft Sentinel. Connectors created via the CCP can be deployed via API, an ARM template, or as a solution in the Microsoft Sentinel [content hub](sentinel-solutions.md).
+
+Connectors created using CCP are fully SaaS, without any requirements for service installations, and also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
+
+Create your data connector by defining a JSON configuration file. The file includes settings that determine how the data connector page looks and works in Microsoft Sentinel, and polling settings that define how the connection between Microsoft Sentinel and your data source works.
+
+**Use the following steps to create your CCP connector and connect to your data source from Microsoft Sentinel**:
+
+> [!div class="checklist"]
+> * Configure the connector's user interface
+> * Configure the connector's polling settings
+> * Deploy your connector to your Microsoft Sentinel workspace
+> * Connect Microsoft Sentinel to your data source and start ingesting data
+
+This article describes the syntax used in the CCP JSON configuration file and procedures for deploying your connector via API, an ARM template, or a Microsoft Sentinel solution.
+
+## Prerequisites
+
+Before building a connector, we recommend that you learn and understand how your data source behaves and exactly how Microsoft Sentinel will need to connect.
+
+For example, you'll need to understand the types of authentication, pagination, and API endpoints that are required for successful connections.
+
+## Create a connector JSON configuration file
+
+To create your custom, CCP connector, create a JSON file with the following basic syntax:
+
+```json
+{
+ "kind": "<name>",
+ "properties": {
+ "connectorUiConfig": {...
+ },
+ "pollingConfig": {...
+ }
+ }
+}
+```
+
+Fill in each of the following areas with additional properties that define how your connector connects Microsoft Sentinel to your data source, and how it's displayed in the Azure portal:
+
+- `connectorUiConfig`. Defines the visual elements and text displayed on the data connector page in Microsoft Sentinel. For more information, see [Configure your connector's user interface](#configure-your-connectors-user-interface).
+
+- `pollingConfig`. Defines how Microsoft Sentinel collects data from your data source. For more information, see [Configure your connector's polling settings](#configure-your-connectors-polling-settings).
+
+## Configure your connector's user interface
+
+This section describes the configuration for how the user interface on the data connector page appears in Microsoft Sentinel.
+
+Use the [properties supported](#ui-props) for the `connectorUiConfig` area of the [JSON configuration file](#create-a-connector-json-configuration-file) to configure the user interface displayed for your data connector in the Azure portal.
+
+The following image shows a sample data connector page, highlighted with numbers that correspond to configurable areas of the user interface:
++
+1. **Title**. The title displayed for your data connector.
+1. **Icon**. The icon displayed for your data connector.
+1. **Status**. Describes whether or not your data connector is connected to Microsoft Sentinel.
+1. **Data charts**. Displays relevant queries and the amount of ingested data in the last two weeks.
+1. **Instructions tab**. Includes a **Prerequisites** section, with a list of minimal validations before the user can enable the connector, and an **Instructions** section, with a list of instructions to guide the user in enabling the connector. This section can include text, buttons, forms, tables, and other common widgets to simplify the process.
+1. **Next steps tab**. Includes useful information for understanding how to find data in the event logs, such as sample queries.
+
+<a name="ui-props"></a>The `connectorUiConfig` section of the configuration file includes the following properties:
++
+|Name |Type |Description |
+||||
+|**id** | GUID | A distinct ID for the connector. |
+|**title** | String |Title displayed in the data connector page. |
+|**publisher** | String | Your company name. |
+|**descriptionMarkdown** | String, in markdown | A description for the connector. |
+|**additionalRequirementBanner** | String, in markdown | Text for the **Prerequisites** section of the **Instructions** tab. |
+| **graphQueriesTableName** | String | Defines the name of the Log Analytics table from which data for your queries is pulled. <br><br>The table name can be any string, but must end in `_CL`. For example: `TableName_CL` |
+|**graphQueries** | [GraphQuery[]](#graphquery) | Queries that present data ingestion over the last two weeks in the **Data charts** pane.<br><br>Provide either one query for all of the data connector's data types, or a different query for each data type. |
+|**sampleQueries** | [SampleQuery[]](#samplequery) | Sample queries for the customer to understand how to find the data in the event log, to be displayed in the **Next steps** tab. |
+|**dataTypes** | [DataTypes[]](#datatypes) | A list of all data types for your connector, and a query to fetch the time of the last event for each data type. |
+|**connectivityCriteria** | [ConnectivityCriteria[]](#connectivitycriteria) |An object that defines how to verify if the connector is correctly defined. |
+|**availability** | `{`<br>` status: Number,`<br>` isPreview: Boolean`<br>`}` | Defines the connector's availability: <br><br>- **status**: `1` indicates that the connector is generally available to customers. <br>- **isPreview**: `true` indicates that the connector is not yet generally available. |
+|**permissions** | [RequiredConnectorPermissions[]](#requiredconnectorpermissions) | Lists the permissions required to enable or disable the connector. |
+|**instructionsSteps** | [InstructionStep[]](#instructionstep) | An array of widget parts that explain how to install the connector, displayed on the **Instructions** tab. |
+|**metadata** | [Metadata](#metadata) | ARM template metadata, for deploying the connector as an ARM template. |
+| | | |
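+
+For illustration, the following is a minimal, hypothetical `connectorUiConfig` sketch that combines the properties above. The GUID, table name, and vendor strings are placeholders rather than values from a real connector, property names and casing follow the tables in this article, and the nested types are described in the sections that follow:
+
+```json
+"connectorUiConfig": {
+    "id": "00000000-0000-0000-0000-000000000000",
+    "title": "Example Vendor (Preview)",
+    "publisher": "Example Vendor",
+    "descriptionMarkdown": "Ingests audit events from the Example Vendor API.",
+    "graphQueriesTableName": "ExampleVendor_CL",
+    "graphQueries": [
+        {
+            "metricName": "Total data received",
+            "legend": "{{graphQueriesTableName}}",
+            "baseQuery": "{{graphQueriesTableName}}"
+        }
+    ],
+    "sampleQueries": [
+        {
+            "Description": "All events, most recent first",
+            "Query": "{{graphQueriesTableName}}\n | sort by TimeGenerated desc"
+        }
+    ],
+    "dataTypes": [
+        {
+            "dataTypeName": "{{graphQueriesTableName}}",
+            "lastDataReceivedQuery": "{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)"
+        }
+    ],
+    "connectivityCriteria": [
+        { "type": "SentinelKindsV2" }
+    ],
+    "availability": { "status": 1, "isPreview": true }
+}
+```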
+
+### GraphQuery
+
+Defines a query that presents data ingestion over the last two weeks in the **Data charts** pane.
+
+Provide either one query for all of the data connector's data types, or a different query for each data type.
+
+|Name |Type |Description |
+||||
+|**metricName** | String | A meaningful name for your graph. <br><br>Example: `Total data received` |
+|**legend** | String | The string that appears in the legend to the right of the chart, including a variable reference.<br><br>Example: `{{graphQueriesTableName}}` |
+|**baseQuery** | String | The query that filters for relevant events, including a variable reference. <br><br>Example: `TableName | where ProviderName == "myprovider"` or `{{graphQueriesTableName}}` |
+| | | |
++
+### SampleQuery
+
+|Name |Type |Description |
+||||
+| **Description** | String | A meaningful description for the sample query.<br><br>Example: `Top 10 vulnerabilities detected` |
+| **Query** | String | Sample query used to fetch the data type's data. <br><br>Example: `{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10` |
+| | | |
+
+### DataTypes
+
+|Name |Type |Description |
+||||
+| **dataTypeName** | String | A meaningful description for the `lastDataReceivedQuery` query, including support for a variable. <br><br>Example: `{{graphQueriesTableName}}` |
+| **lastDataReceivedQuery** | String | A query that returns one row, and indicates the last time data was received, or no data if there is no relevant data. <br><br>Example: `{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)` |
+| | | |
++
+### ConnectivityCriteria
+
+|Name |Type |Description |
+||||
+| **type** | ENUM | Always define this value as `SentinelKindsV2`. |
+| **value** | deprecated |N/A |
+| | | |
+
+### Availability
+
+|Name |Type |Description |
+||||
+| **status** | Number | Determines whether or not the data connector is available in your workspace. <br><br>Example: `1`|
+| **isPreview** | Boolean |Determines whether the data connector is supported as Preview or not. <br><br>Example: `false` |
+| | | |
+
+### RequiredConnectorPermissions
+
+|Name |Type |Description |
+||||
+| **tenant** | ENUM | Defines the required permissions, as one or more of the following values: `GlobalAdmin`, `SecurityAdmin`, `SecurityReader`, `InformationProtection` <br><br>Example: The **tenant** value displays in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
+| **licenses** | ENUM | Defines the required licenses, as one of the following values: `OfficeIRM`,`OfficeATP`, `Office365`, `AadP1P2`, `Mcas`, `Aatp`, `Mdatp`, `Mtp`, `IoT` <br><br>Example: The **licenses** value displays in Microsoft Sentinel as: **License: Required Azure AD Premium P2**|
+| **customs** | String | Describes any custom permissions required for your data connection, in the following syntax: <br>`{`<br>` name:string,`<br>` description:string`<br>`}` <br><br>Example: The **customs** value displays in Microsoft Sentinel as: **Subscription: Contributor permissions to the subscription of your IoT Hub.** |
+| **resourceProvider** | [ResourceProviderPermissions](#resourceproviderpermissions) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel as: <br>**Workspace: write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
+| | | |
+
+#### ResourceProviderPermissions
+
+|Name |Type |Description |
+||||
+| **provider** | ENUM | Describes the resource provider, with one of the following values: <br>- `Microsoft.OperationalInsights/workspaces` <br>- `Microsoft.OperationalInsights/solutions`<br>- `Microsoft.OperationalInsights/workspaces/datasources`<br>- `microsoft.aadiam/diagnosticSettings`<br>- `Microsoft.OperationalInsights/workspaces/sharedKeys`<br>- `Microsoft.Authorization/policyAssignments` |
+| **providerDisplayName** | String | The display name for the resource provider, as shown in the permissions list on the connector page. |
+| **permissionsDisplayText** | String | Display text for *Read*, *Write*, or *Read and Write* permissions. |
+| **requiredPermissions** | [RequiredPermissionSet](#requiredpermissionset) | Describes the minimum permissions required for the connector as one of the following values: `read`, `write`, `delete`, `action` |
+| **Scope** | ENUM | Describes the scope of the data connector, as one of the following values: `Subscription`, `ResourceGroup`, `Workspace` |
+| | | |
+
+### RequiredPermissionSet
+
+|Name |Type |Description |
+||||
+|**read** | boolean | Determines whether *read* permissions are required. |
+| **write** | boolean | Determines whether *write* permissions are required. |
+| **delete** | boolean | Determines whether *delete* permissions are required. |
+| **action** | boolean | Determines whether *action* permissions are required. |
+| | | |
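+
+Assembled from the three permission tables above, a hypothetical `permissions` value might look like the following sketch. The exact nesting (arrays versus single objects) is an assumption for illustration, and the vendor-specific strings are placeholders:
+
+```json
+"permissions": {
+    "resourceProvider": [
+        {
+            "provider": "Microsoft.OperationalInsights/workspaces",
+            "providerDisplayName": "Workspace",
+            "permissionsDisplayText": "read and write permissions are required.",
+            "Scope": "Workspace",
+            "requiredPermissions": { "read": true, "write": true, "delete": false, "action": false }
+        }
+    ],
+    "customs": [
+        {
+            "name": "Example Vendor API token",
+            "description": "An API token for the Example Vendor audit log API is required."
+        }
+    ]
+}
+```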
+
+### Metadata
+
+This section provides metadata used when you're [deploying your data connector as an ARM template](#deploy-your-connector-in-microsoft-sentinel-and-start-ingesting-data).
+
+|Name |Type |Description |
+||||
+| **id** | String | Defines a GUID for your ARM template. |
+| **kind** | String | Defines the kind of ARM template you're creating. Always use `dataConnector`. |
+| **source** | String |Describes your data source, using the following syntax: <br>`{`<br>` kind:string`<br>` name:string`<br>`}`|
+| **author** | String | Describes the data connector author, using the following syntax: <br>`{`<br>` name:string`<br>`}`|
+| **support** | String | Describes the support provided for the data connector, using the following syntax: <br> `{`<br>` "tier": string,`<br>` "name": string,`<br>`"email": string,`<br> `"link": string`<br>` }`|
+| | | |
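+
+A hypothetical `metadata` sketch, with a placeholder GUID, names, and URLs:
+
+```json
+"metadata": {
+    "id": "11111111-1111-1111-1111-111111111111",
+    "kind": "dataConnector",
+    "source": { "kind": "solution", "name": "Example Vendor Solution" },
+    "author": { "name": "Example Vendor" },
+    "support": {
+        "tier": "developer",
+        "name": "Example Vendor",
+        "email": "support@example.com",
+        "link": "https://example.com/support"
+    }
+}
+```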
+
+### Instructions
+
+This section provides parameters that define the set of instructions that appear on your data connector page in Microsoft Sentinel.
++
+|Name |Type |Description |
+||||
+| **title** | String | Optional. Defines a title for your instructions. |
+| **description** | String | Optional. Defines a meaningful description for your instructions. |
+| **innerSteps** | [InstructionStep](#instructionstep) | Optional. Defines an array of inner instruction steps. |
+| **bottomBorder** | Boolean | When `true`, adds a bottom border to the instructions area on the connector page in Microsoft Sentinel. |
+| **isComingSoon** | Boolean | When `true`, adds a **Coming soon** title on the connector page in Microsoft Sentinel. |
+| | | |
++
+#### CopyableLabel
+
+Shows a field with a button on the right to copy the field value. For example:
++
+**Sample code**:
+
+```json
+instructions: [
+    new CopyableLabelInstructionModel({
+        fillWith: ["MicrosoftAwsAccount"],
+        label: "Microsoft Account ID",
+    }),
+    new CopyableLabelInstructionModel({
+        fillWith: ["workspaceId"],
+        label: "External ID (WorkspaceId)",
+    }),
+]
+```
+
+**Parameters**: `CopyableLabelInstructionParameters`
+
+|Name |Type |Description |
+||||
+|**fillWith** | ENUM | Optional. Array of environment variables used to populate a placeholder. Separate multiple placeholders with commas. For example: `{0},{1}` <br><br>Supported values: `workspaceId`, `workspaceName`, `primaryKey`, `MicrosoftAwsAccount`, `subscriptionId` |
+|**label** | String | Defines the text for the label above a text box. |
+|**value** | String | Defines the value to present in the text box, supports placeholders. |
+|**rows** | Rows | Optional. Defines the rows in the user interface area. By default, set to **1**. |
+|**wideLabel** |Boolean | Optional. Determines a wide label for long strings. By default, set to `false`. |
+| | | |
++
+#### InfoMessage
+
+Defines an inline information message. For example:
++
+In contrast, the following image shows a *non*-inline information message:
++
+**Sample code**:
+
+```json
+instructions: [
+    new InfoMessageInstructionModel({
+        text: "Microsoft Defender for Endpoint… ",
+        visible: true,
+        inline: true,
+    }),
+    new InfoMessageInstructionModel({
+        text: "In order to export… ",
+        visible: true,
+        inline: false,
+    }),
+]
+```
+**Parameters**: `InfoMessageInstructionModelParameters`
++
+|Name |Type |Description |
+||||
+|**text** | String | Defines the text to display in the message. |
+|**visible** | Boolean | Determines whether the message is displayed. |
+|**inline** | Boolean | Determines how the information message is displayed. <br><br>- `true`: (Recommended) Shows the information message embedded in the instructions. <br>- `false`: Adds a blue background. |
+| | | |
+++
+#### LinkInstructionModel
+
+Displays a link to other pages in the Azure portal, as a button or a link. For example:
+++
+**Sample code**:
+
+```json
+new LinkInstructionModel({ linkType: "OpenPolicyAssignment", policyDefinitionGuid: <GUID>, assignMode: "Policy" })
+
+new LinkInstructionModel({ linkType: LinkType.OpenAzureActivityLog })
+```
+
+**Parameters**: `LinkInstructionModelParameters`
+
+|Name |Type |Description |
+||||
+|**linkType** | ENUM | Determines the link type, as one of the following values: <br><br>`InstallAgentOnWindowsVirtualMachine`<br>`InstallAgentOnWindowsNonAzure`<br> `InstallAgentOnLinuxVirtualMachine`<br> `InstallAgentOnLinuxNonAzure`<br>`OpenSyslogSettings`<br>`OpenCustomLogsSettings`<br>`OpenWaf`<br> `OpenAzureFirewall` `OpenMicrosoftAzureMonitoring` <br> `OpenFrontDoors` <br>`OpenCdnProfile` <br>`AutomaticDeploymentCEF` <br> `OpenAzureInformationProtection` <br> `OpenAzureActivityLog` <br> `OpenIotPricingModel` <br> `OpenPolicyAssignment` <br> `OpenAllAssignmentsBlade` <br> `OpenCreateDataCollectionRule` |
+|**policyDefinitionGuid** | String | Optional. For policy-based connectors, defines the GUID of the built-in policy definition. |
+|**assignMode** | ENUM | Optional. For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` |
+|**dataCollectionRuleType** | ENUM | Optional. For DCR-based connectors, defines the type of data collection rule type as one of the following: `SecurityEvent`, `ForwardEvent` |
+| | | |
+
+To define an inline link using markdown, use the following example as a guide:
+
+```markdown
+<value>Follow the instructions found on article [Connect Microsoft Sentinel to your threat intelligence platform]({0}). Once the application is created you will need to record the Tenant ID, Client ID and Client Secret.</value>
+```
+
+The code sample listed above shows an inline link that looks like the following image:
++
+To define a link as an ARM template, use the following example as a guide:
+
+```markdown
+ <value>1. Click the **Deploy to Azure** button below.
+[![Deploy To Azure]({0})]({1})</value>
+```
+
+The code sample listed above shows a link button that looks like the following image:
+
+:::image type="content" source="media/create-codeless-connector/sample-markdown-link-button.png" alt-text="Screenshot of the link button created by the earlier sample markdown.":::
+
+#### InstructionStep
+
+Displays a group of instructions, either as an expandable accordion or as a non-expandable set, separate from the main instructions section.
+
+For example:
++
+**Parameters**: `InstructionStepsGroupModelParameters`
+
+|Name |Type |Description |
+||||
+|**title** | String | Defines the title for the instruction step. |
+|**instructionSteps** | [InstructionStep[]](#instructionstep) | Optional. Defines an array of inner instruction steps. |
+|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. |
+|**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. |
+|**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
+| | | |
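+
+Putting this together, a hypothetical `instructionsSteps` entry using the `CopyableLabel` widget described earlier might look like the following sketch; the widget `type` string and parameter nesting are assumptions for illustration, modeled on the placeholder example later in this article:
+
+```json
+"instructionsSteps": [
+    {
+        "title": "Connect the Example Vendor API",
+        "description": "Copy the workspace ID below into the Example Vendor console.",
+        "instructions": [
+            {
+                "parameters": {
+                    "fillWith": ["workspaceId"],
+                    "label": "Workspace ID"
+                },
+                "type": "CopyableLabel"
+            }
+        ]
+    }
+]
+```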
++++
+## Configure your connector's polling settings
+
+This section describes the configuration for how data is polled from your data source for a codeless data connector.
+
+The following code shows the syntax of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file.
+
+```rest
+"pollingConfig": {
+    "auth": {
+ "authType": <string>,
+ },
+ "request": {…
+ },
+ "response": {…
+ },
+ "paging": {…
+ }
+ }
+```
+
+The `pollingConfig` section includes the following properties:
+
+|Name |Type |Description |
+||||
+|**id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in a Cosmos DB |
+|**auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
+|<a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `Session` |
+|**request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
+|**response** | Nested JSON | Mandatory. Describes the response object and nested message returned from the API when polling the data. For more information, see [response configuration](#response-configuration). |
+|**paging** | Nested JSON. | Optional. Describes the pagination payload when polling the data. For more information, see [paging configuration](#paging-configuration). |
+| | | |
+
+For more information, see [Sample pollingConfig code](#sample-pollingconfig-code).
+
+### auth configuration
+
+The `auth` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters, depending on the type defined in the [authType](#authtype) element:
+
+#### APIKey authType parameters
+
+|Name |Type |Description |
+||||
+|**APIKeyName** |String | Optional. Defines the name of your API key, as one of the following values: <br><br>- `XAuthToken` <br>- `Authorization` |
+|**IsAPIKeyInPostPayload** |Boolean | Determines where your API key is defined. <br><br>True: API key is defined in the POST request payload <br>False: API key is defined in the header |
+|**APIKeyIdentifier** | String | Optional. Defines the name of the identifier for the API key. <br><br>For example, where the authorization is defined as `"Authorization": "token <secret>"`, this parameter is defined as: `{"APIKeyIdentifier": "token"}` |
+| | | |
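+
+For example, an `APIKey` configuration matching the sample later in this article might look like this sketch:
+
+```json
+"auth": {
+    "authType": "APIKey",
+    "APIKeyName": "Authorization",
+    "APIKeyIdentifier": "token",
+    "IsAPIKeyInPostPayload": false
+}
+```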
+
+#### Session authType parameters
+
+|Name |Type |Description |
+||||
+|**QueryParameters** | String | Optional. A list of query parameters, in the serialized `dictionary<string, string>` format: <br><br>`{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+|**IsPostPayloadJson** | Boolean | Optional. Determines whether the query parameters are in JSON format. |
+|**Headers** | String. | Optional. Defines the header used when calling the endpoint to get the session ID, and when calling the endpoint API. <br><br> Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+|**SessionTimeoutInMinutes** | String | Optional. Defines a session timeout, in minutes. |
+|**SessionIdName** | String | Optional. Defines an ID name for the session. |
+|**SessionLoginRequestUri** | String | Optional. Defines a session login request URI. |
+| | | |
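+
+A hypothetical `Session` configuration sketch, with a placeholder login URI and session ID name:
+
+```json
+"auth": {
+    "authType": "Session",
+    "SessionIdName": "session_id",
+    "SessionLoginRequestUri": "https://api.example.com/login",
+    "SessionTimeoutInMinutes": "30",
+    "IsPostPayloadJson": true
+}
+```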
++++
+### request configuration
+
+The `request` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
+
+|Name |Type |Description |
+||||
+|**apiEndpoint** | String | Mandatory. Defines the endpoint to pull data from. |
+|**httpMethod** |String | Mandatory. Defines the API method: `GET` or `POST` |
+|**queryTimeFormat** | String | Mandatory. Defines the format of the query time: either a date-time format string, or *UnixTimestamp* or *UnixTimestampInMills*, indicating the query start and end times as a Unix timestamp. |
+|**startTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query start time. |
+|**endTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query end time. |
+|**queryTimeIntervalAttributeName** | String. | Optional. Defines the name of the attribute that defines the query time interval. |
+|**queryTimeIntervalDelimiter** | String | Optional. Defines the query time interval delimiter. |
+|**queryWindowInMin** | String | Optional. Defines the available query window, in minutes. <br><br>Minimum value: `5` |
+|**queryParameters** | String | Optional. Defines the parameters passed in the query in the [`eventsJsonPaths`](#eventsjsonpaths) path. <br><br>Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+|**queryParametersTemplate** | String object | Optional. Defines the query parameters template to use when passing query parameters in advanced scenarios. <br><br>For example: `"queryParametersTemplate": "{'cid': 1234567, 'cmd': 'reporting', 'format': 'siem', 'data': { 'from': '{_QueryWindowStartTime}', 'to': '{_QueryWindowEndTime}'}, '{_APIKeyName}': '{_APIKey}'}"` |
+|**isPostPayloadJson** | Boolean | Optional. Determines whether the POST payload is in JSON format. |
+|**rateLimitQPS** | Double | Optional. Defines the number of calls or queries allowed in a second. |
+|**timeoutInSeconds** | Integer | Optional. Defines the request timeout, in seconds. |
+|**retryCount** | Integer | Optional. Defines the number of request retries to try if needed. |
+|**headers** | String | Optional. Defines the request header value, in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+| | | |
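+
+For example, a hypothetical `request` section for a POST-based API that takes Unix-timestamp `from`/`to` parameters might look like this sketch; the endpoint and attribute names are placeholders, not a real API:
+
+```json
+"request": {
+    "apiEndpoint": "https://api.example.com/v1/audit",
+    "httpMethod": "POST",
+    "queryTimeFormat": "UnixTimestamp",
+    "startTimeAttributeName": "from",
+    "endTimeAttributeName": "to",
+    "queryWindowInMin": 10,
+    "isPostPayloadJson": true,
+    "retryCount": 3,
+    "timeoutInSeconds": 60
+}
+```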
++
+### response configuration
+
+The `response` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
+
+|Name |Type |Description |
+||||
+| <a name="eventsjsonpaths"></a> **eventsJsonPaths** | List of strings | Mandatory. Defines the path to the message in the response JSON. <br><br>A JSON path expression specifies a path to an element, or a set of elements, in a JSON structure. |
+| **successStatusJsonPath** | String | Optional. Defines the path to the success message in the response JSON. |
+| **successStatusValue** | String | Optional. Defines the path to the success message value in the response JSON. |
+| **isGzipCompressed** | Boolean | Optional. Determines whether the response is compressed in a gzip file. |
+| | | |
+
+The following code shows an example of the [eventsJsonPaths](#eventsjsonpaths) value for a top-level message:
+
+```json
+"eventsJsonPaths": [
+ "$"
+ ]
+```
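+
+If events are nested deeper in the response, for example under a hypothetical `items` array, the path would point at that array instead:
+
+```json
+"eventsJsonPaths": [
+    "$.items"
+    ]
+```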
++
+### paging configuration
+
+The `paging` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
+
+|Name |Type |Description |
+||||
+| **pagingType** | String | Mandatory. Determines the paging type to use in results, as one of the following values: `None`, `LinkHeader`, `NextPageToken`, `NextPageUrl`, `Offset` |
+| **linkHeaderTokenJsonPath** | String | Optional. Defines the JSON path to the link header in the response JSON, if the `LinkHeader` isn't defined in the response header. |
+| **nextPageTokenJsonPath** | String | Optional. Defines the path to a next page token JSON. |
+| **hasNextFlagJsonPath** |String | Optional. Defines the path to the `HasNextPage` flag attribute. |
+| **nextPageTokenResponseHeader** | String | Optional. Defines the *next page* token header name in the response. |
+| **nextPageParaName** | String | Optional. Determines the *next page* name in the request. |
+| **nextPageRequestHeader** | String | Optional. Determines the *next page* header name in the request. |
+| **nextPageUrl** | String | Optional. Determines the *next page* URL, if it's different from the initial request URL. |
+| **nextPageUrlQueryParameters** | String | Optional. Determines the *next page* URL's query parameters if it's different from the initial request's URL. <br><br>Define the string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <val>, '<attr_name>': <val>... }` |
+| **offsetParaName** | String | Optional. Defines the name of the offset parameter. |
+| **pageSizeParaName** | String | Optional. Defines the name of the page size parameter. |
+| **PageSize** | Integer | Defines the paging size. |
+| | | |
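+
+For example, a hypothetical offset-based paging sketch, assuming the API takes `offset` and `limit` query parameters:
+
+```json
+"paging": {
+    "pagingType": "Offset",
+    "offsetParaName": "offset",
+    "pageSizeParaName": "limit",
+    "PageSize": 100
+}
+```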
++
+### Sample pollingConfig code
+
+The following code shows an example of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file:
+
+```rest
+"pollingConfig": {
+ "auth": {
+ "authType": "APIKey",
+ "APIKeyIdentifier": "token",
+ "APIKeyName": "Authorization"
+ },
+ "request": {
+ "apiEndpoint": "https://api.github.com/../{{placeHolder1}}/audit-log",
+ "rateLimitQPS": 50,
+ "queryWindowInMin": 15,
+ "httpMethod": "Get",
+ "queryTimeFormat": "yyyy-MM-ddTHH:mm:ssZ",
+ "retryCount": 2,
+ "timeoutInSeconds": 60,
+ "headers": {
+ "Accept": "application/json",
+ "User-Agent": "Scuba"
+ },
+ "queryParameters": {
+ "phrase": "created:{_QueryWindowStartTime}..{_QueryWindowEndTime}"
+ }
+ },
+ "paging": {
+ "pagingType": "LinkHeader",
+ "pageSizeParaName": "per_page"
+ },
+ "response": {
+ "eventsJsonPaths": [
+ "$"
+ ]
+ }
+}
+```
+
+## Add placeholders to your connector's JSON configuration file
+
+You may want to create a JSON configuration file template, with placeholder parameters, to reuse across multiple connectors, or even to create a connector with data that you don't currently have.
+
+To create placeholder parameters, define an additional array named `userRequestPlaceHoldersInput` in the [Instructions](#instructions) section of your [CCP JSON configuration](#create-a-connector-json-configuration-file) file, using the following syntax:
+
+```json
+"instructions": [
+ {
+ "parameters": {
+ "enable": "true",
+ "userRequestPlaceHoldersInput": [
+ {
+ "displayText": "Organization Name",
+ "requestObjectKey": "apiEndpoint",
+ "placeHolderName": "{{placeHolder1}}"
+ }
+ ]
+ },
+ "type": "APIKey"
+ }
+ ]
+```
+
+The `userRequestPlaceHoldersInput` parameter includes the following attributes:
+
+|Name |Type |Description |
+||||
+|**DisplayText** | String | Defines the text box display value, which is displayed to the user when connecting. |
+|**RequestObjectKey** |String | Defines the ID used to identify where in the request section of the API call to replace the placeholder value with a user value. <br><br>If you don't use this attribute, use the `PollingKeyPaths` attribute instead. |
+|**PollingKeyPaths** |String |Defines an array of [JsonPath](https://www.npmjs.com/package/JSONPath) objects that directs the API call to anywhere in the template, to replace a placeholder value with a user value.<br><br>**Example**: `"pollingKeyPaths":["$.request.queryParameters.test1"]` <br><br>If you don't use this attribute, use the `RequestObjectKey` attribute instead. |
+|**PlaceHolderName** |String |Defines the name of the placeholder parameter in the JSON template file. This can be any unique value, such as `{{placeHolder}}`. |
+| | |
++
+## Deploy your connector in Microsoft Sentinel and start ingesting data
+
+After creating your [JSON configuration file](#create-a-connector-json-configuration-file), including both the [user interface](#configure-your-connectors-user-interface) and [polling](#configure-your-connectors-polling-settings) configuration, deploy your connector in your Microsoft Sentinel workspace.
+
+1. Use one of the following options to deploy your data connector.
+
+ > [!TIP]
+ > The advantage of deploying via an Azure Resource Manager (ARM) template is that several values are built into the template, and you don't need to define them manually in an API call.
+ >
+
+ # [Deploy via ARM template](#tab/deploy-via-arm-template)
+
+ Use a JSON configuration file to create an ARM template for deploying your connector. To ensure that your data connector gets deployed to the correct workspace, either define the target workspace in the ARM template when you create your JSON file, or select the workspace when you deploy the ARM template.
+
+ 1. Prepare an [ARM template JSON file](/azure/templates/microsoft.securityinsights/dataconnectors) for your connector. For example, see the following ARM template JSON files:
+
+ - Data connector in the [Slack solution](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SlackAudit/Data%20Connectors/SlackNativePollerConnector/azuredeploy_Slack_native_poller_connector.json)
+ - [Atlassian Jira Audit data connector](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AtlassianJiraAudit/JiraNativePollerConnector/azuredeploy_Jira_native_poller_connector.json)
+
+ 1. In the Azure portal, search for **Deploy a custom template**.
+
+ 1. On the **Custom deployment** page, select **Build your own template in the editor** > **Load file**. Browse to and select your local ARM template, and then save your changes.
+
+ 1. Select your subscription and resource group, and then enter the Log Analytics workspace where you want to deploy your custom connector.
+
+ 1. Select **Review + create** to deploy your custom connector to Microsoft Sentinel.
+
+ 1. In Microsoft Sentinel, go to the **Data connectors** page and search for your new connector. Then configure it to start ingesting data.
+
+ For more information, see [Deploy a local template](/azure/azure-resource-manager/templates/deployment-tutorial-local-template?tabs=azure-powershell) in the Azure Resource Manager documentation.
+
+ # [Deploy via API](#tab/deploy-via-api)
+
+ 1. Authenticate to the Azure API. For more information, see [Getting started with REST](/rest/api/azure/).
+
+ 1. Invoke an `UPSERT` API call to Microsoft Sentinel to deploy your new connector. Your data connector is deployed to your Microsoft Sentinel workspace, and is available on the **Data connectors** page.
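+
+    For example, the call might look like the following sketch, which is modeled on the disconnect URL shown later in this article; the exact route and API version here are assumptions:
+
+    ```rest
+    PUT https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}?api-version=2021-03-01-preview
+    ```
+
+    Pass your connector's [JSON configuration file](#create-a-connector-json-configuration-file) as the body of the request.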
+
+
+
+1. Configure your data connector to connect your data source and start ingesting data into Microsoft Sentinel. You can connect to your data source either via the portal, as with out-of-the-box data connectors, or via API.
+
+ When you use the Azure portal to connect, user data is sent automatically. When you connect via API, you'll need to send the relevant authentication parameters in the API call.
+
+ # [Connect via the Azure portal](#tab/connect-via-the-azure-portal)
+
+ In your Microsoft Sentinel data connector page, follow the instructions you've provided to connect to your data connector.
+
+ The data connector page in Microsoft Sentinel is controlled by the [InstructionStep](#instructionstep) configuration in the `connectorUiConfig` element of the [CCP JSON configuration](#create-a-connector-json-configuration-file) file. If you have issues with the user interface connection, make sure that you have the correct configuration for your authentication type.
+
+ # [Connect via API](#tab/connect-via-api)
+
+ Use the `CONNECT` endpoint to send a PUT method and pass the JSON configuration directly in the body of the message. For more information, see [auth configuration](#auth-configuration).
+
+ Use the following API attributes, depending on the [authType](#authtype) defined. For each `authType` parameter, all listed attributes are mandatory and are string values.
+
+ |authType |Attributes |
+ |||
+ |**Basic** | Define: <br>- `kind` as `Basic` <br>- `userName` as your username, in quotes <br>- `password` as your password, in quotes |
+ |**APIKey** |Define: <br>- `kind` as `APIKey` <br>- `APIKey` as your full API key string, in quotes|
+ | | |
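+
+    For example, assuming a flat body shape for the authentication attributes (the exact request envelope isn't shown in this article), an `APIKey` connection body might look like:
+
+    ```json
+    {
+        "kind": "APIKey",
+        "APIKey": "<your full API key>"
+    }
+    ```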
+
+ If you're using a [template configuration file with placeholder data](#add-placeholders-to-your-connectors-json-configuration-file), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
+
+ ```rest
+ "requestConfigUserInputValues": [
+ {
+ "displayText": "<A display name>",
+ "placeHolderName": "<A placeholder name>",
+ "placeHolderValue": "<A value for the placeholder>",
+ "pollingKeyPaths": "<Array of items to use in place of the placeHolderName>"
+ }
+ ]
+ ```
+
+
+
+1. In Microsoft Sentinel, go to the **Logs** page and verify that you see the logs from your data source flowing in to your workspace.
+
+If you don't see data flowing into Microsoft Sentinel, check your data source documentation and troubleshooting resources, check the configuration details, and check the connectivity. For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md).
+
+### Disconnect your connector
+
+If you no longer need your connector's data, disconnect the connector to stop the data flow.
+
+Use one of the following methods:
+
+- **Azure portal**: In your Microsoft Sentinel data connector page, select **Disconnect**.
+
+- **API**: Use the DISCONNECT API to send a PUT call with an empty body to the following URL:
+
+ ```rest
+ https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}/disconnect?api-version=2021-03-01-preview
+ ```
+
+## Next steps
+
+If you haven't yet, share your new codeless data connector with the Microsoft Sentinel community! Create a solution for your data connector and share it in the Microsoft Sentinel Marketplace.
+
+For more information, see [About Microsoft Sentinel solutions](sentinel-solutions.md).
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/create-custom-connector.md
description: Learn about available resources for creating custom connectors for
Previously updated : 11/09/2021 Last updated : 11/21/2021
The following table compares essential details about each method for creating cu
|Method description |Capability | Serverless |Complexity | |||||
+| **[Codeless Connector Platform (CCP)](#connect-with-the-codeless-connector-platform)** <br>Best for less technical audiences to create SaaS connectors using a configuration file instead of advanced development. | Supports all capabilities available with the code. | Yes | Low; simple, codeless development |
|**[Log Analytics Agent](#connect-with-the-log-analytics-agent)** <br>Best for collecting files from on-premises and IaaS sources | File collection only | No |Low | |**[Logstash](#connect-with-logstash)** <br>Best for on-premises and IaaS sources, any source for which a plugin is available, and organizations already familiar with Logstash | Available plugins, plus custom plugin, capabilities provide significant flexibility. | No; requires a VM or VM cluster to run | Low; supports many scenarios with plugins | |**[Logic Apps](#connect-with-logic-apps)** <br>High cost; avoid for high-volume data <br>Best for low-volume cloud sources | Codeless programming allows for limited flexibility, without support for implementing algorithms.<br><br> If no available action already supports your requirements, creating a custom action may add complexity. | Yes | Low; simple, codeless development | |**[PowerShell](#connect-with-powershell)** <br>Best for prototyping and periodic file uploads | Direct support for file collection. <br><br>PowerShell can be used to collect more sources, but will require coding and configuring the script as a service. |No | Low | |**[Log Analytics API](#connect-with-the-log-analytics-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High |
-|**[Azure Functions](#connect-with-azure-functions)** Best for high-volume cloud sources, and for unique collection requirements | Supports all capabilities available with the code. | Yes | High; requires programming knowledge |
+|**[Azure Functions](#connect-with-azure-functions)** <br>Best for high-volume cloud sources, and for unique collection requirements | Supports all capabilities available with the code. | Yes | High; requires programming knowledge |
| | | | > [!TIP]
The following table compares essential details about each method for creating cu
> - Office 365 (Microsoft Sentinel GitHub community): [Logic App connector](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Get-O365Data) | [Azure Function connector](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data) >
+## Connect with the Codeless Connector Platform
+
+The Codeless Connector Platform (CCP) provides a configuration file that can be used by both customers and partners, and then deployed to your own workspace, or as a solution in the Microsoft Sentinel solutions gallery.
+
+Connectors created using the CCP are fully SaaS, without any requirements for service installations, and also include health monitoring and full support from Microsoft Sentinel.
+
+For more information, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md).
+ ## Connect with the Log Analytics agent If your data source delivers events in files, we recommend that you use the Azure Monitor Log Analytics agent to create your custom connector.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Supported by** | Microsoft | | | |
+
+## Microsoft Project
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** |
+| **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply |
+| **Log Analytics table(s)** | ProjectActivity |
+| **Supported by** | Microsoft |
+| | |
+ ## Microsoft Defender for Cloud | Connector attribute | Description |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | OfficeActivity | | **Supported by** | Microsoft | | | |
+
+## Microsoft Power BI
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** |
+| **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply |
+| **Log Analytics table(s)** | PowerBIActivity |
+| **Supported by** | Microsoft |
+| | |
## Microsoft Sysmon for Linux (Preview)
You can find the value of your workspace ID on the ZScaler Private Access connec
For more information, see:

- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
-- [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md)
+- [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md)
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/detect-threats-custom.md
Title: Create custom analytics rules to detect threats with Microsoft Sentinel |
description: Learn how to create custom analytics rules to detect security threats with Microsoft Sentinel. Take advantage of event grouping, alert grouping, and alert enrichment, and understand AUTO DISABLED. Previously updated : 11/09/2021 Last updated : 01/30/2022
Analytics rules search for specific events or sets of events across your environ
- Provide a unique **Name** and a **Description**. -- In the **Tactics** field, you can choose from among categories of attacks by which to classify the rule. These are based on the tactics of the [MITRE ATT&CK](https://attack.mitre.org/) framework.
+- In the **Tactics and techniques** field, you can choose from among categories of attacks by which to classify the rule. These are based on the tactics and techniques of the [MITRE ATT&CK](https://attack.mitre.org/) framework.
+
+ [Incidents](investigate-cases.md) created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's mapping.
- Set the alert **Severity** as appropriate.
sentinel Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/investigate-cases.md
Title: Investigate incidents with Microsoft Sentinel| Microsoft Docs
description: In this article, learn how to use Microsoft Sentinel to create advanced alert rules that generate incidents you can assign and investigate. Previously updated : 11/09/2021 Last updated : 01/30/2022
An incident can include multiple alerts. It's an aggregation of all the relevant
1. You can filter the incidents as needed, for example by status or severity. For more information, see [Search for incidents](#search-for-incidents).
-1. To begin an investigation, select a specific incident. On the right, you can see detailed information for the incident including its severity, summary of the number of entities involved, the raw events that triggered this incident, and the incident's unique ID.
+1. To begin an investigation, select a specific incident. On the right, you can see detailed information for the incident including its severity, summary of the number of entities involved, the raw events that triggered this incident, the incident's unique ID, and any mapped MITRE ATT&CK tactics or techniques.
1. To view more details about the alerts and entities in the incident, select **View full details** in the incident page and review the relevant tabs that summarize the incident information.
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/monitor-data-connector-health.md
Once the health feature is turned on, the *SentinelHealth* data table is created
The *SentinelHealth* data table is currently supported only for the following data connectors:

-- [Amazon Web Services (CloudTrail)](connect-aws.md)
+- [Amazon Web Services (CloudTrail and S3)](connect-aws.md)
- [Dynamics 365](connect-dynamics-365.md)
- [Office 365](connect-office-365.md)
- [Office ATP](connect-microsoft-defender-advanced-threat-protection.md)
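
Once the table exists, a query like the following can surface recent failures. This is a sketch only; the *Status*, *SentinelResourceName*, and *Description* column names are assumptions to verify against the table's actual schema in your workspace.

```kusto
// Sketch: surface recent non-success health events for supported connectors.
// Column names (Status, SentinelResourceName, Description) are assumptions; adjust to your schema.
SentinelHealth
| where TimeGenerated > ago(3d)
| where Status != "Success"
| project TimeGenerated, SentinelResourceName, OperationName, Description
```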
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 01/30/2022 Last updated : 01/31/2022
If you're looking for items older than six months, you'll find them in the [Arch
## January 2022
+- [Support for MITRE ATT&CK techniques (Public preview)](#support-for-mitre-attck-techniques-public-preview)
+- [Codeless data connectors (Public preview)](#codeless-data-connectors-public-preview)
- [Maturity Model for Event Log Management (M-21-31) Solution (Public preview)](#maturity-model-for-event-log-management-m-21-31-solution-public-preview)
- [SentinelHealth data table (Public preview)](#sentinelhealth-data-table-public-preview)
- [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view)
- [Kusto Query Language workbook and tutorial](#kusto-query-language-workbook-and-tutorial)
+### Support for MITRE ATT&CK techniques (Public preview)
+
+In addition to supporting MITRE ATT&CK tactics, your entire Microsoft Sentinel user flow now also supports MITRE ATT&CK techniques.
+
+When creating or editing [analytics rules](detect-threats-custom.md), map the rule to one or more specific tactics *and* techniques. When searching for rules on the **Analytics** page, filter by tactic and technique to narrow your search results.
++
+Check for mapped tactics and techniques throughout Microsoft Sentinel, in:
+
+- **[Incidents](investigate-cases.md)**. Incidents created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's tactic and technique mapping.
+
+- **[Bookmarks](bookmarks.md)**. Bookmarks that capture results from hunting queries mapped to MITRE ATT&CK tactics and techniques automatically inherit the query's mapping.
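
To see how mapped tactics surface in your own data, one starting point is the *Tactics* column on the *SecurityAlert* table. The query below is a hedged sketch: *Tactics* is a standard column on that table, but technique-level fields vary by alert source.

```kusto
// Sketch: count recent alerts by their mapped MITRE ATT&CK tactics.
// Tactics is a comma-separated string; technique-level fields vary by alert source.
SecurityAlert
| where TimeGenerated > ago(7d)
| where isnotempty(Tactics)
| summarize Alerts = count() by Tactics
| order by Alerts desc
```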
+
+#### MITRE ATT&CK framework version upgrade
+
+We also upgraded the MITRE ATT&CK support throughout Microsoft Sentinel to use the MITRE ATT&CK framework *version 9*. This update includes support for the following new tactics:
+
+**Replacing the deprecated *PreAttack* tactic**:
+
+- [Reconnaissance](https://attack.mitre.org/versions/v9/tactics/TA0043/)
+- [Resource Development](https://attack.mitre.org/versions/v9/tactics/TA0042/)
+
+**Industrial Control System (ICS) tactics**:
+
+- [Impair Process Control](https://collaborate.mitre.org/attackics/index.php/Impair_Process_Control)
+- [Inhibit Response Function](https://collaborate.mitre.org/attackics/index.php/Inhibit_Response_Function)
+
+### Codeless data connectors (Public preview)
+
+Partners, advanced users, and developers can now use the new Codeless Connector Platform (CCP) to create custom connectors, connect their data sources, and ingest data to Microsoft Sentinel.
+
+The Codeless Connector Platform (CCP) provides support for new data connectors via ARM templates, API, or via a solution in the Microsoft Sentinel [content hub](sentinel-solutions.md).
+
+Connectors created using CCP are fully SaaS, without any requirements for service installations, and also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
+
+For more information, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md).
+ ### Maturity Model for Event Log Management (M-21-31) Solution (Public preview)

The Microsoft Sentinel content hub now includes the **Maturity Model for Event Log Management (M-21-31)** solution, which integrates Microsoft Sentinel and Microsoft Defender for Cloud to help organizations in regulated industries meet demanding event log management requirements, such as those in OMB memorandum M-21-31.
While we often recommend a single-workspace environment, some use cases require
For more information, see:

-- [The need to use multiple Microsoft Sentinel workspaces](extend-sentinel-across-workspaces-tenants.md#the-need-to-use-multiple-microsoft-sentinel-workspaces)
+- [Use multiple Microsoft Sentinel workspaces](extend-sentinel-across-workspaces-tenants.md#the-need-to-use-multiple-microsoft-sentinel-workspaces)
- [Work with incidents in many workspaces at once](multiple-workspace-view.md)
- [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md)
Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visua
The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed to help you improve your Kusto Query Language proficiency by taking a use case-driven approach based on:

-- Grouping Kusto Query Language operators/commands by category for easy navigation.
-- Listing the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes operators used, sample queries and use cases.
+- Grouping Kusto Query Language operators / commands by category for easy navigation.
+- Listing the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes operators used, sample queries, and use cases.
- Compiling a list of existing content found in Microsoft Sentinel (analytics rules, hunting queries, workbooks, and so on) to provide additional references specific to the operators you want to learn.
- Allowing you to execute sample queries on the fly, within your own environment or in "LA Demo", a public [Log Analytics demo environment](https://aka.ms/lademo). Try the sample Kusto Query Language statements in real time without the need to navigate away from the workbook.
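
A task-driven statement like the one below is typical of the samples the workbook walks through. It's only an illustration, and it assumes you ingest Azure AD sign-in data into the *SigninLogs* table.

```kusto
// Typical task-driven sample: failed Azure AD sign-ins per user over the last day.
// Assumes SigninLogs is ingested; ResultType "0" indicates a successful sign-in.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName
| top 10 by FailedAttempts
```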
Accompanying the new workbook is an explanatory [blog post](https://techcommunit
## December 2021

-- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)
-- [Ingest GitHub logs into your Microsoft Sentinel workspace](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)
- [Apache Log4j Vulnerability Detection solution](#apache-log4j-vulnerability-detection-solution-public-preview)
+- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)
+- [Continuous Threat Monitoring for GitHub solution](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)
++
+### Apache Log4j Vulnerability Detection solution
+
+Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the vulnerable Log4j 2 component.
+
+The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
+
+For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
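
Alongside the solution's built-in content, a broad sweep like the following can help scope exposure. It's only an illustrative sketch, not one of the solution's detections; the search term and time window are assumptions, and a workspace-wide `search` is expensive, so narrow it in practice.

```kusto
// Illustrative sweep for the JNDI lookup substring across ingested tables.
// A workspace-wide search is expensive; narrow the time range and tables in practice.
search "jndi:ldap"
| where TimeGenerated > ago(1d)
| take 50
```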
### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
### Apache Log4j Vulnerability Detection solution (Public preview)
-Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 Dec 2021. The vulnerability allows for unauthenticated remote code execution, and it is triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the Log4j 2 vulnerable component.
+Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the vulnerable Log4j 2 component.
The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
For more information, see the [Microsoft Security Response Center blog](https://
- [Windows Forwarded Events connector now available (Public preview)](#windows-forwarded-events-connector-now-available-public-preview)
- [Near-real-time (NRT) threat detection rules now available (Public preview)](#near-real-time-nrt-threat-detection-rules-now-available-public-preview)
- [Fusion engine now detects emerging and unknown threats (Public preview)](#fusion-engine-now-detects-emerging-and-unknown-threats-public-preview)
-- [Get fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)
+- [Fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)
- [Free trial updates](#free-trial-updates)
- [Content hub and new solutions (Public preview)](#content-hub-and-new-solutions-public-preview)
-- [Enable continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)
+- [Continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)
- [Enriched threat intelligence with Geolocation and WhoIs data (Public preview)](#enriched-threat-intelligence-with-geolocation-and-whois-data-public-preview)
- [Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)](#use-notebooks-with-azure-synapse-analytics-in-microsoft-sentinel-public-preview)
- [Enhanced Notebooks area in Microsoft Sentinel](#enhanced-notebooks-area-in-microsoft-sentinel)
- [Microsoft Sentinel renaming](#microsoft-sentinel-renaming)
-- [Deploy and monitor Azure Key Vault honeytokens with Azure Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-azure-sentinel)
+- [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-microsoft-sentinel)
### Incident advanced search now available in GA
-Searching for incidents using the advanced search functionality is now Generally Available.
+Searching for incidents using the advanced search functionality is now generally available.
The advanced incident search provides the ability to search across more data, including alert details, descriptions, entities, tactics, and more.
For more information, see [Connect Microsoft Sentinel to S3 Buckets to get Amazo
### Windows Forwarded Events connector now available (Public preview)
-You can now stream event logs from Windows Servers connected to your Azure Sentinel workspace using Windows Event Collection/Windows Event Forwarding (WEC/WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
+You can now stream event logs from Windows Servers connected to your Microsoft Sentinel workspace using Windows Event Collection / Windows Event Forwarding (WEC / WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
-- **Scalability:** If you have enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.
+- **Scalability:** If you've enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.
-- **Speed:** The AMA can send data at an improved rate of 5K EPS, allowing for faster data refresh.
+- **Speed:** The AMA can send data at an improved rate of 5K events per second (EPS), allowing for faster data refresh.
- **Efficiency:** The AMA allows you to design complex Data Collection Rules (DCR) to filter the logs at their source, choosing the exact events to stream to your workspace. DCRs help lower your network traffic and your ingestion costs by leaving out undesired events. -- **Coverage:** WEC/WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.
+- **Coverage:** WEC / WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.
-We recommend using this connector with the [Azure Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
+We recommend using this connector with the [Microsoft Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
Learn more about the [Windows Forwarded Events connector](data-connectors-reference.md#windows-forwarded-events-preview).
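
Once forwarded events land in the *WindowsEvent* table, a quick distribution query helps validate what your DCR filters are letting through. The *Provider* and *EventID* column names below reflect the *WindowsEvent* schema, but verify them in your workspace.

```kusto
// Sketch: validate DCR filtering by counting forwarded events per provider and event ID.
WindowsEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by Provider, EventID
| order by Events desc
```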
Fine-tuning threat detection rules in your SIEM can be a difficult, delicate, an
### Free trial updates Microsoft Sentinel's free trial continues to support new or existing Log Analytics workspaces at no additional cost for the first 31 days.
-We are evolving our current free trial experience to include the following updates:
-- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31-days at no cost. New workspaces include workspaces that are less than three days old.
+We're evolving our free trial experience to include the following updates:
+
+- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
- Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20 workspace limit per Azure tenant.
+ Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.
- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no additional cost. Existing workspaces include any workspaces created more than three days ago.
We are evolving our current free trial experience to include the following updat
Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.

> [!TIP]
-> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
+> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
> For more information, see [Plan and manage costs for Microsoft Sentinel](billing.md).
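
To track ingestion against the 10 GB/day trial allowance, the standard *Usage* table works well. This sketch relies on the documented convention that *Quantity* is reported in megabytes.

```kusto
// Sketch: billable ingestion per day, to compare against the 10 GB/day trial allowance.
// Usage.Quantity is reported in MB; divide by 1,024 for GB.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```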
The following list includes highlights of new, out-of-the-box solutions added to
For more information, see:

-- [About Microsoft Sentinel solutions](sentinel-solutions.md)
+- [Learn about Microsoft Sentinel solutions](sentinel-solutions.md)
- [Discover and deploy Microsoft Sentinel solutions](sentinel-solutions-deploy.md)
- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
For example:
For more information, see:

- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
-- [Threat intelligence integrations](threat-intelligence-integration.md)
+- [Understand threat intelligence integrations](threat-intelligence-integration.md)
- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
- [Connect threat intelligence platforms](connect-threat-intelligence-tip.md)
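
As a starting point for exploring enriched indicators, the sketch below profiles active indicators by type. *ThreatType* and *Active* are standard columns on the *ThreatIntelligenceIndicator* table; the GeoLocation and WhoIs enrichment itself is surfaced in the portal experience rather than queried here.

```kusto
// Sketch: profile currently active threat intelligence indicators by threat type.
ThreatIntelligenceIndicator
| where TimeGenerated > ago(14d)
| where Active == true
| summarize Indicators = count() by ThreatType
| order by Indicators desc
```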
Until now, Jupyter notebooks in Microsoft Sentinel have been integrated with Azu
The new Azure Synapse integration provides extra analytic horsepower, such as:

-- **Security big data analytics**, using cost-optimized, fully-managed Azure Synapse Apache Spark compute pool.
+- **Security big data analytics**, using cost-optimized, fully managed Azure Synapse Apache Spark compute pool.
- **Cost-effective Data Lake access** to build analytics on historical data via Azure Data Lake Storage Gen2, which is a set of capabilities dedicated to big data analytics, built on top of Azure Blob Storage.
The new Azure Synapse integration provides extra analytic horsepower, such as:
- **PySpark, a Python-based API** for using the Spark framework in combination with Python, reducing the need to learn a new programming language if you're already familiar with Python.
-To support this integration, we've added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
+To support this integration, we added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
For more information, see [Integrate notebooks with Azure Synapse](notebooks-with-synapse.md).
Earlier entries in this article and the older [Archive for What's new in Sentine
For more information, see our [blog on recent security enhancements](https://aka.ms/secblg11).
-### Deploy and monitor Azure Key Vault honeytokens with Azure Sentinel
+### Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel
-The new **Azure Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
+The new **Microsoft Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
-Once deployed, any access or operation with the honeytoken keys and secrets generate incidents that you can investigate in Azure Sentinel.
+Once deployed, any access or operation with the honeytoken keys and secrets generates incidents that you can investigate in Microsoft Sentinel.
Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
-The **Azure Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
+The **Microsoft Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
-For more information, see [Deploy and monitor Azure Key Vault honeytokens with Azure Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
+For more information, see [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
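
Conceptually, the solution's analytics reduce to alerting on any touch of a decoy key or secret. A hand-rolled sketch of that idea against Key Vault diagnostics might look like the following; the `honeytoken-secret` name is a placeholder, and the query illustrates the concept, not the solution's actual rule.

```kusto
// Illustrative stand-in for the solution's logic: any read of a decoy secret is suspect.
// "honeytoken-secret" is a placeholder; columns follow Key Vault's AzureDiagnostics shape.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName == "SecretGet"
| where requestUri_s has "honeytoken-secret"
| project TimeGenerated, Resource, CallerIPAddress
```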
## October 2021

- [Windows Security Events connector using Azure Monitor Agent now in GA](#windows-security-events-connector-using-azure-monitor-agent-now-in-ga)
- [Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)](#defender-for-office-365-events-now-available-in-the-microsoft-365-defender-connector-public-preview)
- [Playbook templates and gallery now available (Public preview)](#playbook-templates-and-gallery-now-available-public-preview)
-- [Manage template versions for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
+- [Template versioning for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
- [DHCP normalization schema (Public preview)](#dhcp-normalization-schema-public-preview)

### Windows Security Events connector using Azure Monitor Agent now in GA
-The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available! See [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA) for more information.
+The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available. For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA).
### Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)
However, rules created from templates ***do*** remember which templates they cam
- If you made changes to a rule when creating it from a template (or at any time after that), you can always revert the rule back to its original version (as a copy of the template).

-- You can get notified when a template is updated, and you'll have the choice to update your rules to the new version of their templates or leave them as they are.
+- If a template is updated, you'll be notified and you can choose to update your rules to the new version of their templates, or leave them as they are.
[Learn how to manage these tasks](manage-analytics-rule-templates.md), and what to keep in mind. These procedures apply to any [Scheduled](detect-threats-built-in.md#scheduled) analytics rules created from templates.
For more information, see:
### Data connector health enhancements (Public preview)
-Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you've [turned on the Azure Sentinel health feature](monitor-data-connector-health.md#turn-on-microsoft-sentinel-health-for-your-workspace) in your Azure Sentinel workspace, at the first success or failure health event that's generated.
+Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Azure Sentinel health feature](monitor-data-connector-health.md#turn-on-microsoft-sentinel-health-for-your-workspace) in your Azure Sentinel workspace, at the first success or failure health event generated.
For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).
For more information, see [Monitor the health of your data connectors with this
### New in docs: scaling data connector documentation
-As we continue to add more and more built-in data connectors for Azure Sentinel, we've reorganized our data connector documentation to reflect this scaling.
+As we continue to add more and more built-in data connectors for Azure Sentinel, we reorganized our data connector documentation to reflect this scaling.
-For most data connectors, we've replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
+For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
Check the [Azure Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
When configuring diagnostics for a storage account, you must select and configur
- The parent account resource, exporting the **Transaction** metric.
- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).
-You will only see the storage types that you actually have defined resources for.
+You'll only see the storage types that you actually have defined resources for.
:::image type="content" source="media/whats-new/storage-diagnostics.png" alt-text="Screenshot of Azure Storage diagnostics configuration.":::
For more information, see [Search for incidents](investigate-cases.md#search-for
Azure Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
-Incidents are generated for alerts that are possibly associated with Ransomware activities, when they occur during a specific time-frame, and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host/device and to evade detection.
+Incidents are generated for alerts that are possibly associated with Ransomware activities, when they occur during a specific time frame, and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host or device and to evade detection.
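
To review such incidents after the fact, one approach is to query the *SecurityIncident* table on the incident title. This sketch reuses the title wording above, but treat the filter text as illustrative.

```kusto
// Sketch: find Fusion-generated ransomware incidents by their title.
SecurityIncident
| where TimeGenerated > ago(30d)
| where Title has "Ransomware activity detected"
| project TimeGenerated, IncidentNumber, Title, Severity, Status
```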
Supported data connectors include:
For more information, see:
### New in docs: Best practice guidance
-In response to multiple requests from customers and our support teams, we've added a series of best practice guidance to our documentation.
+In response to multiple requests from customers and our support teams, we added a series of best practice guidance articles to our documentation.
For more information, see:
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/work-with-anomaly-rules.md
Title: Work with anomaly detection analytics rules in Microsoft Sentinel | Micro
description: This article explains how to view, create, manage, assess, and fine-tune anomaly detection analytics rules in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/30/2022
Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) pro
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-1. In the **Analytics** blade, select the **Rule templates** tab.
+1. On the **Analytics** page, select the **Rule templates** tab.
1. Filter the list for **Anomaly** templates:
- 1. Click the **Rule type** filter, then the drop-down list that appears below.
+ 1. Select the **Rule type** filter, then the drop-down list that appears below.
1. Clear **Select all**, then select **Anomaly**.
- 1. If necessary, click the top of the drop-down list to retract it, then click **OK**.
+ 1. If necessary, select the top of the drop-down list to retract it, then select **OK**.
## Activate anomaly rules
When you select one of the rule templates, you will see the following informatio
- **Data sources** indicates the type of logs that need to be ingested in order to be analyzed.

-- **Tactics** are the MITRE ATT&CK framework tactics covered by the anomaly.
+- **Tactics and techniques** are the MITRE ATT&CK framework tactics and techniques covered by the anomaly.
- **Parameters** are the configurable attributes for the anomaly.
When you select one of the rule templates, you will see the following informatio
Complete the following steps to activate a rule:
-1. Choose a rule template that is not already labeled **IN USE**. Click the **Create rule** button to open the rule creation wizard.
+1. Choose a rule template that is not already labeled **IN USE**. Select the **Create rule** button to open the rule creation wizard.
The wizard for each rule template will be slightly different, but it has three steps or tabs: **General**, **Configuration**, **Review and create**.
You can see how well an anomaly rule is performing by reviewing a sample of the
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-1. In the **Analytics** blade, check that the **Active rules** tab is selected.
+1. On the **Analytics** page, check that the **Active rules** tab is selected.
1. Filter the list for **Anomaly** rules (as above).
You can see how well an anomaly rule is performing by reviewing a sample of the
1. If a **Queries** gallery pops up over the top, close it.
-1. Select the **Tables** tab on the left pane of the **Logs** blade.
+1. Select the **Tables** tab on the left pane of the **Logs** page.
1. Set the **Time range** filter to **Last 24 hours**.
You can see how well an anomaly rule is performing by reviewing a sample of the
```

Paste the rule name you copied above in place of the underscores between the quotation marks.
-1. Click **Run**.
+1. Select **Run**.
When you have some results, you can start assessing the quality of the anomalies. If you don't have results, try increasing the time range.
This is by design, to give you the opportunity to compare the results generated
1. To change the configuration of an anomaly rule, select the anomaly rule in the **Active rules** tab.
-1. Right-click anywhere on the row of the rule, or left-click the ellipsis (...) at the end of the row, then click **Duplicate**.
+1. Right-click anywhere on the row of the rule, or left-click the ellipsis (...) at the end of the row, then select **Duplicate**.
-1. The new copy of the rule will have the suffix " - Customized" in the rule name. To actually customize this rule, select this rule and click **Edit**.
+1. The new copy of the rule will have the suffix " - Customized" in the rule name. To actually customize this rule, select it, and then select **Edit**.
1. The rule opens in the Analytics rule wizard. Here you can change the parameters of the rule and its threshold. The parameters that can be changed vary with each anomaly type and algorithm.
- You can preview the results of your changes in the **Results preview pane**. Click an **Anomaly ID** in the results preview to see why the ML model identifies that anomaly.
+ You can preview the results of your changes in the **Results preview pane**. Select an **Anomaly ID** in the results preview to see why the ML model identifies that anomaly.
-1. Enable the customized rule to generate results. Some of your changes may require the rule to re-run, so you must wait for it to finish and come back to check the results on the logs page. The customized anomaly rule runs in **Flighting** (testing) mode by default. The original rule continues to run in **Production** mode by default.
+1. Enable the customized rule to generate results. Some of your changes may require the rule to run again, so you must wait for it to finish and come back to check the results on the logs page. The customized anomaly rule runs in **Flighting** (testing) mode by default. The original rule continues to run in **Production** mode by default.
1. To compare the results, go back to the Anomalies table in **Logs** to [assess the new rule as before](#assess-the-quality-of-anomalies), only look for rows with the original rule name as well as the duplicate rule name with " - Customized" appended to it in the **AnomalyTemplateName** column.
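
   A concrete comparison, continuing the example, might look like the sketch below. Replace `<rule name>` with the name you copied earlier; the " - Customized" suffix follows the convention described above.

   ```kusto
   // Sketch: compare anomaly volumes from the original and customized rule copies.
   // Replace <rule name> with the rule name you copied earlier.
   Anomalies
   | where TimeGenerated > ago(24h)
   | where AnomalyTemplateName in ("<rule name>", "<rule name> - Customized")
   | summarize AnomalyCount = count() by AnomalyTemplateName
   ```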
- If you are satisfied with the results for the customized rule, you can go back to the **Active rules** tab, click on the customized rule, click the **Edit** button and on the **General** tab switch it from **Flighting** to **Production**. The original rule will automatically change to **Flighting** since you can't have two versions of the same rule in production at the same time.
+ If you are satisfied with the results for the customized rule, you can go back to the **Active rules** tab, select the customized rule, select the **Edit** button, and on the **General** tab switch it from **Flighting** to **Production**. The original rule will automatically change to **Flighting** since you can't have two versions of the same rule in production at the same time.
## Next steps
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/concept-region-support.md
Title: Service Connector Region Support
description: Service Connector region availability and region support list -+ Last updated 10/29/2021
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/concept-service-connector-internals.md
Title: Service Connector internals
description: Learn about Service Connector internals, the architecture, the connections and how data is transmitted. -+ Last updated 10/29/2021
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-confluent-kafka.md
Title: Integrate Apache Kafka on Confluent Cloud with Service Connector
description: Integrate Apache Kafka on Confluent Cloud into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-cosmos-db.md
Title: Integrate Azure Cosmos DB with Service Connector
description: Integrate Azure Cosmos DB into your application with Service Connector -+ Last updated 11/11/2021
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-key-vault.md
Title: Integrate Azure Key Vault with Service Connector
description: Integrate Azure Key Vault into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-mysql.md
Title: Integrate Azure Database for MySQL with Service Connector
description: Integrate Azure Database for MySQL into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-postgres.md
Title: Integrate Azure Database for PostgreSQL with Service Connector
description: Integrate Azure Database for PostgreSQL into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-redis-cache.md
Title: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise with Ser
description: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise into your application with Service Connector -+ Last updated 1/3/2022
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-signalr.md
Title: Integrate Azure SignalR Service with Service Connector
description: Integrate Azure SignalR Service into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-storage-blob.md
Title: Integrate Azure Blob Storage with Service Connector
description: Integrate Azure Blob Storage into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-storage-file.md
Title: Integrate Azure File Storage with Service Connector
description: Integrate Azure File Storage into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-storage-queue.md
Title: Integrate Azure Queue Storage with Service Connector
description: Integrate Azure Queue Storage into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-storage-table.md
Title: Integrate Azure Table Storage with Service Connector
description: Integrate Azure Table Storage into your application with Service Connector -+ Last updated 10/29/2021
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-troubleshoot-front-end-error.md
Title: Service Connector Troubleshooting Guidance
description: Error list and suggested actions of Service Connector -+ Last updated 10/29/2021
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/overview.md
Title: What is Service Connector?
description: Better understand what typical use case scenarios to use Service Connector, and learn the key benefits of Service Connector. -+ Last updated 10/29/2021
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/quickstart-cli-app-service-connection.md
Title: Quickstart - Create a service connection in App Service with the Azure CL
description: Quickstart showing how to create a service connection in App Service with the Azure CLI -+ Last updated 10/29/2021
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Title: Quickstart - Create a service connection in Spring Cloud with the Azure C
description: Quickstart showing how to create a service connection in Spring Cloud with the Azure CLI -+ Last updated 10/29/2021
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/quickstart-portal-app-service-connection.md
Title: Quickstart - Create a service connection in App Service from the Azure po
description: Quickstart showing how to create a service connection in App Service from the Azure portal -+ Last updated 01/27/2022
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/quickstart-portal-spring-cloud-connection.md
Title: Quickstart - Create a service connection in Spring Cloud from Azure porta
description: Quickstart showing how to create a service connection in Spring Cloud from Azure portal -+ Last updated 10/29/2021
service-connector Tutorial Csharp Webapp Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/tutorial-csharp-webapp-storage-cli.md
Title: 'Tutorial: Deploy Web Application Connected to Azure Storage Blob with Se
description: Create a web app connected to Azure Storage Blob with Service Connector. -+ Last updated 10/28/2021
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/tutorial-django-webapp-postgres-cli.md
description: Create a Python web app with a PostgreSQL database and deploy it to
ms.devlang: python -+ Last updated 11/30/2021 zone_pivot_groups: postgres-server-options
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/tutorial-java-spring-confluent-kafka.md
description: Create a Spring Boot app connected to Apache Kafka on Confluent Clo
ms.devlang: java -+ Last updated 10/28/2021
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/tutorial-java-spring-mysql.md
Title: 'Tutorial: Deploy Spring Cloud Application Connected to Azure Database fo
description: Create a Spring Boot application connected to Azure Database for MySQL with Service Connector. -+ Last updated 10/28/2021
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/automation-powershell.md
Once it's provisioned, let's start with its overall configuration.
The Function needs permissions to start and stop the ASA job. We'll assign these permissions via a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](../app-service/overview-managed-identity.md?tabs=dotnet&toc=%2fazure%2fazure-functions%2ftoc.json#using-the-azure-portal).
+The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](/azure/app-service/overview-managed-identity?toc=%2Fazure%2Fazure-functions%2Ftoc.json&tabs=ps%2Cportal).
Now we can grant that identity the right permissions on the ASA job we want to auto-pause. To do so, in the portal page of the **ASA job** (not the Function's), under **Access control (IAM)**, add a **role assignment** with the *Contributor* role to a member of type *Managed identity*, selecting the name of the Function above.
synapse-analytics Overview Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/overview-database-templates.md
A typical database template addresses the core requirements of a specific indust
## Available database templates
-Currently, you can choose from 11 database templates in Azure Synapse Studio to start creating your lake database:
+Currently, you can choose from the following database templates in Azure Synapse Studio to start creating your lake database:
* **Agriculture** - For companies engaged in growing crops, raising livestock, and dairy production.
* **Automotive** - For companies manufacturing automobiles, heavy vehicles, tires, and other automotive components.
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
However, as you continue data exploration, you might want to create some utility
> [!NOTE]
> An external data source can be created without a credential. If a credential does not exist, the caller's identity will be used to access the external data source.
-3. Optionally, use the 'master' database to create a login for a user in `DataExplorationDB` that will access external data:
+3. Optionally, use the newly created 'DataExplorationDB' database to create a login for a user that will access external data:
```sql CREATE LOGIN data_explorer WITH PASSWORD = 'My Very Strong Password 1234!';
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/whats-new-archive.md
Previously updated : 12/17/2021 Last updated : 01/28/2022

# Previous monthly updates in Azure Synapse Analytics

This article describes previous monthly updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## December 2021 update
+
+The following updates are new to Azure Synapse Analytics this month.
+
+### Apache Spark for Synapse
+
+* Accelerate Spark workloads with NVIDIA GPU acceleration [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--16536080) [article](./spark/apache-spark-rapids-gpu.md)
+* Mount remote storage to a Synapse Spark pool [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1823990543) [article](./spark/synapse-file-mount-api.md)
+* Natively read & write data in ADLS with Pandas [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-663522290) [article](./spark/tutorial-use-pandas-spark-pool.md)
+* Dynamic allocation of executors for Spark [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1143932173) [article](./spark/apache-spark-autoscale.md)
+
+### Machine Learning
+
+* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/about/)
+* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md)
+* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/)
+* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md)
+* Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md)
+* Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md)
+
+### Security
+
+* User-Assigned managed identities now supported in Synapse Pipelines in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1340445678) [article](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory)
+* Browse ADLS Gen2 folders in an Azure Synapse Analytics workspace in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1147067155) [article](how-to-access-container-with-access-control-lists.md)
+
+### Data Integration
+
+* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md)
+* Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754)
+* Additional notebook export formats: HTML, Python, and LaTeX [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF3)
+* Three new chart types in notebook view: box plot, histogram, and pivot table [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF4)
+* Reconnect to lost notebook session [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF5)
+
+### Integrate
+
+* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
+* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
+* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](./database-designer/overview-map-data.md)
+* Quick reuse of Spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live)
+* External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md)
+* Flowlets (Public Preview) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF10) [article](../data-factory/concepts-data-flow-flowlet.md)
+ ## November 2021 update

The following updates are new to Azure Synapse Analytics this month.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/whats-new.md
Previously updated : 12/23/2021 Last updated : 01/28/2022

# What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in December 2021. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
+This article lists updates to Azure Synapse Analytics that were published in January 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months' releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
-## December 2021 update
+## January 2022 update
The following updates are new to Azure Synapse Analytics this month.

### Apache Spark for Synapse
-* Accelerate Spark workloads with NVIDIA GPU acceleration [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--16536080) [article](./spark/apache-spark-rapids-gpu.md)
-* Mount remote storage to a Synapse Spark pool [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1823990543) [article](./spark/synapse-file-mount-api.md)
-* Natively read & write data in ADLS with Pandas [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-663522290) [article](./spark/tutorial-use-pandas-spark-pool.md)
-* Dynamic allocation of executors for Spark [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1143932173) [article](./spark/apache-spark-autoscale.md)
+You can now use four new database templates in Azure Synapse. [Learn more about Automotive, Genomics, Manufacturing, and Pharmaceuticals templates from the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) or the [database templates article](./database-designer/overview-database-templates.md). These templates are currently in public preview and are available within the Synapse Studio gallery.
### Machine Learning
-* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/about/)
-* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md)
-* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/)
-* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md)
-* Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md)
-* Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md)
+The Synapse Machine Learning library v0.9.5 (previously called MMLSpark) has been improved. This release simplifies the creation of massively scalable machine learning pipelines with Apache Spark. To learn more, [read the blog post about the new capabilities in this release](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_3) or see the [full release notes](https://microsoft.github.io/SynapseML/).
### Security
-* User-Assigned managed identities now supported in Synapse Pipelines in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1340445678) [article](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory)
-* Browse ADLS Gen2 folders in an Azure Synapse Analytics workspace in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1147067155) [article](how-to-access-container-with-access-control-lists.md)
+* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detail](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud.
+
+* TLS 1.2 is now required for newly created Synapse workspaces. To learn more, see how [TLS 1.2 provides enhanced security in this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using a TLS version lower than 1.2 will fail.
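To illustrate the client-side effect, here's a minimal sketch (not from the announcement) that pins a Python client to TLS 1.2 or higher before connecting to a workspace endpoint; the hostname is a placeholder.

```python
import socket
import ssl

# Hypothetical workspace development endpoint; newly created workspaces
# reject handshakes that offer only TLS 1.0/1.1.
host = "myworkspace.dev.azuresynapse.net"

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # never offer anything older

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. "TLSv1.2"
```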
### Data Integration
-* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md)
-* Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754)
-* Additional notebook export formats: HTML, Python, and LaTeX [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF3)
-* Three new chart types in notebook view: box plot, histogram, and pivot table [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF4)
-* Reconnect to lost notebook session [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF5)
+* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by using the Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](../data-factory/data-flow-assert.md) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
+
+* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. To learn more, see how to [create datasets in data flows that read, transform, aggregate, and join data](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute.
+
+* IntelliSense and auto-complete added to pipeline expressions - IntelliSense makes creating and editing expressions easy. To learn more, see how to [check your expression syntax, find functions, and add code to your pipelines](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459).
+### Synapse SQL
-### Integrate
+* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub used this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
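As a hedged sketch of what this enables, the following snippet runs a COPY statement with automatic table creation against a dedicated SQL pool via pyodbc; the connection values, table name, and storage URL are placeholders, and the `AUTO_CREATE_TABLE` option is assumed per the linked announcement.

```python
# A hedged sketch; server, credentials, table, and URL are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=mypool;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,
)
# Schema is discovered from the Parquet files and the table is created
# automatically, per the announcement linked above.
conn.execute("""
    COPY INTO dbo.complex_data
    FROM 'https://myaccount.blob.core.windows.net/data/*.parquet'
    WITH (FILE_TYPE = 'PARQUET', AUTO_CREATE_TABLE = 'ON')
""")
```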
-* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
-* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
-* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](./database-designer/overview-map-data.md)
-* Quick reuse of spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live)
-* External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md)
-* Flowlets (Public Preview) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF10) [article](../data-factory/concepts-data-flow-flowlet.md)
+* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function that hashes values. Learn more about [using hash values to distribute data in this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
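For example, here's a hedged sketch of calling HASHBYTES from a serverless SQL pool through pyodbc; the endpoint and credentials are placeholders.

```python
# A small sketch; the serverless endpoint and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace-ondemand.sql.azuresynapse.net;DATABASE=master;"
    "UID=sqladmin;PWD=<password>"
)
row = conn.execute("SELECT HASHBYTES('SHA2_256', 'example-value')").fetchone()
print(row[0].hex())  # a deterministic hash, usable for distributing data
```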
## Next steps
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-FAQs.md
na Previously updated : 03/03/2021 Last updated : 01/31/2022
### What IP address does Traffic Manager use?
-As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. It sends DNS responses to direct clients to the appropriate service endpoint. Clients then connect to the service endpoint directly, not through Traffic Manager.
+As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the Domain Name System (DNS) level. It sends DNS responses to direct clients to the appropriate service endpoint. Clients then connect to the service endpoint directly, not through Traffic Manager.
-Therefore, Traffic Manager does not provide an endpoint or IP address for clients to connect to. If you want static IP address for your service, that must be configured at the service, not in Traffic Manager.
+Therefore, Traffic Manager doesn't provide an endpoint or IP address for clients to connect to. If you want a static IP address for your service, that must be configured at the service, not in Traffic Manager.
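To see this behavior from a client's perspective, here's a minimal sketch using the dnspython package; the profile name is a placeholder.

```python
# A minimal sketch (pip install dnspython); the profile name is hypothetical.
import dns.resolver

answer = dns.resolver.resolve("contoso.trafficmanager.net", "A")
# Traffic Manager only answers the DNS query; the client then connects
# directly to the returned address, never through Traffic Manager.
for record in answer:
    print(record.address)
```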
### What types of traffic can be routed using Traffic Manager?
-As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), a Traffic Manager endpoint can be any internet facing service hosted inside or outside of Azure. Hence, Traffic Manager can route traffic that originates from the public internet to a set of endpoints that are also internet facing. If you have endpoints that are inside a private network (for example, an internal version of [Azure Load Balancer](../load-balancer/components.md#frontend-ip-configurations)) or have users making DNS requests from such internal networks, then you cannot use Traffic Manager to route this traffic.
+As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), a Traffic Manager endpoint can be any internet facing service hosted inside or outside of Azure. Hence, Traffic Manager can route traffic that originates from the public internet to a set of endpoints that are also internet facing. If you have endpoints that are inside a private network (for example, an internal version of [Azure Load Balancer](../load-balancer/components.md#frontend-ip-configurations)) or have users making DNS requests from such internal networks, then you can't use Traffic Manager to route this traffic.
### Does Traffic Manager support "sticky" sessions?
-As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. It uses DNS responses to direct clients to the appropriate service endpoint. Clients connect to the service endpoint directly, not through Traffic Manager. Therefore, Traffic Manager does not see the HTTP traffic between the client and the server.
+As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. It uses DNS responses to direct clients to the appropriate service endpoint. Clients connect to the service endpoint directly, not through Traffic Manager. Therefore, Traffic Manager doesn't see the HTTP traffic between the client and the server.
-Additionally, the source IP address of the DNS query received by Traffic Manager belongs to the recursive DNS service, not the client. Therefore, Traffic Manager has no way to track individual clients and cannot implement 'sticky' sessions. This limitation is common to all DNS-based traffic management systems and is not specific to Traffic Manager.
+Additionally, the source IP address of the DNS query received by Traffic Manager belongs to the recursive DNS service, not the client. Therefore, Traffic Manager has no way to track individual clients and can't implement 'sticky' sessions. This limitation is common to all DNS-based traffic management systems and isn't specific to Traffic Manager.
### Why am I seeing an HTTP error when using Traffic Manager?
-As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. It uses DNS responses to direct clients to the appropriate service endpoint. Clients then connect to the service endpoint directly, not through Traffic Manager. Traffic Manager does not see HTTP traffic between client and server. Therefore, any HTTP error you see must be coming from your application. For the client to connect to the application, all DNS resolution steps are complete. That includes any interaction that Traffic Manager has on the application traffic flow.
+As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. It uses DNS responses to direct clients to the appropriate service endpoint. Clients then connect to the service endpoint directly, not through Traffic Manager. Traffic Manager doesn't see HTTP traffic between client and server. Therefore, any HTTP error you see must be coming from your application. For the client to connect to the application, all DNS resolution steps are complete. That includes any interaction that Traffic Manager has on the application traffic flow.
Further investigation should therefore focus on the application.
-The HTTP host header sent from the client's browser is the most common source of problems. Make sure that the application is configured to accept the correct host header for the domain name you are using. For endpoints using the Azure App Service, see [configuring a custom domain name for a web app in Azure App Service using Traffic Manager](../app-service/configure-domain-traffic-manager.md).
+The HTTP host header sent from the client's browser is the most common source of problems. Make sure that the application is configured to accept the correct host header for the domain name you're using. For endpoints using the Azure App Service, see [configuring a custom domain name for a web app in Azure App Service using Traffic Manager](../app-service/configure-domain-traffic-manager.md).
### What is the performance impact of using Traffic Manager?
-As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. Since clients connect to your service endpoints directly, there is no performance impact incurred when using Traffic Manager once the connection is established.
+As explained in [How Traffic Manager Works](../traffic-manager/traffic-manager-how-it-works.md), Traffic Manager works at the DNS level. Since clients connect to your service endpoints directly, there's no performance impact incurred when using Traffic Manager once the connection is established.
Since Traffic Manager integrates with applications at the DNS level, it does require an additional DNS lookup to be inserted into the DNS resolution chain. The impact of Traffic Manager on DNS resolution time is minimal. Traffic Manager uses a global network of name servers, and uses [anycast](https://en.wikipedia.org/wiki/Anycast) networking to ensure DNS queries are always routed to the closest available name server. In addition, caching of DNS responses means that the additional DNS latency incurred by using Traffic Manager applies only to a fraction of sessions.
Yes. To learn how to create an alias record for your domain name apex to referen
### Does Traffic Manager consider the client subnet address when handling DNS queries?
-Yes, in addition to the source IP address of the DNS query it receives (which usually is the IP address of the DNS resolver), when performing lookups for Geographic, Performance, and Subnet routing methods, traffic manager also considers the client subnet address if it is included in the query by the resolver making the request on behalf of the end user.
+Yes, in addition to the source IP address of the DNS query it receives (which usually is the IP address of the DNS resolver), when performing lookups for Geographic, Performance, and Subnet routing methods, Traffic Manager also considers the client subnet address if it's included in the query by the resolver making the request on behalf of the end user.
Specifically, [RFC 7871 – Client Subnet in DNS Queries](https://tools.ietf.org/html/rfc7871) provides an [Extension Mechanism for DNS (EDNS0)](https://tools.ietf.org/html/rfc2671) that can pass on the client subnet address from resolvers that support it.
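For illustration, here's a minimal dnspython sketch that attaches an ECS option to a query, the way a supporting resolver would; the profile name, subnet, and resolver address are placeholders.

```python
# A minimal sketch (pip install dnspython); names and addresses are hypothetical.
import dns.edns
import dns.message
import dns.query

query = dns.message.make_query("contoso.trafficmanager.net", "A")
# RFC 7871 EDNS0 Client Subnet: forward the client's /24 with the query.
ecs = dns.edns.ECSOption("203.0.113.0", 24)
query.use_edns(0, options=[ecs])

response = dns.query.udp(query, "8.8.8.8")  # any ECS-aware resolver
print(response.answer)
```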
### What is DNS TTL and how does it impact my users?

-When a DNS query lands on Traffic Manager, it sets a value in the response called time-to-live (TTL). This value, whose unit is in seconds, indicates to DNS resolvers downstream on how long to cache this response. While DNS resolvers are not guaranteed to cache this result, caching it enables them to respond to any subsequent queries off the cache instead of going to Traffic Manager DNS servers. This impacts the responses as follows:
+When a DNS query lands on Traffic Manager, it sets a value in the response called time-to-live (TTL). This value, whose unit is in seconds, indicates to DNS resolvers downstream on how long to cache this response. While DNS resolvers aren't guaranteed to cache this result, caching it enables them to respond to any subsequent queries off the cache instead of going to Traffic Manager DNS servers. This impacts the responses as follows:
- a higher TTL reduces the number of queries that land on the Traffic Manager DNS servers, which can reduce the cost for a customer since number of queries served is a billable usage.
- a higher TTL can potentially reduce the time it takes to do a DNS lookup.
-- a higher TTL also means that your data does not reflect the latest health information that Traffic Manager has obtained through its probing agents.
+- a higher TTL also means that your data doesn't reflect the latest health information that Traffic Manager has obtained through its probing agents.
### How high or low can I set the TTL for Traffic Manager responses?
-You can set, at a per profile level, the DNS TTL to be as low as 0 seconds and as high as 2,147,483,647 seconds (the maximum range compliant with [RFC-1035](https://www.ietf.org/rfc/rfc1035.txt )). A TTL of 0 means that downstream DNS resolvers do not cache query responses and all queries are expected to reach the Traffic Manager DNS servers for resolution.
+You can set, at a per profile level, the DNS TTL to be as low as 0 seconds and as high as 2,147,483,647 seconds (the maximum range compliant with [RFC-1035](https://www.ietf.org/rfc/rfc1035.txt)). A TTL of 0 means that downstream DNS resolvers don't cache query responses and all queries are expected to reach the Traffic Manager DNS servers for resolution.
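As a hedged sketch, the profile TTL can be set programmatically with the azure-mgmt-trafficmanager package; the resource group, profile name, and subscription ID below are placeholders.

```python
# A hedged sketch; resource names and subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

profile = client.profiles.get("my-rg", "my-profile")
profile.dns_config.ttl = 60  # seconds, anywhere in the 0..2,147,483,647 range
client.profiles.create_or_update("my-rg", "my-profile", profile)
```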
### How can I understand the volume of queries coming to my profile?

One of the metrics provided by Traffic Manager is the number of queries responded by a profile. You can get this information at a profile level aggregation or you can split it up further to see the volume of queries where specific endpoints were returned. In addition, you can set up alerts to notify you if the query response volume crosses the conditions you have set. For more details, see [Traffic Manager metrics and alerts](traffic-manager-metrics-alerts.md).
+### When I delete a Traffic Manager profile, what is the amount of time before the name of the profile is available for reuse?
+
+It can take up to 2 hours for the name to become available after a Traffic Manager profile is deleted.
+
## Traffic Manager Geographic traffic routing method

### What are some use cases where geographic routing is useful?
Geographic routing type can be used in any scenario where an Azure customer need
### How do I decide if I should use Performance routing method or Geographic routing method?
-The key difference between these two popular routing methods is that in Performance routing method your primary goal is to send traffic to the endpoint that can provide the lowest latency to the caller, whereas, in Geographic routing the primary goal is to enforce a geo fence for your callers so that you can deliberately route them to a specific endpoint. The overlap happens since there is a correlation between geographical closeness and lower latency, although this is not always true. There might be an endpoint in a different geography that can provide a better latency experience for the caller and in that case Performance routing will send the user to that endpoint but Geographic routing will always send them to the endpoint you have mapped for their geographic region. To further make it clear, consider the following example - with Geographic routing you can make uncommon mappings such as send all traffic from Asia to endpoints in the US and all US traffic to endpoints in Asia. In that case, Geographic routing will deliberately do exactly what you have configured it to do and performance optimization is not a consideration.
+The key difference between these two popular routing methods is that in Performance routing method your primary goal is to send traffic to the endpoint that can provide the lowest latency to the caller, whereas, in Geographic routing the primary goal is to enforce a geo-fence for your callers so that you can deliberately route them to a specific endpoint. The overlap happens since there's a correlation between geographical closeness and lower latency, although this isn't always true. There might be an endpoint in a different geography that can provide a better latency experience for the caller and in that case Performance routing will send the user to that endpoint but Geographic routing will always send them to the endpoint you've mapped for their geographic region. To further make it clear, consider the following example - with Geographic routing you can make uncommon mappings such as send all traffic from Asia to endpoints in the US and all US traffic to endpoints in Asia. In that case, Geographic routing will deliberately do exactly what you have configured it to do and performance optimization isn't a consideration.
>[!NOTE]
>There may be scenarios where you might need both performance and geographic routing capabilities. For these scenarios, nested profiles can be a great choice. For example, you can set up a parent profile with geographic routing where you send all traffic from North America to a nested profile that has endpoints in the US and use performance routing to send that traffic to the best endpoint within that set.

### What are the regions that are supported by Traffic Manager for geographic routing?
-The country/region hierarchy that is used by Traffic Manager can be found [here](traffic-manager-geographic-regions.md). While this page is kept up-to-date with any changes, you can also programmatically retrieve the same information by using the [Azure Traffic Manager REST API](/rest/api/trafficmanager/).
+The country/region hierarchy that is used by Traffic Manager can be found [here](traffic-manager-geographic-regions.md). While this page is kept up to date with any changes, you can also programmatically retrieve the same information by using the [Azure Traffic Manager REST API](/rest/api/trafficmanager/).
### How does Traffic Manager determine where a user is querying from?
Traffic Manager looks at the source IP of the query (this most likely is a local
### Is it guaranteed that Traffic Manager can correctly determine the exact geographic location of the user in every case?
-No, Traffic Manager cannot guarantee that the geographic region we infer from the source IP address of a DNS query will always correspond to the user's location due to the following reasons:
+No, Traffic Manager can't guarantee that the geographic region we infer from the source IP address of a DNS query will always correspond to the user's location due to the following reasons:
-- First, as described in the previous FAQ, the source IP address we see is that of a DNS resolver doing the lookup on behalf of the user. While the geographic location of the DNS resolver is a good proxy for the geographic location of the user, it can also be different depending upon the footprint of the DNS resolver service and the specific DNS resolver service a customer has chosen to use.
-As an example, a customer located in Malaysia could specify in their device's settings use a DNS resolver service whose DNS server in Singapore might get picked to handle the query resolutions for that user/device. In that case, Traffic Manager can only see the resolver's IP address that corresponds to the Singapore location. Also, see the earlier FAQ regarding client subnet address support on this page.
+- First, as described in the previous FAQ, the source IP we see is that of a DNS resolver doing the lookup on behalf of the user. While the geographic location of the DNS resolver is a good proxy for the geographic location of the user, it can also be different depending upon the footprint of the DNS resolver service and the specific DNS resolver service a customer has chosen to use.
+As an example, a customer located in Malaysia could specify in their device's settings a DNS resolver service whose DNS server in Singapore might get picked to handle the query resolutions for that user/device. In that case, Traffic Manager can only see the resolver's IP that corresponds to the Singapore location. Also, see the earlier FAQ regarding client subnet address support on this page.
-- Second, Traffic Manager uses an internal map to do the IP address to geographic region translation. While this map is validated and updated on an ongoing basis to increase its accuracy and account for the evolving nature of the internet, there is still the possibility that our information is not an exact representation of the geographic location of all the IP addresses.
+- Second, Traffic Manager uses an internal map to do the IP address to geographic region translation. While this map is validated and updated on an ongoing basis to increase its accuracy and account for the evolving nature of the internet, there's still the possibility that our information isn't an exact representation of the geographic location of all the IP addresses.
-### Does an endpoint need to be physically located in the same region as the one it is configured with for geographic routing?
+### Does an endpoint need to be physically located in the same region as the one it's configured with for geographic routing?
No, the location of the endpoint imposes no restrictions on which regions can be mapped to it. For example, an endpoint in US-Central Azure region can have all users from India directed to it.
-### Can I assign geographic regions to endpoints in a profile that is not configured to do geographic routing?
+### Can I assign geographic regions to endpoints in a profile that isn't configured to do geographic routing?
-Yes, if the routing method of a profile is not geographic, you can use the [Azure Traffic Manager REST API](/rest/api/trafficmanager/) to assign geographic regions to endpoints in that profile. In the case of non-geographic routing type profiles, this configuration is ignored. If you change such a profile to geographic routing type at a later time, Traffic Manager can use those mappings.
+Yes, if the routing method of a profile isn't geographic, you can use the [Azure Traffic Manager REST API](/rest/api/trafficmanager/) to assign geographic regions to endpoints in that profile. In the case of non-geographic routing type profiles, this configuration is ignored. If you change such a profile to geographic routing type at a later time, Traffic Manager can use those mappings.
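The same assignment can also be made with the Python management SDK; a hedged sketch follows, where the resource names and the region code are placeholders.

```python
# A hedged sketch using azure-mgmt-trafficmanager; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.endpoints.create_or_update(
    "my-rg", "my-profile", "ExternalEndpoints", "my-endpoint",
    {
        "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
        "target": "app.contoso.com",
        "endpoint_status": "Enabled",
        "geo_mapping": ["GEO-NA"],  # e.g. map all of North America to this endpoint
    },
)
```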
### Why am I getting an error when I try to change the routing method of an existing profile to Geographic?
All the endpoints under a profile with geographic routing need to have at least
### Why is it strongly recommended that customers create nested profiles instead of endpoints under a profile with geographic routing enabled?
-A region can be assigned to only one endpoint within a profile if it is using the geographic routing method. If that endpoint is not a nested type with a child profile attached to it, if that endpoint going unhealthy, Traffic Manager continues to send traffic to it since the alternative of not sending any traffic isn't any better. Traffic Manager does not failover to another endpoint, even when the region assigned is a "parent" of the region assigned to the endpoint that went unhealthy (for example, if an endpoint that has region Spain goes unhealthy, we do not failover to another endpoint that has the region Europe assigned to it). This is done to ensure that Traffic Manager respects the geographic boundaries that a customer has setup in their profile. To get the benefit of failing over to another endpoint when an endpoint goes unhealthy, it is recommended that geographic regions be assigned to nested profiles with multiple endpoints within it instead of individual endpoints. In this way, if an endpoint in the nested child profile fails, traffic can failover to another endpoint inside the same nested child profile.
+A region can be assigned to only one endpoint within a profile if it's using the geographic routing method. If that endpoint isn't a nested type with a child profile attached to it and that endpoint goes unhealthy, Traffic Manager continues to send traffic to it since the alternative of not sending any traffic isn't any better. Traffic Manager doesn't fail over to another endpoint, even when the region assigned is a "parent" of the region assigned to the endpoint that went unhealthy (for example, if an endpoint that has region Spain goes unhealthy, we don't fail over to another endpoint that has the region Europe assigned to it). This is done to ensure that Traffic Manager respects the geographic boundaries that a customer has set up in their profile. To get the benefit of failing over to another endpoint when an endpoint goes unhealthy, it's recommended that geographic regions be assigned to nested profiles with multiple endpoints within it instead of individual endpoints. In this way, if an endpoint in the nested child profile fails, traffic can fail over to another endpoint inside the same nested child profile.
### Are there any restrictions on the API version that supports this routing type?
-Yes, only API version 2017-03-01 and newer supports the Geographic routing type. Any older API versions cannot be used to created profiles of Geographic routing type or assign geographic regions to endpoints. If an older API version is used to retrieve profiles from an Azure subscription, any profile of Geographic routing type is not returned. In addition, when using older API versions, any profile returned that has endpoints with a geographic region assignment, does not have its geographic region assignment shown.
+Yes, only API version 2017-03-01 and newer supports the Geographic routing type. Any older API versions can't be used to create profiles of Geographic routing type or assign geographic regions to endpoints. If an older API version is used to retrieve profiles from an Azure subscription, any profile of Geographic routing type isn't returned. In addition, when using older API versions, any profile returned that has endpoints with a geographic region assignment doesn't have its geographic region assignment shown.
## Traffic Manager Subnet traffic routing method

### What are some use cases where subnet routing is useful?
-Subnet routing allows you to differentiate the experience you deliver for specific sets of users identified by the source IP of their DNS requests IP address. An example would be showing different content if users are connecting to a website from your corporate HQ. Another would be restricting users from certain ISPs to only access endpoints that support only IPv4 connections if those ISPs have sub-par performance when IPv6 is used.
+Subnet routing allows you to differentiate the experience you deliver for specific sets of users, identified by the source IP address of their DNS requests. An example would be showing different content if users are connecting to a website from your corporate HQ. Another would be restricting users from certain ISPs to only access endpoints that support only IPv4 connections if those ISPs have subpar performance when IPv6 is used.
Another reason to use Subnet routing method is in conjunction with other profiles in a nested profile set. For example, if you want to use Geographic routing method for geo-fencing your users, but for a specific ISP you want to do a different routing method, you can have a profile with Subnet routing method as the parent profile and override that ISP to use a specific child profile and have the standard Geographic profile for everyone else.

### How does Traffic Manager know the IP address of the end user?
-End user devices typically use a DNS resolver to do the DNS lookup on their behalf. The outgoing IP of such resolvers is what Traffic Manager sees as the source IP. In addition, Subnet routing method also looks to see if there is EDNS0 Extended Client Subnet (ECS) information that was passed with the request. If ECS information is present, that is the address used to determine the routing. In the absence of ECS information, the source IP of the query is used for routing purposes.
+End-user devices typically use a DNS resolver to do the DNS lookup on their behalf. The outgoing IP of such resolvers is what Traffic Manager sees as the source IP. In addition, Subnet routing method also looks to see if there's EDNS0 Extended Client Subnet (ECS) information that was passed with the request. If ECS information is present, that is the address used to determine the routing. In the absence of ECS information, the source IP of the query is used for routing purposes.
### How can I specify IP addresses when using Subnet routing?

The IP addresses to associate with an endpoint can be specified in two ways. First, you can use the quad dotted decimal octet notation with start and end addresses to specify the range (for example, 1.2.3.4-5.6.7.8 or 3.4.5.6-3.4.5.6). Second, you can use the CIDR notation to specify the range (for example, 1.2.3.0/24). You can specify multiple ranges and can use both notation types in a range set. A few restrictions apply; see the sketch after this list.

-- You cannot have overlap of address ranges since each IP needs to be mapped to only a single endpoint
-- The start address cannot be more than the end address
+- You can't have overlap of address ranges since each IP needs to be mapped to only a single endpoint
+- The start address can't be more than the end address
- In the case of the CIDR notation, the IP address before the '/' should be the start address of that range (for example, 1.2.3.0/24 is valid but 1.2.3.4/24 is NOT valid)
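Here's a minimal Python sketch of those restrictions; it assumes nothing about Traffic Manager's own implementation and only mirrors the rules listed above.

```python
# A minimal sketch of the restrictions above; not Traffic Manager's own logic.
from ipaddress import ip_address, ip_network

def to_range(spec: str) -> tuple[int, int]:
    """Parse '1.2.3.4-5.6.7.8' or strict CIDR like '1.2.3.0/24' into (start, end)."""
    if "/" in spec:
        net = ip_network(spec)  # strict: raises ValueError for e.g. 1.2.3.4/24
        return int(net.network_address), int(net.broadcast_address)
    start, _, end = spec.partition("-")
    lo, hi = int(ip_address(start)), int(ip_address(end or start))
    if lo > hi:
        raise ValueError(f"start address is greater than end address: {spec}")
    return lo, hi

ranges = sorted(to_range(s) for s in ["1.2.3.0/24", "3.4.5.6-3.4.5.6"])
for (_, prev_end), (next_start, _) in zip(ranges, ranges[1:]):
    if next_start <= prev_end:
        raise ValueError("address ranges overlap")  # each IP maps to one endpoint
```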
### How can I specify a fallback endpoint when using Subnet routing?

-In a profile with Subnet routing, if you have an endpoint with no subnets mapped to it, any request that does not match with other endpoints will be directed to here. It is highly recommended that you have such a fallback endpoint in your profile since Traffic Manager will return a NXDOMAIN response if a request comes in and it is not mapped to any endpoints or if it is mapped to an endpoint but that endpoint is unhealthy.
+In a profile with Subnet routing, if you have an endpoint with no subnets mapped to it, any request that doesn't match other endpoints will be directed there. It's highly recommended that you have such a fallback endpoint in your profile since Traffic Manager will return an NXDOMAIN response if a request comes in and it isn't mapped to any endpoints, or if it's mapped to an endpoint but that endpoint is unhealthy.
### What happens if an endpoint is disabled in a Subnet routing type profile?
-In a profile with Subnet routing, if you have an endpoint with that is disabled, Traffic Manager will behave as if that endpoint and the subnet mappings it has does not exist. If a query that would've matched with its IP address mapping is received and the endpoint is disabled, Traffic Manager will return a fallback endpoint (one with no mappings) or if such an endpoint is not present, will return a NXDOMAIN response.
+In a profile with Subnet routing, if you have an endpoint that is disabled, Traffic Manager will behave as if that endpoint and the subnet mappings it has don't exist. If a query that would have matched with its IP address mapping is received and the endpoint is disabled, Traffic Manager will return a fallback endpoint (one with no mappings) or, if such an endpoint isn't present, will return an NXDOMAIN response.
## Traffic Manager MultiValue traffic routing method

### What are some use cases where MultiValue routing is useful?
-MultiValue routing returns multiple healthy endpoints in a single query response. The main advantage of this is that, if an endpoint is unhealthy, the client has more options to retry without making another DNS call (which might return the same value from an upstream cache). This is applicable for availability sensitive applications that want to minimize the downtime.
+MultiValue routing returns multiple healthy endpoints in a single query response. The main advantage of this is that, if an endpoint is unhealthy, the client has more options to retry without making another DNS call (which might return the same value from an upstream cache). This is applicable for availability-sensitive applications that want to minimize the downtime.
Another use for MultiValue routing method is if an endpoint is "dual-homed" to both IPv4 and IPv6 addresses and you want to give the caller both options to choose from when it initiates a connection to the endpoint.

### How many endpoints are returned when MultiValue routing is used?
You can specify the maximum number of endpoints to be returned and MultiValue wi
### Will I get the same set of endpoints when MultiValue routing is used?
-We cannot guarantee that the same set of endpoints will be returned in each query. This is also affected by the fact that some of the endpoints might go unhealthy at which point they will not be included in the response
+We can't guarantee that the same set of endpoints will be returned in each query. This is also affected by the fact that some of the endpoints might go unhealthy, at which point they won't be included in the response.
## Real User Measurements

### What are the benefits of using Real User Measurements?
-When you use performance routing method, Traffic Manager picks the best Azure region for your end user to connect to by inspecting the source IP and EDNS Client Subnet (if passed in) and checking it against the network latency intelligence the service maintains. Real User Measurements enhances this for your end user base by having their experience contribute to this latency table in addition to ensuring that this table adequately spans the end user networks from where your end users connect to Azure. This leads to an increased accuracy in the routing of your end user.
+When you use performance routing method, Traffic Manager picks the best Azure region for your end user to connect to by inspecting the source IP and EDNS Client Subnet (if passed in) and checking it against the network latency intelligence the service maintains. Real User Measurements enhances this for your end-user base by having their experience contribute to this latency table in addition to ensuring that this table adequately spans the end-user networks from where your end users connect to Azure. This leads to increased accuracy in the routing of your end users.
### Can I use Real User Measurements with non-Azure regions?
-Real User Measurements measures and reports on only the latency to reach Azure regions. If you are using performance-based routing with endpoints hosted in non-Azure regions, you can still benefit from this feature by having increased latency information about the representative Azure region you had selected to be associated with this endpoint.
+Real User Measurements measures and reports on only the latency to reach Azure regions. If you're using performance-based routing with endpoints hosted in non-Azure regions, you can still benefit from this feature by having increased latency information about the representative Azure region you had selected to be associated with this endpoint.
### Which routing method benefits from Real User Measurements?
You can also turn off Real User Measurements by deleting your key. Once you dele
### Can I use Real User Measurements with client applications other than web pages?
+Yes, Real User Measurements is designed to ingest data collected through different types of end-user clients. This FAQ will be updated as new types of client applications get supported.
+Yes, Real User Measurements is designed to ingest data collected through different type of end-user clients. This FAQ will be updated as new types of client applications get supported.
### How many measurements are made each time my Real User Measurements enabled web page is rendered?
-When Real User Measurements is used with the measurement JavaScript provided, each page rendering results in six measurements being taken. These are then reported back to the Traffic Manager service. You are charged for this feature based on the number of measurements reported to Traffic Manager service. For example, if the user navigates away from your webpage while the measurements are being taken but before it was reported, those measurements are not considered for billing purposes.
+When Real User Measurements is used with the measurement JavaScript provided, each page rendering results in six measurements being taken. These are then reported back to the Traffic Manager service. You are charged for this feature based on the number of measurements reported to the Traffic Manager service. For example, if the user navigates away from your webpage while the measurements are being taken but before they're reported, those measurements aren't considered for billing purposes.
### Is there a delay before Real User Measurements script runs in my webpage?
-No, there is no programmed delay before the script is invoked.
+No, there's no programmed delay before the script is invoked.
### Can I use Real User Measurements with only the Azure regions I want to measure?
-No, each time it is invoked, the Real User Measurements script measures a set of six Azure regions as determined by the service. This set changes between different invocations and when a large number of such invocations happen, the measurement coverage spans across different Azure regions.
+No, each time it's invoked, the Real User Measurements script measures a set of six Azure regions as determined by the service. This set changes between different invocations, and when a large number of such invocations happen, the measurement coverage spans across different Azure regions.
### Can I limit the number of measurements made to a specific number?
-The measurement JavaScript is embedded within your webpage and you are in complete control over when to start and stop using it. As long as the Traffic Manager service receives a request for a list of Azure regions to be measured, a set of regions are returned.
+The measurement JavaScript is embedded within your webpage and you are in complete control over when to start and stop using it. As long as the Traffic Manager service receives a request for a list of Azure regions to be measured, a set of regions is returned.
### Can I see the measurements taken by my client application as part of Real User Measurements?
-Since the measurement logic is run from your client application, you are in full control of what happens including seeing the latency measurements. Traffic Manager does not report an aggregate view of the measurements received under the key linked to your subscription.
+Since the measurement logic is run from your client application, you are in full control of what happens including seeing the latency measurements. Traffic Manager doesn't report an aggregate view of the measurements received under the key linked to your subscription.
### Can I modify the measurement script provided by Traffic Manager?
While you are in control of what is embedded on your web page, we strongly disco
### Will it be possible for others to see the key I use with Real User Measurements?
-When you embed the measurement script to a web page it will be possible for others to see the script and your Real User Measurements (RUM) key. But it is important to know that this key is different from your subscription ID and is generated by Traffic Manager to be used only for this purpose. Knowing your RUM key will not compromise your Azure account safety.
+When you embed the measurement script in a web page, it will be possible for others to see the script and your Real User Measurements (RUM) key. But it's important to know that this key is different from your subscription ID and is generated by Traffic Manager to be used only for this purpose. Knowing your RUM key won't compromise your Azure account safety.
### Can others abuse my RUM key?
-While it is possible for others to use your key to send wrong information to Azure, a few wrong measurements will not change the routing since it is taken into account along with all the other measurements we receive. If you need to change your keys, you can re-generate the key at which point the old key becomes discarded.
+While it's possible for others to use your key to send wrong information to Azure, a few wrong measurements won't change the routing since they're taken into account along with all the other measurements we receive. If you need to change your keys, you can regenerate the key, at which point the old key is discarded.
### Do I need to put the measurement JavaScript in all my web pages?
When the provided measurement JavaScript is used, Traffic Manager will have visi
### Does the webpage measuring Real User Measurements need to be using Traffic Manager for routing?
-No, it doesn't need to use Traffic Manager. The routing side of Traffic Manager operates separately from the Real User Measurement part and although it is a great idea to have them both in the same web property, they don't need to be.
+No, it doesn't need to use Traffic Manager. The routing side of Traffic Manager operates separately from the Real User Measurement part, and although it's a great idea to have them both in the same web property, they don't need to be.
### Do I need to host any service on Azure regions to use with Real User Measurements?
No, you don't need to host any server-side component on Azure for Real User Meas
### Will my Azure bandwidth usage increase when I use Real User Measurements?
-As mentioned in the previous answer, the server-side components of Real User Measurements are owned and managed by Azure. This means your Azure bandwidth usage will not increase because you use Real User Measurements. This does not include any bandwidth usage outside of what Azure charges. We minimize the bandwidth used by downloading only a single pixel image to measurement the latency to an Azure region.
+As mentioned in the previous answer, the server-side components of Real User Measurements are owned and managed by Azure. This means your Azure bandwidth usage won't increase because you use Real User Measurements. This doesn't include any bandwidth usage outside of what Azure charges. We minimize the bandwidth used by downloading only a single pixel image to measure the latency to an Azure region.
## Traffic View
Traffic View is a feature of Traffic Manager that helps you understand more abou
- The regions from where your users are connecting to your endpoints in Azure.
- The volume of users connecting from these regions.
-- The Azure regions to which they are getting routed to.
+- The Azure regions to which they're getting routed.
- Their latency experience to these Azure regions.

This information is available for you to consume through geographical map overlay and tabular views in the portal, in addition to being available as raw data for you to download.

### How can I benefit from using Traffic View?
-Traffic View gives you the overall view of the traffic your Traffic Manager profiles receive. In particular, it can be used to understand where your user base connects from and equally importantly what their average latency experience is. You can then use this information to find areas in which you need to focus, for example, by expanding your Azure footprint to a region that can serve those users with lower latency. Another insight you can derive from using Traffic View is to see the patterns of traffic to different regions which in turn can help you make decisions on increasing or decreasing invent in those regions.
+Traffic View gives you the overall view of the traffic your Traffic Manager profiles receive. In particular, it can be used to understand where your user base connects from and, equally importantly, what their average latency experience is. You can then use this information to find areas in which you need to focus, for example, by expanding your Azure footprint to a region that can serve those users with lower latency. Another insight you can derive from using Traffic View is to see the patterns of traffic to different regions, which in turn can help you make decisions on increasing or decreasing investment in those regions.
### How is Traffic View different from the Traffic Manager metrics available through Azure monitor?
The DNS queries served by Azure Traffic Manager do consider ECS information to i
### How many days of data does Traffic View use?
-Traffic View creates its output by processing the data from the seven days preceding the day before when it is viewed by you. This is a moving window and the latest data will be used each time you visit.
+Traffic View creates its output by processing the data from the seven days preceding the day before it's viewed by you. This is a moving window, and the latest data will be used each time you visit.
### How does Traffic View handle external endpoints?
-When you use external endpoints hosted outside Azure regions in a Traffic Manager profile you can choose to have it mapped to an Azure region which is a proxy for its latency characteristics (this is in fact needed if you use performance routing method). If it has this Azure region mapping, that Azure region's latency metrics will be used when creating the Traffic View output. If no Azure region is specified, the latency information will be empty in the data for those external endpoints.
+When you use external endpoints hosted outside Azure regions in a Traffic Manager profile, you can choose to have it mapped to an Azure region, which is a proxy for its latency characteristics (this is in fact needed if you use performance routing method). If it has this Azure region mapping, that Azure region's latency metrics will be used when creating the Traffic View output. If no Azure region is specified, the latency information will be empty in the data for those external endpoints.
### Do I need to enable Traffic View for each profile in my subscription?
You can turn off Traffic View for any profile using the Portal or REST API.
### How does Traffic View billing work?
-Traffic View pricing is based on the number of data points used to create the output. Currently, the only data type supported is the queries your profile receives. In addition, you are only billed for the processing that was done when you have Traffic View enabled. This means that, if you enable Traffic View for some time period in a month and turn it off during other times, only the data points processed while you had the feature enabled count towards your bill.
+Traffic View pricing is based on the number of data points used to create the output. Currently, the only data type supported is the queries your profile receives. In addition, you're only billed for the processing that was done when you have Traffic View enabled. This means that, if you enable Traffic View for some time period in a month and turn it off during other times, only the data points processed while you had the feature enabled count towards your bill.
## Traffic Manager endpoints

### Can I use Traffic Manager with endpoints from multiple subscriptions?
-Using endpoints from multiple subscriptions is not possible with Azure Web Apps. Azure Web Apps requires that any custom domain name used with Web Apps is only used within a single subscription. It is not possible to use Web Apps from multiple subscriptions with the same domain name.
+Using endpoints from multiple subscriptions isn't possible with Azure Web Apps. Azure Web Apps requires that any custom domain name used with Web Apps is only used within a single subscription. It isn't possible to use Web Apps from multiple subscriptions with the same domain name.
-For other endpoint types, it is possible to use Traffic Manager with endpoints from more than one subscription. In Resource Manager, endpoints from any subscription can be added to Traffic Manager, as long as the person configuring the Traffic Manager profile has read access to the endpoint. These permissions can be granted using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Endpoints from other subscriptions can be added using [Azure PowerShell](/powershell/module/az.trafficmanager/new-aztrafficmanagerendpoint) or the [Azure CLI](/cli/azure/network/traffic-manager/endpoint#az_network_traffic_manager_endpoint_create).
+For other endpoint types, it's possible to use Traffic Manager with endpoints from more than one subscription. In Resource Manager, endpoints from any subscription can be added to Traffic Manager, as long as the person configuring the Traffic Manager profile has read access to the endpoint. These permissions can be granted using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Endpoints from other subscriptions can be added using [Azure PowerShell](/powershell/module/az.trafficmanager/new-aztrafficmanagerendpoint) or the [Azure CLI](/cli/azure/network/traffic-manager/endpoint#az_network_traffic_manager_endpoint_create).
### Can I use Traffic Manager with Cloud Service 'Staging' slots?
Yes. Cloud Service 'staging' slots can be configured in Traffic Manager as Exter
### Does Traffic Manager support IPv6 endpoints?
-Traffic Manager does not currently provide IPv6-addressable name servers. However, Traffic Manager can still be used by IPv6 clients connecting to IPv6 endpoints. A client does not make DNS requests directly to Traffic Manager. Instead, the client uses a recursive DNS service. An IPv6-only client sends requests to the recursive DNS service via IPv6. Then the recursive service should be able to contact the Traffic Manager name servers using IPv4.
+Traffic Manager doesn't currently provide IPv6-addressable name servers. However, Traffic Manager can still be used by IPv6 clients connecting to IPv6 endpoints. A client doesn't make DNS requests directly to Traffic Manager. Instead, the client uses a recursive DNS service. An IPv6-only client sends requests to the recursive DNS service via IPv6. Then the recursive service should be able to contact the Traffic Manager name servers using IPv4.
Traffic Manager responds with the DNS name or IP address of the endpoint. To support an IPv6 endpoint, there are two options. You can add the endpoint as a DNS name that has an associated AAAA record and Traffic Manager will health check that endpoint and return it as a CNAME record type in the query response. You can also add that endpoint directly using the IPv6 address and Traffic Manager will return an AAAA type record in the query response.

### Can I use Traffic Manager with more than one Web App in the same region?
-Typically, Traffic Manager is used to direct traffic to applications deployed in different regions. However, it can also be used where an application has more than one deployment in the same region. The Traffic Manager Azure endpoints do not permit more than one Web App endpoint from the same Azure region to be added to the same Traffic Manager profile.
+Typically, Traffic Manager is used to direct traffic to applications deployed in different regions. However, it can also be used where an application has more than one deployment in the same region. The Traffic Manager Azure endpoints don't permit more than one Web App endpoint from the same Azure region to be added to the same Traffic Manager profile.
### How do I move my Traffic Manager profile's Azure endpoints to a different resource group or subscription?
In the unlikely event of an outage of an entire Azure region, Traffic Manager is
### How does the choice of resource group location affect Traffic Manager?
-Traffic Manager is a single, global service. It is not regional. The choice of resource group location makes no difference to Traffic Manager profiles deployed in that resource group.
+Traffic Manager is a single, global service. It isn't regional. The choice of resource group location makes no difference to Traffic Manager profiles deployed in that resource group.
-Azure Resource Manager requires all resource groups to specify a location, which determines the default location for resources deployed in that resource group. When you create a Traffic Manager profile, it is created in a resource group. All Traffic Manager profiles use **global** as their location, overriding the resource group default.
+Azure Resource Manager requires all resource groups to specify a location, which determines the default location for resources deployed in that resource group. When you create a Traffic Manager profile, it's created in a resource group. All Traffic Manager profiles use **global** as their location, overriding the resource group default.
### How do I determine the current health of each endpoint?
You can also use Azure Monitor to track the health of your endpoints and see a v
Yes. Traffic Manager supports probing over HTTPS. Configure **HTTPS** as the protocol in the monitoring configuration.
-Traffic manager cannot provide any certificate validation, including:
+Traffic Manager can't provide any certificate validation, including:
-* Server-side certificates are not validated
-* SNI server-side certificates are not validated
-* Client certificates are not supported
+* Server-side certificates arenΓÇÖt validated
+* SNI server-side certificates arenΓÇÖt validated
+* Client certificates arenΓÇÖt supported
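As a minimal sketch, HTTPS probing can be enabled on an existing profile like this (hypothetical names; per the limitations above, the probe establishes the TLS connection without validating the certificate):

```azurecli
# Switch the profile's health checks to HTTPS on port 443 against /health.
az network traffic-manager profile update \
  --name myProfile \
  --resource-group myResourceGroup \
  --protocol HTTPS \
  --port 443 \
  --path "/health"
```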
### Do I use an IP address or a DNS name when adding an endpoint?
All routing methods and monitoring settings are supported by the three endpoint
### What types of IP addresses can I use when adding an endpoint?
-Traffic Manager allows you to use IPv4 or IPv6 addresses to specify endpoints. There are a few restrictions which are listed below:
+Traffic Manager allows you to use IPv4 or IPv6 addresses to specify endpoints. There are a few restrictions, which are listed below:
-- Addresses that correspond to reserved private IP address spaces are not allowed. These addresses include those called out in RFC 1918, RFC 6890, RFC 5737, RFC 3068, RFC 2544 and RFC 5771
+- Addresses that correspond to reserved private IP address spaces aren't allowed. These addresses include those called out in RFC 1918, RFC 6890, RFC 5737, RFC 3068, RFC 2544, and RFC 5771.
- The address must not contain any port numbers (you can specify the ports to be used in the profile configuration settings)
- No two endpoints in the same profile can have the same target IP address

### Can I use different endpoint addressing types within a single profile?
-No, Traffic Manager does not allow you to mix endpoint addressing types within a profile, except for the case of a profile with MultiValue routing type where you can mix IPv4 and IPv6 addressing types
+No, Traffic Manager doesn't allow you to mix endpoint addressing types within a profile, except for a profile with the MultiValue routing type, where you can mix IPv4 and IPv6 addressing types.
### What happens when an incoming query's record type is different from the record type associated with the addressing type of the endpoints?
For profiles with routing method set to MultiValue:
### Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?
-Yes, you can with the exception that a profile of type MultiValue cannot be a parent profile in a nested profile set.
+Yes, you can, except that a profile of type MultiValue can't be a parent profile in a nested profile set.
-### I stopped an web application endpoint in my Traffic Manager profile but I am not receiving any traffic even after I restarted it. How can I fix this?
+### I stopped a web application endpoint in my Traffic Manager profile but IΓÇÖm not receiving any traffic even after I restarted it. How can I fix this?
When an Azure web application endpoint is stopped, Traffic Manager stops checking its health and restarts the health checks only after it detects that the endpoint has restarted. To prevent this delay, disable and then reenable that endpoint in the Traffic Manager profile after you restart the endpoint.
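A sketch of that disable/reenable cycle with the Azure CLI, assuming hypothetical profile and endpoint names:

```azurecli
# After restarting the web app, disable the endpoint...
az network traffic-manager endpoint update \
  --resource-group myResourceGroup \
  --profile-name myProfile \
  --name myWebAppEndpoint \
  --type azureEndpoints \
  --endpoint-status Disabled

# ...then reenable it so health checks resume without waiting for detection.
az network traffic-manager endpoint update \
  --resource-group myResourceGroup \
  --profile-name myProfile \
  --name myWebAppEndpoint \
  --type azureEndpoints \
  --endpoint-status Enabled
```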
-### Can I use Traffic Manager even if my application does not have support for HTTP or HTTPS?
+### Can I use Traffic Manager even if my application doesnΓÇÖt have support for HTTP or HTTPS?
Yes. You can specify TCP as the monitoring protocol, and Traffic Manager can initiate a TCP connection and wait for a response from the endpoint. If the endpoint replies to the connection request within the timeout period, that endpoint is marked as healthy.

### What specific responses are required from the endpoint when using TCP monitoring?
-When TCP monitoring is used, Traffic Manager starts a three-way TCP handshake by sending a SYN request to endpoint at the specified port. It then waits for a SYN-ACK response from the endpoint for a period of time (specified in the timeout settings).
+When TCP monitoring is used, Traffic Manager starts a three-way TCP handshake by sending a SYN request to the endpoint at the specified port. It then waits for a SYN-ACK response from the endpoint for a period of time (specified in the timeout settings).
-- If a SYN-ACK response is received within the timeout period specified in the monitoring settings, then that endpoint is considered healthy. A FIN or FIN-ACK is the expected response from the Traffic Manager when it regularly terminates a socket.
-- If a SYN-ACK response is received after the specified timeout, the Traffic Manager will respond with an RST to reset the connection.
+- If a SYN-ACK response is received within the timeout period specified in the monitoring settings, then that endpoint is considered healthy. A FIN or FIN-ACK is the expected response from Traffic Manager when it terminates a socket normally.
+- If a SYN-ACK response is received after the specified timeout, Traffic Manager responds with an RST to reset the connection.
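A minimal sketch of switching an existing profile to TCP monitoring, assuming hypothetical names and an application listening on port 8080:

```azurecli
# Health checks now open a TCP connection to port 8080 instead of sending
# an HTTP(S) request.
az network traffic-manager profile update \
  --name myProfile \
  --resource-group myResourceGroup \
  --protocol TCP \
  --port 8080
```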
### How fast does Traffic Manager move my users away from an unhealthy endpoint?

Traffic Manager provides multiple settings that can help you control the failover behavior of your Traffic Manager profile, as follows:

- you can specify that Traffic Manager probes the endpoints more frequently by setting the Probing Interval to 10 seconds. This ensures that any endpoint going unhealthy can be detected as soon as possible.
-- you can specify how long to wait before a health check request times out (minimum time out value is 5 sec).
+- you can specify how long to wait before a health check request times out (minimum time-out value is 5 sec).
- you can specify how many failures can occur before the endpoint is marked as unhealthy. This value can be as low as 0, in which case the endpoint is marked unhealthy as soon as it fails the first health check. However, using the minimum value of 0 for the tolerated number of failures can lead to endpoints being taken out of rotation due to any transient issues that may occur at the time of probing.
-- you can specify the time-to-live (TTL) for the DNS response to be as low as 0. Doing so means that DNS resolvers cannot cache the response and each new query gets a response that incorporates the most up-to-date health information that the Traffic Manager has.
+- you can specify the time-to-live (TTL) for the DNS response to be as low as 0. Doing so means that DNS resolvers canΓÇÖt cache the response and each new query gets a response that incorporates the most up-to-date health information that the Traffic Manager has.
+By using these settings, Traffic Manager can fail over within 10 seconds after an endpoint goes unhealthy and a DNS query is made against the corresponding profile.
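A sketch that applies those most aggressive values to a hypothetical profile:

```azurecli
# 10-second probes, 5-second timeout, no tolerated failures, and a TTL of 0
# so resolvers don't cache stale answers.
az network traffic-manager profile update \
  --name myProfile \
  --resource-group myResourceGroup \
  --interval 10 \
  --timeout 5 \
  --max-failures 0 \
  --ttl 0
```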
Traffic Manager monitoring settings are at a per profile level. If you need to u
### How can I assign HTTP headers to the Traffic Manager health checks to my endpoints?
-Traffic Manager allows you to specify custom headers in the HTTP(S) health checks it initiates to your endpoints. If you want to specify a custom header, you can do that at the profile level (applicable to all endpoints) or specify it at the endpoint level. If a header is defined at both levels, then the one specified at the endpoint level will override the profile level one.
+Traffic Manager allows you to specify custom headers in the HTTP(S) health checks it initiates to your endpoints. If you want to specify a custom header, you can do that at the profile level (applicable to all endpoints) or at the endpoint level. If a header is defined at both levels, the one specified at the endpoint level overrides the profile-level one, as the sketch below shows.
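A sketch of both levels with the Azure CLI, using hypothetical names; `--custom-headers` takes space-separated `key=value` pairs:

```azurecli
# Profile level: the header applies to health checks for all endpoints.
az network traffic-manager profile update \
  --name myProfile \
  --resource-group myResourceGroup \
  --custom-headers host=contoso.com

# Endpoint level: overrides the profile-level header for this endpoint only.
az network traffic-manager endpoint update \
  --resource-group myResourceGroup \
  --profile-name myProfile \
  --name myEndpoint \
  --type azureEndpoints \
  --custom-headers host=app1.contoso.com
```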
One common use case for this is specifying host headers so that Traffic Manager requests get routed correctly to an endpoint hosted in a multi-tenant environment. Another use case is identifying Traffic Manager requests in an endpoint's HTTP(S) request logs.

### What host header do endpoint health checks use?
The number of Traffic Manager health checks reaching your endpoint depends on th
### How can I get notified if one of my endpoints goes down?
-One of the metrics provided by Traffic Manager is the health status of endpoints in a profile. You can see this as an aggregate of all endpoints inside a profile (for example, 75% of your endpoints are healthy), or, at a per endpoint level. Traffic Manager metrics are exposed through Azure Monitor and you can use its [alerting capabilities](../azure-monitor/alerts/alerts-metric.md) to get notifications when there is a change in the health status of your endpoint. For more details, see [Traffic Manager metrics and alerts](traffic-manager-metrics-alerts.md).
+One of the metrics provided by Traffic Manager is the health status of endpoints in a profile. You can see this as an aggregate of all endpoints inside a profile (for example, 75% of your endpoints are healthy), or at a per-endpoint level. Traffic Manager metrics are exposed through Azure Monitor, and you can use its [alerting capabilities](../azure-monitor/alerts/alerts-metric.md) to get notifications when there's a change in the health status of your endpoint. For more information, see [Traffic Manager metrics and alerts](traffic-manager-metrics-alerts.md).
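A hedged sketch of such an alert, assuming hypothetical names and the endpoint-status metric that Traffic Manager emits (`ProbeAgentCurrentEndpointStateByProfileResourceId`, where 1 means healthy):

```azurecli
# Look up the profile's resource ID to scope the alert.
profileId=$(az network traffic-manager profile show \
  --name myProfile --resource-group myResourceGroup --query id -o tsv)

# Fire when the average endpoint status drops below healthy.
az monitor metrics alert create \
  --name tm-endpoint-unhealthy \
  --resource-group myResourceGroup \
  --scopes $profileId \
  --condition "avg ProbeAgentCurrentEndpointStateByProfileResourceId < 1" \
  --description "An endpoint in the Traffic Manager profile is unhealthy"
```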
## Traffic Manager nested profiles

### How do I configure nested profiles?
-Nested Traffic Manager profiles can be configured using both the Azure Resource Manager and the classic Azure REST APIs, Azure PowerShell cmdlets and cross-platform Azure CLI commands. They are also supported via the new Azure portal.
+Nested Traffic Manager profiles can be configured using the Azure Resource Manager and classic Azure REST APIs, Azure PowerShell cmdlets, and cross-platform Azure CLI commands. They're also supported in the Azure portal.
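For example, a minimal CLI sketch (hypothetical names) that adds an existing child profile as a nested endpoint of a parent profile:

```azurecli
# Look up the child profile's resource ID.
childId=$(az network traffic-manager profile show \
  --name childProfile --resource-group myResourceGroup --query id -o tsv)

# Add it to the parent as a nested endpoint; the parent treats the child as
# healthy while at least --min-child-endpoints child endpoints are online.
az network traffic-manager endpoint create \
  --resource-group myResourceGroup \
  --profile-name parentProfile \
  --name childEndpoint \
  --type nestedEndpoints \
  --target-resource-id $childId \
  --min-child-endpoints 1
```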
### How many layers of nesting does Traffic Manager support?
-You can nest profiles up to 10 levels deep. 'Loops' are not permitted.
+You can nest profiles up to 10 levels deep. 'Loops' arenΓÇÖt permitted.
### Can I mix other endpoint types with nested child profiles, in the same Traffic Manager profile?
Yes. There are no restrictions on how you combine endpoints of different types w
### How does the billing model apply for Nested profiles?
-There is no negative pricing impact of using nested profiles.
+ThereΓÇÖs no negative pricing impact of using nested profiles.
Traffic Manager billing has two components: endpoint health checks and millions of DNS queries
-* Endpoint health checks: There is no charge for a child profile when configured as an endpoint in a parent profile. Monitoring of the endpoints in the child profile is billed in the usual way.
+* Endpoint health checks: ThereΓÇÖs no charge for a child profile when configured as an endpoint in a parent profile. Monitoring of the endpoints in the child profile is billed in the usual way.
* DNS queries: Each query is only counted once. A query against a parent profile that returns an endpoint from a child profile is counted against the parent profile only. For full details, see the [Traffic Manager pricing page](https://azure.microsoft.com/pricing/details/traffic-manager/).

### Is there a performance impact for nested profiles?
-No. There is no performance impact incurred when using nested profiles.
+No. ThereΓÇÖs no performance impact incurred when using nested profiles.
-The Traffic Manager name servers traverse the profile hierarchy internally when processing each DNS query. A DNS query to a parent profile can receive a DNS response with an endpoint from a child profile. A single CNAME record is used whether you are using a single profile or nested profiles. There is no need to create a CNAME record for each profile in the hierarchy.
+The Traffic Manager name servers traverse the profile hierarchy internally when processing each DNS query. A DNS query to a parent profile can receive a DNS response with an endpoint from a child profile. A single CNAME record is used whether youΓÇÖre using a single profile or nested profiles. ThereΓÇÖs no need to create a CNAME record for each profile in the hierarchy.
### How does Traffic Manager compute the health of a nested endpoint in a parent profile?
The following table describes the behavior of Traffic Manager health checks for
| Child Profile Monitor status | Parent Endpoint Monitor status | Notes |
| --- | --- | --- |
-| Disabled. The child profile has been disabled. |Stopped |The parent endpoint state is Stopped, not Disabled. The Disabled state is reserved for indicating that you have disabled the endpoint in the parent profile. |
+| Disabled. The child profile has been disabled. |Stopped |The parent endpoint state is Stopped, not Disabled. The Disabled state is reserved for indicating that youΓÇÖve disabled the endpoint in the parent profile. |
| Degraded. At least one child profile endpoint is in a Degraded state. |Online: the number of Online endpoints in the child profile is at least the value of MinChildEndpoints.<BR>CheckingEndpoint: the number of Online plus CheckingEndpoint endpoints in the child profile is at least the value of MinChildEndpoints.<BR>Degraded: otherwise. |Traffic is routed to an endpoint of status CheckingEndpoint. If MinChildEndpoints is set too high, the endpoint is always degraded. |
| Online. At least one child profile endpoint is in an Online state. No endpoint is in the Degraded state. |See above. | |
| CheckingEndpoints. At least one child profile endpoint is 'CheckingEndpoint'. No endpoints are 'Online' or 'Degraded'. |Same as above. | |
traffic-manager Traffic Manager Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-monitoring.md
For more information about troubleshooting failed health checks, see [Troublesho
* [Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?](./traffic-manager-faqs.md#can-i-use-a-profile-with-ipv4--ipv6-addressed-endpoints-in-a-nested-profile)
-* [I stopped an web application endpoint in my Traffic Manager profile but I'm not receiving any traffic even after I restarted it. How can I fix this?](./traffic-manager-faqs.md#i-stopped-an-web-application-endpoint-in-my-traffic-manager-profile-but-i-am-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
+* [I stopped a web application endpoint in my Traffic Manager profile but I'm not receiving any traffic even after I restarted it. How can I fix this?](./traffic-manager-faqs.md#i-stopped-a-web-application-endpoint-in-my-traffic-manager-profile-but-im-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
-* [Can I use Traffic Manager even if my application doesn't have support for HTTP or HTTPS?](./traffic-manager-faqs.md#can-i-use-traffic-manager-even-if-my-application-does-not-have-support-for-http-or-https)
+* [Can I use Traffic Manager even if my application doesn't have support for HTTP or HTTPS?](./traffic-manager-faqs.md#can-i-use-traffic-manager-even-if-my-application-doesnt-have-support-for-http-or-https)
* [What specific responses are required from the endpoint when using TCP monitoring?](./traffic-manager-faqs.md#what-specific-responses-are-required-from-the-endpoint-when-using-tcp-monitoring)
traffic-manager Traffic Manager Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/traffic-manager-routing-methods.md
As explained in [How Traffic Manager Works](traffic-manager-how-it-works.md), Tr
* [Is it guaranteed that Traffic Manager can correctly determine the exact geographic location of the user in every case?](./traffic-manager-faqs.md#is-it-guaranteed-that-traffic-manager-can-correctly-determine-the-exact-geographic-location-of-the-user-in-every-case)
-* [Does an endpoint need to be physically located in the same region as the one it's configured with for geographic routing?](./traffic-manager-faqs.md#does-an-endpoint-need-to-be-physically-located-in-the-same-region-as-the-one-it-is-configured-with-for-geographic-routing)
+* [Does an endpoint need to be physically located in the same region as the one it's configured with for geographic routing?](./traffic-manager-faqs.md#does-an-endpoint-need-to-be-physically-located-in-the-same-region-as-the-one-its-configured-with-for-geographic-routing)
-* [Can I assign geographic regions to endpoints in a profile that isn't configured to do geographic routing?](./traffic-manager-faqs.md#can-i-assign-geographic-regions-to-endpoints-in-a-profile-that-is-not-configured-to-do-geographic-routing)
+* [Can I assign geographic regions to endpoints in a profile that isn't configured to do geographic routing?](./traffic-manager-faqs.md#can-i-assign-geographic-regions-to-endpoints-in-a-profile-that-isnt-configured-to-do-geographic-routing)
* [Why am I getting an error when I try to change the routing method of an existing profile to Geographic?](./traffic-manager-faqs.md#why-am-i-getting-an-error-when-i-try-to-change-the-routing-method-of-an-existing-profile-to-geographic)
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/automatic-migration.md
Title: Migrate automatically from Azure Virtual Desktop (classic) (preview) - Azure
+ Title: Migrate automatically from Azure Virtual Desktop (classic) - Azure
description: How to migrate automatically from Azure Virtual Desktop (classic) to Azure Virtual Desktop by using the migration module. Previously updated : 09/15/2021 Last updated : 01/31/2022
-# Migrate automatically from Azure Virtual Desktop (classic) (preview)
+# Migrate automatically from Azure Virtual Desktop (classic)
-> [!IMPORTANT]
-> The migration module tool for Azure Virtual Desktop is currently in public preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The migration module tool (preview) lets you migrate your organization from Azure Virtual Desktop (classic) to Azure Virtual Desktop automatically. This article will show you how to use the tool.
+The migration module tool lets you migrate your organization from Azure Virtual Desktop (classic) to Azure Virtual Desktop automatically. This article will show you how to use the tool.
## Requirements
To prepare your PowerShell environment:
Install-Module -Name PackageManagement -Repository PSGallery -Force
Install-Module -Name PowerShellGet -Repository PSGallery -Force
# Then restart shell
- Install-Module -Name Microsoft.RdInfra.RDPowershell.Migration -RequiredVersion 1.0.3725-Prerelease -AllowPrerelease -AllowClobber
+ Install-Module -Name Microsoft.RdInfra.RDPowershell.Migration -AllowClobber
Import-Module <Full path to the location of the migration module>\Microsoft.RdInfra.RDPowershell.Migration.psd1
```
virtual-desktop Manage App Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/manage-app-groups.md
Title: Manage app groups for Azure Virtual Desktop portal - Azure
description: How to manage Azure Virtual Desktop app groups with the Azure portal. Previously updated : 07/20/2021 Last updated : 01/31/2022
The deployment process will do the following things for you:
- Create a link to an Azure Resource Manager template based on your configuration that you can download and save for later.

>[!IMPORTANT]
->You can only create 200 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to app groups created in Azure Virtual Desktop (classic).
+>You can only create 500 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to app groups created in Azure Virtual Desktop (classic).
## Edit or remove an app
virtual-machine-scale-sets Cli Sample Attach Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-attach-disks.md
Previously updated : 03/27/2018 Last updated : 01/27/2022 # Attach and use data disks with a virtual machine scale set with the Azure CLI
-This script creates a virtual machine scale set and attaches and prepares data disks.
+This script creates a virtual machine scale set and attaches and prepares data disks.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/use-data-disks/use-data-disks.sh "Create a virtual machine scale set with data disks")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
+
-```azurecli-interactive
-az group delete --name myResourceGroup
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Create Scale Set From Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-create-scale-set-from-custom-image.md
Previously updated : 03/27/2018 Last updated : 01/27/2022 - # Create a virtual machine scale set from a custom VM image with the Azure CLI
-This script creates a virtual machine scale set that uses a custom VM image as the source for the VM instances.
+This script creates a virtual machine scale set that uses a custom VM image as the source for the VM instances.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/use-custom-vm-image/use-custom-vm-image.sh "Create a virtual machine scale set with a custom VM image")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
-```azurecli-interactive
-az group delete --name myResourceGroup
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
-| [az group create](/cli/azure/ad/group) | Creates a resource group in which all resources are stored. |
+| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
+| [az vm create](/cli/azure/vm#az-vm-create) | Creates an Azure virtual machine. |
+| [az sig create](/cli/azure/sig#az-sig-create) | Creates a shared image gallery. |
+| [az sig image-definition create](/cli/azure/sig/image-definition#az-sig-image-definition-create) | Creates a gallery image definition. |
+| [az sig image-version create](/cli/azure/sig/image-version#az-sig-image-version-create) | Creates a new image version. |
| [az vmss create](/cli/azure/vmss) | Creates the virtual machine scale set and connects it to the virtual network, subnet, and network security group. A load balancer is also created to distribute traffic to multiple VM instances. This command also specifies the VM image to be used and administrative credentials. | | [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Create Simple Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-create-simple-scale-set.md
Previously updated : 06/25/2020 Last updated : 01/27/2022 # Create a virtual machine scale set with the Azure CLI
-This script creates an Azure virtual machine scale set with an Ubuntu operating system and related networking resources including a load balancer. After running the script, you can access the VM instances over SSH.
+This script creates an Azure virtual machine scale set with an Ubuntu operating system and related networking resources including a load balancer. After running the script, you can access the VM instances over SSH.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-```azurecli-interactive
-#!/bin/bash
-
-# Create a resource group
-az group create --name myResourceGroup --location eastus
-
-# Create a Network Security Group and allow access to port 22
-az network nsg create --resource-group MyResourceGroup --name MyNsg
-az network nsg rule create --resource-group MyResourceGroup --name AllowSsh --nsg-name MyNsg --priority 100 --destination-port-ranges 22
-
-# Create a scale set
-# Network resources such as an Azure load balancer are automatically created
-az vmss create \
- --resource-group myResourceGroup \
- --name myScaleSet \
- --image UbuntuLTS \
- --upgrade-policy-mode automatic \
- --admin-username azureuser \
- --generate-ssh-keys
- --nsg MyNsg
-```
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
-```azurecli-interactive
-az group delete --name myResourceGroup
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Enable Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-enable-autoscale.md
# Automatically scale a virtual machine scale set with the Azure CLI
-This script creates a virtual machine scale set running Ubuntu and uses host-based metrics to automatically scale as CPU load changes.
+This script creates a virtual machine scale set running Ubuntu and uses host-based metrics to automatically scale as CPU load changes.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/auto-scale-host-metrics/auto-scale-host-metrics.sh "Automatically scale a virtual machine scale set")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
+
-```azurecli-interactive
-az group delete --name myResourceGroup
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Install Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-install-apps.md
Previously updated : 03/27/2018 Last updated : 01/27/2022 - # Install applications into a virtual machine scale set with the Azure CLI
-This script creates a virtual machine scale set running Ubuntu and uses the Custom Script Extension to install a basic web application. After running the script, you can access the web app through a web browser.
+This script creates a virtual machine scale set running Ubuntu and uses the Custom Script Extension to install a basic web application. After running the script, you can access the web app through a web browser.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/install-apps/install-apps.sh "Install apps into a scale set")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
-```azurecli-interactive
-az group delete --name myResourceGroup
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Single Availability Zone Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-single-availability-zone-scale-set.md
Previously updated : 03/27/2018 Last updated : 01/27/2022 - # Create a single-zone virtual machine scale set with the Azure CLI
-This script creates a virtual machine scale set running Ubuntu in a single Availability Zone. After running the script, you can access the virtual machine over RDP.
+This script creates a virtual machine scale set running Ubuntu in a single Availability Zone. After running the script, you can access the virtual machine over RDP.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/create-single-availability-zone/create-single-availability-zone.sh "Create single-zone scale set")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
-```azurecli-interactive
-az group delete --name myResourceGroup
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
+
+This script uses the commands outlined in the following table:
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machine-scale-sets Cli Sample Zone Redundant Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-zone-redundant-scale-set.md
Previously updated : 03/27/2018 Last updated : 01/27/2022 # Create a zone-redundant virtual machine scale set with Azure CLI
-This script creates a virtual machine scale set running Ubuntu across multiple Availability Zones. After running the script, you can access the virtual machine over RDP.
+This script creates a virtual machine scale set running Ubuntu across multiple Availability Zones. After running the script, you can access the virtual machine over RDP.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-machine-scale-sets/create-zone-redundant-scale-set/create-zone-redundant-scale-set.sh "Create zone-redundant scale set")]
-## Clean up deployment
-Run the following command to remove the resource group, scale set, and all related resources.
+
+### Run the script
-```azurecli-interactive
-az group delete --name myResourceGroup
+
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
-This script uses the following commands to create a resource group, virtual machine scale set, and all related resources. Each command in the table links to command specific documentation.
+## Sample reference
| Command | Notes |
|---|---|
This script uses the following commands to create a resource group, virtual mach
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps+ For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/boot-diagnostics.md
Located in the virtual machine blade, the boot diagnostics option is under the *
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-windows.png" alt-text="Screenshot of Windows boot diagnostics":::

## Enable managed boot diagnostics
-Managed boot diagnostics can be enabled through the Azure portal, CLI and ARM Templates. Enabling through PowerShell is not yet supported.
+Managed boot diagnostics can be enabled through the Azure portal, the Azure CLI, and ARM templates.
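For example, a minimal CLI sketch for an existing VM (hypothetical names); omitting a storage account URI enables the managed variant:

```azurecli
# Without --storage, boot diagnostics use a managed storage account.
az vm boot-diagnostics enable \
  --name myVM \
  --resource-group myResourceGroup
```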
### Enable managed boot diagnostics using the Azure portal

When creating a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. To view this, navigate to the *Management* tab during the VM creation.
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
vm-linux Previously updated : 12/08/2021 Last updated : 01/30/2022
The diagram above displays a simple case. As alluded to in the article [Considera
- Using one large volume, which contains the SQL Server data files. The reason behind this configuration is that in real life there are numerous SAP databases with differently sized database files and different I/O workloads.
- Use the D:\ drive for tempdb as long as performance is good enough. If the overall workload is limited in performance by tempdb being located on the D:\ drive, you might need to consider moving tempdb to separate Azure premium storage or Ultra disks, as recommended in [this article](../../../azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md).
+The SQL Server proportional fill mechanism distributes reads and writes evenly across all data files, provided all SQL Server data files are the same size and have the same free space. SAP on SQL Server delivers the best performance when reads and writes are distributed evenly across all available data files. If a database has too few data files, or data files of very different sizes, the best method to correct this is an R3load export and import. An R3load export and import involves downtime and should only be done if there is an obvious performance problem that needs to be resolved.
+If the data files are only moderately different in size, increase all data files to the same size, and SQL Server will rebalance data over time.
+SQL Server automatically grows data files evenly if trace flag 1117 is set or if SQL Server 2016 or higher is used.
+
### Special for M-Series VMs

For Azure M-Series VMs, the latency of writing into the transaction log can be reduced by factors, compared to Azure Premium Storage performance, when using Azure Write Accelerator. Hence, you should deploy Azure Write Accelerator for the VHD(s) that form the volume for the SQL Server transaction log. Details can be read in the document [Write Accelerator](../../how-to-enable-write-accelerator.md).
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 01/24/2022 Last updated : 01/30/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- January 30, 2022: Adding context about SQL Server proportional fill and the expectation that SQL Server data files should be the same size and have the same free space in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md)
- January 24, 2022: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md), [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md), [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md), [HA for SAP NNW on Azure VMs on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md), [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to remove cidr_netmask from Pacemaker configuration to allow the resource agent to determine the value automatically - January 12, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to remove obsolete information for the SAP kernel that supports the scenario. - December 08, 2021: Change in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md) to clarify Azure Load Balancer settings
virtual-machines Sap Information Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-information-lifecycle-management.md
+
+ Title: SAP Information Lifecycle Management with Microsoft Azure Blob Storage | Microsoft Docs
+description: SAP Information Lifecycle Management with Microsoft Azure Blob Storage
+
+documentationcenter: ''
++
+editor: ''
+tags: azure-resource-manager
+keywords: ''
++
+ vm-linux
+ Last updated : 01/28/2022++++
+# SAP Information Lifecycle Management (ILM) with Microsoft Azure Blob Storage
+
+SAP Information Lifecycle Management (ILM) provides a broad range of capabilities for managing data
+volumes, retention management, and the decommissioning of legacy systems, while balancing the
+total cost of ownership, risk, and legal compliance. SAP ILM Store (a component of ILM) enables
+storing archive files and attachments from an SAP system in Microsoft Azure Blob Storage, thus
+enabling cloud storage.
+
+![Fig: Azure Blob Storage with ILM Store](media/sap-information-lifecycle-management/ilm-azure.png)
+
+## How to
+
+This document covers the creation and configuration of an Azure Blob Storage account to be used with SAP
+ILM. This account is used to store archive data from an S/4HANA system.
+
+The steps to create and configure the storage account are:
+
+1. Register a new application with your subscription.
+2. Create a Blob storage account.
+3. Create a new custom role or use an existing (built-in or custom) role.
+4. Assign the role to the application to allow access to the storage account.
+
+> [!NOTE]
+> Steps 2, 3, and 4 can be done either manually or by using the Microsoft Quickstart template.
+
+### Quickstart template approach
+
+This is an automated approach to creating the Azure storage account. You can find the template in the [Azure Quickstart Templates library](https://azure.microsoft.com/resources/templates/sap-ilm-store/).
+
+### Manual configuration approach
+The Azure Blob Storage account can also be configured manually.
+The steps to be followed are:
+
+1. Register a new application
+For details, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
+
+ > [!NOTE]
+ > Make sure that a client secret is added, as described in the "Add credentials - Add a client secret" section of that article.
+
+1. Create a Blob Storage account
+Follow the steps in [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-portal).
+Ensure "Enable secure transfer" is set.
+It is recommended to set the following property values:
+ * Enable blob public access = false
+ * Minimum TLS Version = 1.2
+ * Enable storage account key access = false
+1. Maintain IAM for the account
+In the Access Control (IAM) settings, go to "Role assignments" and add a role assignment for
+the registered app with the role "Storage Blob Data Contributor". In the assignment dialog, choose
+"User, group, or service principal" for the "Assign access to" field.
+
+ > [!NOTE]
+ > Ensure no other user has access to this storage account apart from the registered application.
+
+During account setup and configuration, it's recommended to review the [Security recommendations for Blob Storage](/azure/storage/blobs/security-recommendations).
+With this setup complete, you're ready to use the Blob Storage account with SAP ILM
+to store archive files from the S/4HANA system.
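As a hedged sketch, steps 2 and 4 could be scripted with the Azure CLI as follows; every name and placeholder below is hypothetical:

```azurecli
# Step 2: create the Blob storage account with the recommended settings
# (secure transfer only, no public blob access, TLS 1.2, no shared key access).
az storage account create \
  --name <storageAccountName> \
  --resource-group <resourceGroup> \
  --kind StorageV2 \
  --sku Standard_LRS \
  --https-only true \
  --allow-blob-public-access false \
  --min-tls-version TLS1_2 \
  --allow-shared-key-access false

# Step 4: grant the registered application access to the storage account.
az role assignment create \
  --assignee <appClientId> \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show \
      --name <storageAccountName> \
      --resource-group <resourceGroup> \
      --query id -o tsv)
```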
+
+## Next steps
+
+* [SAP ILM on the SAP help portal](https://help.sap.com/doc/c3b6eda797634474b7a3aac5a48e84d5/1610%20001/en-US/frameset.htm)
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftCloudAppSecurity** | Microsoft Defender for Cloud Apps. | Outbound | No | No | | **MicrosoftContainerRegistry** | Container registry for Microsoft container images. <br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.FirstParty** tag. | Outbound | Yes | Yes | | **PowerBI** | Power BI. | Both | No | No|
-| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | No | No |
+| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | No |
| **PowerQueryOnline** | Power Query Online. | Both | No | No | | **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes | | **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET (endpoint eg. https:// westus.servicefabric.azure.com). | Both | No | No |