Updates from: 05/11/2021 03:05:09
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Contentdefinitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/contentdefinitions.md
Previously updated : 02/15/2021 Last updated : 05/10/2021
The **ContentDefinition** element contains the following elements:
| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
| Metadata | 0:1 | A collection of key/value pairs that contains the metadata utilized by the content definition. |
| LocalizedResourcesReferences | 0:1 | A collection of localized resources references. Use this element to customize the localization of a user interface and claims attribute. |
+### LoadUri
+
+The **LoadUri** element is used to specify the URL of the HTML5 page for the content definition. The Azure AD B2C [custom policy starter-packs](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack) come with content definitions that use Azure AD B2C HTML pages. A **LoadUri** that starts with `~` is a relative path to your Azure AD B2C tenant.
+
+```XML
+<ContentDefinition Id="api.signuporsignin">
+ <LoadUri>~/tenant/templates/AzureBlue/unified.cshtml</LoadUri>
+ ...
+</ContentDefinition>
+```
+
+You can [customize the user interface with HTML templates](customize-ui-with-html.md). When you use HTML templates, provide an absolute URL. The following example illustrates a content definition with an HTML template:
+
+```XML
+<ContentDefinition Id="api.signuporsignin">
+ <LoadUri>https://your-storage-account.blob.core.windows.net/your-container/customize-ui.html</LoadUri>
+ ...
+</ContentDefinition>
+```
+
### DataUri

The **DataUri** element is used to specify the page identifier. Azure AD B2C uses the page identifier to load and initiate UI elements and client-side JavaScript. The format of the value is `urn:com:microsoft:aad:b2c:elements:page-name:version`. The following table lists the page identifiers you can use.
The [version](page-layout.md) part of the `DataUri` specifies the package of con
The following example shows the **DataUri** of `selfasserted` version `1.2.0`:

```xml
-<ContentDefinition Id="api.localaccountpasswordreset">
-<LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
-<RecoveryUri>~/common/default_page_error.html</RecoveryUri>
-<DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
-<Metadata>
- <Item Key="DisplayName">Local account change password page</Item>
-</Metadata>
-</ContentDefinition>
+<!--
+<BuildingBlocks>
+ <ContentDefinitions>-->
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
+ <Metadata>
+ <Item Key="DisplayName">Local account change password page</Item>
+ </Metadata>
+ </ContentDefinition>
+ <!--
+ </ContentDefinitions>
+</BuildingBlocks> -->
```

#### Migrating to page layout
-The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. To specify a page layout in your custom policies that use an old **DataUri** value, use following table to migrate to the new format.
+To migrate from an old **DataUri** value (without a page contract) to a page layout version, add the word `contract` followed by a page version. Use the following table to map each old **DataUri** value to its page layout version.
| Old DataUri value | New DataUri value |
| ----------------- | ----------------- |
| `urn:com:microsoft:aad:b2c:elements:globalexception:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.1` |
| `urn:com:microsoft:aad:b2c:elements:globalexception:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.1` |
| `urn:com:microsoft:aad:b2c:elements:idpselection:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.1` |
-| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2` |
-| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssd:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssd:2.1.2` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2` |
+| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4` |
+| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssd:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssd:2.1.4` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.4` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.4` |
| `urn:com:microsoft:aad:b2c:elements:multifactor:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0` |
| `urn:com:microsoft:aad:b2c:elements:multifactor:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0` |
The following example shows the content definition identifiers and the correspon
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.1</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.signuporsignin">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.4</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.selfasserted">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.selfasserted.profileupdate">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.4</DataUri>
</ContentDefinition>
<ContentDefinition Id="api.phonefactor">
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.2</DataUri>
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/technicalprofiles.md
Previously updated : 03/04/2021 Last updated : 05/10/2021
The following example illustrates the use of the inclusion:
</TechnicalProfile>
<TechnicalProfile Id="REST-UpdateProfile">
- <Metadata>
+ <DisplayName>Update the user profile</DisplayName>
+ <Metadata>
    <Item Key="ServiceUrl">https://your-app-name.azurewebsites.net/api/identity/update</Item>
  </Metadata>
- <DisplayName>Update the user profile</DisplayName>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="objectId" />
    <InputClaim ClaimTypeReferenceId="email" />
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-configure-publisher-domain.md
If your app isn't registered in a tenant, you'll only see the option to verify a
1. Click the **Verify and save domain** button.
+You're not required to maintain the resources that are used for verification after a domain has been verified. When the verification is finished, you can remove the hosted file.
+
### To select a verified domain

-- If your tenant has verified domains, select one of the domains from the **Select a verified domain** dropdown.
+If your tenant has verified domains, select one of the domains from the **Select a verified domain** dropdown.
->[!Note]
-> The expected 'Content-Type' header that should be returned is `application/json`. You may get an error as mentioned below if you use anything else like `application/json; charset=utf-8`
+> [!NOTE]
+> The expected `Content-Type` header that should be returned is `application/json`. You may get an error as mentioned below if you use anything else, like `application/json; charset=utf-8`:
>
->``` "Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value. " ```
+> `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.`
>

## Implications on the app consent prompt
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-android.md
We'll now look at these files in more detail and call out the MSAL-specific code
MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens that are used to access an API protected by the Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
-```gradle
-implementation 'com.microsoft.identity.client:msal:2.+'
-```
-
-You can see this in the sample project in build.gradle (Module: app):
-
```java
dependencies {
    ...
}
```

This instructs Gradle to download and build MSAL from Maven Central.
+You must also add Maven repository references to the **allprojects** > **repositories** section of **build.gradle (Module: app)**, like so:
+
+```java
+allprojects {
+ repositories {
+ mavenCentral()
+ google()
+ mavenLocal()
+ maven {
+ url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+ }
+ maven {
+ name "vsts-maven-adal-android"
+ url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+ credentials {
+ username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+ password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+ }
+ }
+ jcenter()
+ }
+}
+```
+
### MSAL imports

The imports that are relevant to the MSAL library are `com.microsoft.identity.client.*`. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;`, which imports the `PublicClientApplication` class that represents your public client application.
active-directory Tutorial V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-android.md
If you do not already have an Android application, follow these steps to set up
### Add MSAL to your project
-1. In the Android Studio project window, navigate to **app** > **src** > **build.gradle** and add the following:
+1. In the Android Studio project window, navigate to **app** > **build.gradle** and add the following:
```gradle
- repositories{
+ apply plugin: 'com.android.application'
+
+ allprojects {
+ repositories {
+ mavenCentral()
+ google()
+ mavenLocal()
+ maven {
+ url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+ }
+ maven {
+ name "vsts-maven-adal-android"
+ url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+ credentials {
+ username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+ password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+ }
+ }
jcenter()
+ }
}

dependencies{
- implementation 'com.microsoft.identity.client:msal:2.+'
- implementation 'com.microsoft.graph:microsoft-graph:1.5.+'
- }
+ implementation 'com.microsoft.identity.client:msal:2.+'
+ implementation 'com.microsoft.graph:microsoft-graph:1.5.+'
+ }
packagingOptions{
- exclude("META-INF/jersey-module-version")
+ exclude("META-INF/jersey-module-version")
}
```

[More on the Microsoft Graph SDK](https://github.com/microsoftgraph/msgraph-sdk-java/)
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
+
+ Title: Sign in to Linux virtual machine in Azure using Azure Active Directory (Preview)
+description: Azure AD sign in to an Azure VM running Linux
+Last updated : 05/07/2021
+# Preview: Log in to a Linux virtual machine in Azure with Azure Active Directory using SSH certificate-based authentication
+
+To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform and a certificate authority to SSH into a Linux VM with Azure AD and SSH certificate-based authentication. This functionality allows organizations to centrally control and enforce Azure role-based access control (RBAC) and Conditional Access policies that manage access to the VMs. This article shows you how to create and configure a Linux VM and log in with Azure AD using SSH certificate-based authentication.
+
+> [!IMPORTANT]
+> This capability is currently in public preview. [The previous version that made use of device code flow will be deprecated August 15, 2021](../../virtual-machines/linux/login-using-aad.md). To migrate from the old version to this version, see the section, [Migration from previous preview](#migration-from-previous-preview).
+> This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. Use this feature on a test virtual machine that you expect to discard after testing. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+There are many security benefits of using Azure AD with SSH certificate-based authentication to log in to Linux VMs in Azure, including:
+
+- Use your Azure AD credentials to log in to Azure Linux VMs.
+- Get SSH key-based authentication without needing to distribute SSH keys to users or provision SSH public keys on any Azure Linux VMs you deploy. This experience is much simpler than having to worry about the sprawl of stale SSH public keys that could cause unauthorized access.
+- Reduce reliance on local administrator accounts, credential theft, and weak credentials.
+- Password complexity and password lifetime policies configured for Azure AD help secure Linux VMs as well.
+- With Azure role-based access control, specify who can log in to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources (see the sketch after this list).
+- With Conditional Access, configure policies to require multi-factor authentication and/or require that the client device you use to SSH is a managed device (for example, a compliant device or hybrid Azure AD joined device) before you can SSH to Linux VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Linux VMs and to flag use of non-approved local accounts on the VMs.
+- Login to Linux VMs with Azure Active Directory also works for customers who use federation services.
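+
+As a minimal sketch of the role-assignment lifecycle mentioned above, removing a departed user's access is a single role assignment deletion. The user name and scope below are placeholders:
+
+```azurecli
+az role assignment delete \
+    --assignee former.employee@contoso.com \
+    --role "Virtual Machine User Login" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resourcegroup-name>"
+```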
+
+## Supported Linux distributions and Azure regions
+
+The following Linux distributions are currently supported during the preview of this feature when deployed in a supported region:
+
+| Distribution | Version |
+| | |
+| CentOS | CentOS 7, CentOS 8.3 |
+| Debian | Debian 9, Debian 10 |
+| openSUSE | openSUSE Leap 42.3 |
+| RedHat Enterprise Linux | RHEL 7.4 to RHEL 7.10, RHEL 8.3 |
+| SUSE Linux Enterprise Server | SLES 12 |
+| Ubuntu Server | Ubuntu Server 16.04 to Ubuntu Server 20.04 |
+
+The following Azure regions are currently supported during the preview of this feature:
+
+- Azure Global
+- Azure Government
+- Azure China
+
+Using this extension on Azure Kubernetes Service (AKS) clusters isn't supported. For more information, see [Support policies for AKS](../../aks/support-policies.md).
+
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Requirements for login with Azure AD using SSH certificate-based authentication
+
+To enable Azure AD login using SSH certificate-based authentication for your Linux VMs in Azure, you need to ensure the following network, virtual machine, and client (SSH client) requirements are met.
+
+### Network
+
+VM network configuration must permit outbound access to the following endpoints over TCP port 443:
+
+For Azure Global
+
+- `https://packages.microsoft.com` - For package installation and upgrades.
+- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.com` - For PAM (pluggable authentication modules) based authentication flows.
+- `https://pas.windows.net` - For Azure RBAC flows.
+
+For Azure Government
+
+- `https://packages.microsoft.com` - For package installation and upgrades.
+- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.us` - For PAM (pluggable authentication modules) based authentication flows.
+- `https://pasff.usgovcloudapi.net` - For Azure RBAC flows.
+
+For Azure China
+
+- `https://packages.microsoft.com` - For package installation and upgrades.
+- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
+- `https://login.chinacloudapi.cn` - For PAM (pluggable authentication modules) based authentication flows.
+- `https://pas.chinacloudapi.cn` - For Azure RBAC flows.
+
+### Virtual machine
+
+Ensure your VM is configured with the following functionality:
+
+- System-assigned managed identity. This option is automatically selected when you use the Azure portal to create a VM and select the Azure AD login option. You can also enable a system-assigned managed identity on a new or an existing VM using the Azure CLI (see the sketch after this list).
+- `aadsshlogin` and `aadsshlogin-selinux` (as appropriate). These packages are installed with the AADSSHLoginForLinux VM extension. The extension is installed when you use the Azure portal to create a VM and enable Azure AD login (on the **Management** tab), or via the Azure CLI.
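+
+For example, a system-assigned managed identity can be enabled on an existing VM with the Azure CLI; the resource group and VM names here are placeholders:
+
+```azurecli
+az vm identity assign --resource-group AzureADLinuxVMPreview --name myVM
+```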
+
+### Client
+
+Ensure your client meets the following requirements:
+
+- Your SSH client must support OpenSSH-based certificates for authentication. You can use the Az CLI (2.21.1 or later) or Azure Cloud Shell to meet this requirement.
+- The SSH extension for the Az CLI. You can install this using `az extension add`. You do not need to install this extension when using Azure Cloud Shell, because it comes pre-installed.
+- If you use an SSH client other than the Az CLI or Azure Cloud Shell that supports OpenSSH, you still need the Az CLI with the SSH extension to retrieve an ephemeral SSH certificate and a config file, and then use the config file with your SSH client.
+
+## Enabling Azure AD login for a Linux VM in Azure
+
+To use Azure AD login for a Linux VM in Azure, you first need to enable the Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to log in to the VM, and then use an SSH client that supports OpenSSH, such as the Az CLI or Azure Cloud Shell, to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM. For example, you can use:
+
+- Azure portal experience when creating a Linux VM
+- Azure Cloud Shell experience when creating a Linux VM or for an existing Linux VM
+
+### Using Azure portal create VM experience to enable Azure AD login
+
+You can enable Azure AD login for any of the supported Linux distributions mentioned above using the Azure portal.
+
+As an example, to create an Ubuntu Server 18.04 LTS VM in Azure with Azure AD logon:
+
+1. Sign in to the Azure portal, with an account that has access to create VMs, and select **+ Create a resource**.
+1. Click **Create** under **Ubuntu Server 18.04 LTS** in the **Popular** view.
+1. On the **Management** tab,
+ 1. Check the box to enable **Login with Azure Active Directory (Preview)**.
+ 1. Ensure **System assigned managed identity** is checked.
+1. Go through the rest of the experience of creating a virtual machine. During this preview, you will have to create an administrator account with a username and a password or SSH public key.
+
+### Using the Azure Cloud Shell experience to enable Azure AD login
+
+Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the Copy button to copy the code, paste it in Cloud Shell, and then press Enter to run it. There are a few ways to open Cloud Shell:
+
+- Select Try It in the upper-right corner of a code block.
+- Open Cloud Shell in your browser.
+- Select the Cloud Shell button on the menu in the upper-right corner of the Azure portal.
+
+If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+1. Create a resource group with [az group create](/cli/azure/group#az_group_create).
+1. Create a VM with [az vm create](/cli/azure/vm#az_vm_create&preserve-view=true) using a supported distribution in a supported region.
+1. Install the Azure AD login VM extension with [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest#az_vm_extension_set&preserve-view=true).
+
+The following example deploys a VM named *myVM*, using *Ubuntu 18.04 LTS*, into a resource group named *AzureADLinuxVMPreview*, in the *southcentralus* region. It then installs the *Azure AD login VM extension* to enable Azure AD login for Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
+
+The example can be customized to support your testing requirements as needed.
+
+```azurecli-interactive
+az group create --name AzureADLinuxVMPreview --location southcentralus
+
+az vm create \
+ --resource-group AzureADLinuxVMPreview \
+ --name myVM \
+ --image UbuntuLTS \
+ --assign-identity \
+ --admin-username azureuser \
+ --generate-ssh-keys
+
+az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group AzureADLinuxVMPreview \
+ --vm-name myVM
+```
+
+It takes a few minutes to create the VM and supporting resources.
+
+The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Azure AD authentication. If you deploy this extension to a previously created VM, ensure the machine has at least 1 GB of memory allocated; otherwise, the extension will fail to install.
+
+A `provisioningState` of `Succeeded` is shown once the extension is successfully installed on the VM. The VM must have a running [VM agent](../../virtual-machines/extensions/agent-linux.md) to install the extension.
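+
+As a quick check, you can query the extension's provisioning state with the Azure CLI; the resource group and VM names below are the ones used in the earlier example:
+
+```azurecli
+az vm extension show \
+    --resource-group AzureADLinuxVMPreview \
+    --vm-name myVM \
+    --name AADSSHLoginForLinux \
+    --query provisioningState \
+    --output tsv
+```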
+
+## Configure role assignments for the VM
+
+Now that you have created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
+
+- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges.
+- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
+
+To allow a user to log in to the VM over SSH, you must assign them either the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor role assigned for a VM does not automatically have privileges to Azure AD login to the VM over SSH. This provides audited separation between the set of people who control virtual machines and the set of people who can access virtual machines.
+
+There are multiple ways you can configure role assignments for the VM. For example, you can use:
+
+- Azure AD Portal experience
+- Azure Cloud Shell experience
+
+> [!NOTE]
+> The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and thus cannot be assigned at management group scope. Currently these roles can only be assigned at the subscription, resource group, or resource scope. It is recommended that the roles be assigned at the subscription or resource level and not at the individual VM level to avoid risk of running out of [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) per subscription.
+
+### Using Azure AD Portal experience
+
+To configure role assignments for your Azure AD enabled Linux VMs:
+
+1. Navigate to the virtual machine to be configured.
+1. Select **Access control (IAM)** from the menu options.
+1. Select **Add** > **Add role assignment** to open the Add role assignment pane.
+1. In the **Role** drop-down list, select the role **Virtual Machine Administrator Login** or **Virtual Machine User Login**.
+1. In the **Select** field, select a user, group, service principal, or managed identity. If you do not see the security principal in the list, you can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
+1. Select **Save** to assign the role.
+
+After a few moments, the security principal is assigned the role at the selected scope.
+
+### Using the Azure Cloud Shell experience
+
+The following example uses [az role assignment create](/cli/azure/role/assignment#az_role_assignment_create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your current Azure account is obtained with [az account show](/cli/azure/account#az_account_show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az_vm_show). The scope could also be assigned at a resource group or subscription level; normal Azure RBAC inheritance permissions apply.
+
+```azurecli-interactive
+username=$(az account show --query user.name --output tsv)
+vm=$(az vm show --resource-group AzureADLinuxVMPreview --name myVM --query id -o tsv)
+
+az role assignment create \
+ --role "Virtual Machine Administrator Login" \
+ --assignee $username \
+ --scope $vm
+```
+
+> [!NOTE]
+> If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id` parameter, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az_ad_user_list).
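+
+For example, here is a minimal sketch that looks up the object ID for a placeholder UPN and uses it for the role assignment, reusing the `$vm` scope variable from the earlier example:
+
+```azurecli
+# Look up the user's object ID by user principal name (placeholder UPN).
+userObjectId=$(az ad user list --upn azureuser@contoso.com --query "[0].objectId" --output tsv)
+
+az role assignment create \
+    --role "Virtual Machine Administrator Login" \
+    --assignee-object-id $userObjectId \
+    --scope $vm
+```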
+
+For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the article [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
+
+## Install SSH extension for Az CLI
+
+If you are using Azure Cloud Shell, no other setup is needed, because both the minimum required version of the Az CLI and the SSH extension for the Az CLI are already included in the Cloud Shell environment.
+
+Run the following command to add the SSH extension for the Az CLI:
+
+```azurecli
+az extension add --name ssh
+```
+
+The minimum version required for the extension is 0.1.4. Check the installed SSH extension version with the following command:
+
+```azurecli
+az extension show --name ssh
+```
+
+## Using Conditional Access
+
+You can enforce Conditional Access policies, such as requiring multi-factor authentication for the user, requiring a compliant or hybrid Azure AD joined device for the device running the SSH client, or checking for low user and sign-in risk, before authorizing access to Linux VMs in Azure that are enabled with Azure AD login.
+
+To apply a Conditional Access policy, select the **Azure Linux VM Sign-In** app from the cloud apps or actions assignment option, use user and/or sign-in risk as a condition, and configure the access controls to grant access after multi-factor authentication and/or a compliant or hybrid Azure AD joined device requirement is satisfied.
+
+> [!NOTE]
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
+
+## Log in using an Azure AD user account to SSH into the Linux VM
+
+### Using Az CLI
+
+First run `az login`, and then `az ssh vm`.
+
+```azurecli
+az login
+```
+
+This command launches a browser window, where you can sign in using your Azure AD account.
+
+The following example automatically resolves the appropriate IP address for the VM.
+
+```azurecli
+az ssh vm -n myVM -g AzureADLinuxVMPreview
+```
+
+If prompted, enter your Azure AD login credentials at the login page, perform multi-factor authentication, and/or satisfy device checks. You will only be prompted if your Az CLI session does not already meet the required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you will be automatically connected to the VM.
+
+You are now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
+
+### Using Az Cloud Shell
+
+You can use Az Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
+
+Az Cloud Shell will automatically connect to a session in the context of the signed-in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign-in flow**.
+
+```azurecli
+az login
+```
+
+Then you can use the normal `az ssh vm` commands to connect using the name and resource group or IP address of the VM.
+
+```azurecli
+az ssh vm -n myVM -g AzureADLinuxVMPreview
+```
+
+> [!NOTE]
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Az Cloud Shell.
+
+### Log in using an Azure AD service principal to SSH into the Linux VM
+
+Azure CLI supports authenticating with a service principal instead of a user account. Since service principals are accounts not tied to any particular user, customers can use them to SSH to a VM to support any automation scenarios they may have. The service principal must have VM Administrator or VM User rights assigned. Assign permissions at the subscription or resource group level.
+
+The following example assigns VM Administrator rights to the service principal at the resource group level. Replace the service principal object ID, subscription ID, and resource group name placeholders.
+
+```azurecli
+az role assignment create \
+ --role "Virtual Machine Administrator Login" \
+ --assignee-object-id <service-principal-objectid> \
+ --assignee-principal-type ServicePrincipal \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resourcegroup-name>"
+```
+
+Use the following example to authenticate to Azure CLI using the service principal. To learn more about signing in using a service principal, see the article [Sign in to Azure CLI with a service principal](/cli/azure/authenticate-azure-cli#sign-in-with-a-service-principal).
+
+```azurecli
+az login --service-principal -u <sp-app-id> -p <password-or-cert> --tenant <tenant-id>
+```
+
+Once authentication with a service principal is complete, use the normal Az CLI SSH commands to connect to the VM.
+
+```azurecli
+az ssh vm -n myVM -g AzureADLinuxVMPreview
+```
+
+### Exporting SSH Configuration for use with SSH clients that support OpenSSH
+
+Login to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration, allowing you to use any SSH client that supports OpenSSH-based certificates to sign in with Azure AD. The following example exports the configuration for all IP addresses assigned to the VM.
+
+```azurecli
+az ssh config --file ~/.ssh/config -n myVM -g AzureADLinuxVMPreview
+```
+
+Alternatively, you can export the config by specifying just the IP address. Replace the IP address in the example with the public or private IP address for your VM. Type `az ssh config -h` for help on this command.
+
+```azurecli
+az ssh config --file ~/.ssh/config --ip 10.11.123.456
+```
+
+You can then connect to the VM through normal OpenSSH usage, with any SSH client that supports OpenSSH.
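+
+As a sketch, assuming the exported configuration contains a host entry for *myVM* (open the file to confirm the exact alias that `az ssh config` generated):
+
+```azurecli
+# The generated config file contains the host alias to use; inspect it to confirm the exact name.
+ssh -F ~/.ssh/config myVM
+```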
+
+## Sudo and Azure AD login
+
+Once users assigned the VM Administrator role successfully SSH into a Linux VM, they will be able to run sudo with no other interaction or authentication requirement. Users assigned the VM User role will not be able to run sudo.
+
+## Virtual machine scale set support
+
+Virtual machine scale sets are supported, but the steps are slightly different for enabling and connecting to virtual machine scale set VMs.
+
+First, create a virtual machine scale set or choose one that already exists. Enable a system assigned managed identity for your virtual machine scale set.
+
+```azurecli
+az vmss identity assign --vmss-name myVMSS --resource-group AzureADLinuxVMPreview
+```
+
+Install the Azure AD extension on your virtual machine scale set.
+
+```azurecli
+az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group AzureADLinuxVMPreview --vmss-name myVMSS
+```
+
+Virtual machine scale set VMs usually do not have public IP addresses, so you must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a virtual machine scale set VM to connect.
+
+```azurecli
+az ssh vm --ip 10.11.123.456
+```
+
+> [!NOTE]
+> You cannot automatically determine the virtual machine scale set VM's IP addresses using the `--resource-group` and `--name` switches.
+
+## Migration from previous preview
+
+If you are using the previous version of Azure AD login for Linux, which was based on device code flow, complete the following steps to migrate.
+
+1. Uninstall the AADLoginForLinux extension on the VM.
+ 1. Using Azure CLI: `az vm extension delete -g MyResourceGroup --vm-name MyVm -n AADLoginForLinux`
+1. Enable System assigned managed identity on your VM.
+ 1. Using Azure CLI: `az vm identity assign -g myResourceGroup -n myVm`
+1. Install the AADSSHLoginForLinux extension on the VM.
+ 1. Using Azure CLI:
+ ```azurecli
+ az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group myResourceGroup \
+ --vm-name myVM
+ ```
+
+## Troubleshoot sign-in issues
+
+Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
+
+### Could not retrieve token from local cache
+
+You must run `az login` again and go through an interactive sign-in flow. Review the section [Using Az Cloud Shell](#using-az-cloud-shell).
+
+### Access denied: Azure role not assigned
+
+If you see the following error on your SSH prompt, verify that you have configured Azure RBAC policies for the VM that grant the user either the Virtual Machine Administrator Login or Virtual Machine User Login role. If you are running into issues with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
+
+### Extension Install Errors
+
+Installation of the AADSSHLoginForLinux VM extension on existing computers may fail with one of the following known error codes:
+
+#### Non-zero exit code: 22
+
+The Status of the AADSSHLoginForLinux VM extension shows as Transitioning in the portal.
+
+Cause 1: This failure occurs because a system-assigned managed identity is required but not enabled on the VM.
+
+Solution 1: Perform these actions (a combined Azure CLI sketch follows the list):
+
+1. Uninstall the failed extension.
+1. Enable a System Assigned Managed Identity on the Azure VM.
+1. Run the extension install command again.
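+
+A combined Azure CLI sketch of those three actions, using placeholder resource group and VM names:
+
+```azurecli
+# 1. Uninstall the failed extension.
+az vm extension delete --resource-group myResourceGroup --vm-name myVM --name AADSSHLoginForLinux
+
+# 2. Enable a system-assigned managed identity on the VM.
+az vm identity assign --resource-group myResourceGroup --name myVM
+
+# 3. Run the extension install command again.
+az vm extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group myResourceGroup --vm-name myVM
+```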
+
+#### Non-zero exit code: 23
+
+The Status of the AADSSHLoginForLinux VM extension shows as Transitioning in the portal.
+
+Cause 1: This failure is due to the older AADLoginForLinux VM extension still being installed.
+
+Solution 1: Perform these actions:
+
+1. Uninstall the older AADLoginForLinux VM extension from the VM (see the sketch below). The status of the new AADSSHLoginForLinux VM extension will then change to Provisioning succeeded in the portal.
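+
+A minimal Azure CLI sketch of the uninstall, using placeholder names:
+
+```azurecli
+az vm extension delete --resource-group myResourceGroup --vm-name myVM --name AADLoginForLinux
+```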
+
+#### `az ssh vm` fails with `KeyError: 'access_token'`
+
+Cause 1: An outdated version of the Azure CLI client is being used.
+
+Solution 1: Upgrade the Azure CLI client to version 2.21.0 or higher.
+
+#### SSH Connection closed
+
+After the user has successfully signed in using `az login`, connection to the VM using `az ssh vm --ip <address>` or `az ssh vm --name <vm_name> -g <resource_group>` fails with *Connection closed by <ip_address> port 22*.
+
+Cause 1: The user is not assigned either the Virtual Machine Administrator Login or Virtual Machine User Login Azure RBAC role within the scope of this VM.
+
+Solution 1: Add the user to either the Virtual Machine Administrator Login or Virtual Machine User Login Azure RBAC role within the scope of this VM.
+
+Cause 2: The user is assigned a required Azure RBAC role, but the system-assigned managed identity has been disabled on the VM.
+
+Solution 2: Perform these actions:
+
+1. Enable the System Assigned managed identity on the VM.
+1. Allow several minutes to pass before trying to connect using `az ssh vm --ip <ip_address>`.
+
+### Virtual machine scale set Connection Issues
+
+Virtual machine scale set VM connections may fail if the virtual machine scale set instances are running an old model. Upgrading virtual machine scale set instances to the latest model may resolve the issue, especially if an upgrade has not been done since the Azure AD Login extension was installed. Upgrading an instance applies a standard virtual machine scale set configuration to the individual instance.
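+
+As a sketch, you can upgrade all instances of a scale set to the latest model with the Azure CLI; the names below match the earlier scale set example:
+
+```azurecli
+az vmss update-instances --resource-group AzureADLinuxVMPreview --name myVMSS --instance-ids "*"
+```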
+
+### Other limitations
+
+Users that inherit access rights through nested groups or role assignments aren't currently supported. The user or group must be directly assigned the required role assignments. For example, the use of management groups or nested group role assignments won't grant the correct permissions to allow the user to sign in.
+
+## Preview feedback
+
+Share your feedback about this preview feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
+
+## Next steps
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
This feature is now available in the following Azure clouds:
To enable Azure AD authentication for your Windows VMs in Azure, you need to ensure your VMs network configuration permits outbound access to the following endpoints over TCP port 443:

For Azure Global
-- https://enterpriseregistration.windows.net For device registration.
-- http://169.254.169.254 For Azure Instance Metadata Service endpoint.
-- https://login.microsoftonline.com For authentication flows.
-- https://pas.windows.net For Azure RBAC flows.
+- `https://enterpriseregistration.windows.net` - For device registration.
+- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.com` - For authentication flows.
+- `https://pas.windows.net` - For Azure RBAC flows.
For Azure Government
-- https://enterpriseregistration.microsoftonline.us For device registration.
-- http://169.254.169.254 For Azure Instance Metadata Service.
-- https://login.microsoftonline.us For authentication flows.
-- https://pasff.usgovcloudapi.net For Azure RBAC flows.
+- `https://enterpriseregistration.microsoftonline.us` - For device registration.
+- `http://169.254.169.254` - Azure Instance Metadata Service.
+- `https://login.microsoftonline.us` - For authentication flows.
+- `https://pasff.usgovcloudapi.net` - For Azure RBAC flows.
For Azure China
-- https://enterpriseregistration.partner.microsoftonline.cn For device registration.
-- http://169.254.169.254 Azure Instance Metadata Service endpoint.
-- https://login.chinacloudapi.cn For authentication flows.
-- https://pas.chinacloudapi.cn For Azure RBAC flows.
+- `https://enterpriseregistration.partner.microsoftonline.cn` - For device registration.
+- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
+- `https://login.chinacloudapi.cn` - For authentication flows.
+- `https://pas.chinacloudapi.cn` - For Azure RBAC flows.
## Enabling Azure AD login for Windows VM in Azure
You are now signed in to the Windows Server 2019 Azure virtual machine with the
> [!NOTE]
> You can save the .RDP file locally on your computer to launch future remote desktop connections to your virtual machine, instead of having to navigate to the virtual machine overview page in the Azure portal and using the connect option.
+## Using Azure Policy to ensure standards and assess compliance
+
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines, and to assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that do not have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that do not have Azure AD login enabled, as well as remediate existing Windows VMs to the same standard. In addition to these capabilities, you can use policy to detect and flag VMs that have non-approved local accounts. To learn more, review [Azure Policy](https://www.aka.ms/AzurePolicy).
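+
+As a sketch, a policy definition can be assigned at subscription scope with the Azure CLI; the assignment name, policy definition name or ID, and subscription ID below are placeholders:
+
+```azurecli
+az policy assignment create \
+    --name "require-azure-ad-login-windows" \
+    --policy "<policy-definition-name-or-id>" \
+    --scope "/subscriptions/<subscription-id>"
+```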
+
## Troubleshoot

### Troubleshoot deployment issues
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-revoke-access.md
Most browser-based applications use session tokens instead of access and refresh
## Revoke access for a user in the hybrid environment
-For a hybrid environment with on-premises Active Directory synchronized with Azure Active Directory, Microsoft recommends IT admins to take the following actions. If you have an **Azure AD only environment**, you may skip the [On-premises Active Directory environment](https://docs.microsoft.com/azure/active-directory/enterprise-users/users-revoke-access#on-premises-active-directory-environment) section.
+For a hybrid environment with on-premises Active Directory synchronized with Azure Active Directory, Microsoft recommends that IT admins take the following actions. If you have an **Azure AD only environment**, skip to the [Azure Active Directory environment](https://docs.microsoft.com/azure/active-directory/enterprise-users/users-revoke-access#azure-active-directory-environment) section.
+ ### On-premises Active Directory environment
active-directory External Identities Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/external-identities-pricing.md
Previously updated : 09/21/2020 Last updated : 05/05/2021
To take advantage of MAU billing, your Azure AD tenant must be linked to an Azur
## About monthly active users (MAU) billing

In your Azure AD tenant, guest user collaboration usage is billed based on the count of unique guest users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model.
-
-The pricing tier that applies to your guest users is based on the highest pricing tier assigned to your Azure AD tenant. For example, if the highest pricing tier in your tenant is Azure AD Premium P1, the Premium P1 pricing tier also applies to your guest users. If the highest pricing is Azure AD Free, you'll be asked to upgrade to a premium pricing tier when you try to use premium features for guest users.
+
+The pricing tier that applies to your guest users is based on the highest pricing tier assigned to your Azure AD tenant. For more information, see [Azure Active Directory External Identities Pricing](https://azure.microsoft.com/en-us/pricing/details/active-directory/external-identities/).
## Link your Azure AD tenant to a subscription
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
For a demonstration of how to add a multi-stage approval to a request policy, wa
Follow these steps to specify the approval settings for requests for the access package:
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use Azure AD entitlement management and assign users to access packages, you
## View who has an assignment
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-create.md
Here are the high-level steps to create a new access package.
## Start new access package
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. Sign in to the [Azure portal](https://portal.azure.com).
On the **Basics** tab, you give the access package a name and specify which cata
1. In the **Catalog** drop-down list, select the catalog you want to create the access package in. For example, you might have a catalog owner that manages all the marketing resources that can be requested. In this case, you could select the marketing catalog.
- You will only see catalogs you have permission to create access packages in. To create an access package in an existing catalog, you must be a Global administrator or User administrator, or you must be a catalog owner or access package manager in that catalog.
+ You will only see catalogs you have permission to create access packages in. To create an access package in an existing catalog, you must be a Global administrator, Identity Governance administrator or User administrator, or you must be a catalog owner or access package manager in that catalog.
![Access package - Basics](./media/entitlement-management-access-package-create/basics.png)
- If you are a Global administrator, a User administrator, or catalog creator and you would like to create your access package in a new catalog that's not listed, click **Create new catalog**. Enter the Catalog name and description and then click **Create**.
+ If you are a Global administrator, an Identity Governance administrator, a User administrator, or catalog creator and you would like to create your access package in a new catalog that's not listed, click **Create new catalog**. Enter the Catalog name and description and then click **Create**.
The access package you are creating and any resources included in it will be added to the new catalog. You can also add additional catalog owners later.
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-edit.md
This article describes how to hide or delete an access package.
Follow these steps to change the **Hidden** setting for an access package.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-first.md
A resource directory has one or more resources to share. In this step, you creat
An *access package* is a bundle of resources that a team or project needs and is governed with policies. Access packages are defined in containers called *catalogs*. In this step, you create a **Marketing Campaign** access package in the **General** catalog.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
![Create an access package](./media/entitlement-management-access-package-first/elm-access-package.png)
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
To ensure users have the right access to an access package, custom questions can
To change the lifecycle settings for an access package, you need to open the corresponding policy. Follow these steps to open the lifecycle settings for an access package.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
For information about the priority logic that is used when multiple policies app
If you have a set of users that should have different request and approval settings, you'll likely need to create a new policy. Follow these steps to start adding a new policy to an existing access package:
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-requests.md
In Azure AD entitlement management, you can see who has requested access package
## View requests
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-resources.md
This video provides an overview of how to change an access package.
If you need to add resources to an access package, you should check whether the resources you need are available in the catalog. If you are an access package manager, you cannot add resources to a catalog, even if you own them. You are restricted to using the resources available in the catalog.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-settings.md
As long as the catalog for the access package is [enabled for external users](en
## Share link to request an access package
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-create.md
To reduce the risk of stale access, you should enable periodic reviews of users
To enable reviews of access packages, you must meet the prerequisites for creating an access package:
- Azure AD Premium P2
-- Global administrator, User administrator, Catalog owner, or Access package manager
+- Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
For more information, see [License requirements](entitlement-management-overview.md#license-requirements).
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
Azure AD entitlement management simplifies how enterprises manage access to grou
To review users' active access package assignments, you must meet the prerequisites to do an access review:
- Azure AD Premium P2
-- Global administrator
-- Designated User administrator, Catalog owner, or Access package manager
+- Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
For more information, see [License requirements](entitlement-management-overview.md#license-requirements).
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add additional catalog owners.
-**Prerequisite role:** Global administrator, User administrator, or Catalog creator
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
The user that created a catalog becomes the first catalog owner. To delegate man
Follow these steps to assign a user to the catalog owner role:
-**Prerequisite role:** Global administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
Follow these steps to assign a user to the catalog owner role:
You can edit the name and description for a catalog. Users see this information in an access package's details.
-**Prerequisite role:** Global administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
You can edit the name and description for a catalog. Users see this information
You can delete a catalog, but only if it does not have any access packages.
-**Prerequisite role:** Global administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate-catalog.md
To delegate to users who aren't administrators, so that they can create their ow
Follow these steps to assign a user to the catalog creator role.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global administrator, Identity Governance administrator, or User administrator
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate-managers.md
This video provides an overview of how to delegate access governance from catalo
Follow these steps to assign a user to the access package manager role:
-**Prerequisite role:** Global administrator, User administrator, or Catalog owner
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-delegate.md
After delegation, the marketing department might have roles similar to the follo
| User | Job role | Azure AD role | Entitlement management role |
| --- | --- | --- | --- |
-| Hana | IT administrator | Global administrator or User administrator | |
+| Hana | IT administrator | Global administrator, Identity Governance administrator, or User administrator | |
| Mamta | Marketing manager | User | Catalog creator and Catalog owner |
| Bob | Marketing lead | User | Catalog owner |
| Jessica | Marketing project manager | User | Access package manager |
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-external-users.md
# Govern access for external users in Azure AD entitlement management
-Azure AD entitlement management utilizes [Azure AD business-to-business (B2B)](../external-identities/what-is-b2b.md) to collaborate with people outside your organization in another directory. With Azure AD B2B, external users authenticate to their home directory, but have a representation in your directory. The representation in your directory enables the user to be assigned access to your resources.
+Azure AD entitlement management uses [Azure AD business-to-business (B2B)](../external-identities/what-is-b2b.md) to share access so you can collaborate with people outside your organization. With Azure AD B2B, external users authenticate to their home directory, but have a representation in your directory. The representation in your directory enables the user to be assigned access to your resources.
This article describes the settings you can specify to govern access for external users. ## How entitlement management can help
-When using the [Azure AD B2B](../external-identities/what-is-b2b.md) invite experience, you must already know the email addresses of the external guest users you want to bring into your resource directory and work with. This works great when you're working on a smaller or short-term project and you already know all the participants, but this is harder to manage if you have lots of users you want to work with or if the participants change over time. For example, you might be working with another organization and have one point of contact with that organization, but over time additional users from that organization will also need access.
+When using the [Azure AD B2B](../external-identities/what-is-b2b.md) invite experience, you must already know the email addresses of the external guest users you want to bring into your resource directory and work with. Directly inviting each user works great when you're working on a smaller or short-term project and you already know all the participants, but this process is harder to manage if you have lots of users you want to work with, or if the participants change over time. For example, you might be working with another organization and have one point of contact with that organization, but over time additional users from that organization will also need access.
-With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. You can specify whether approval is required and an expiration date for the access. If approval is required, you can also invite one or more users from the external organization to your directory and designate them as approvers - since they are likely to know which external users from their organization need access. Once you have configured the access package, you can send the access package's link to your contact person (sponsor) at the external organization. That contact can share with other users in the external organization, and they can use this link to request the access package. Users from that organization who have already been invited into your directory can also use that link.
+With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. That policy includes whether approval is required, whether access reviews are required, and an expiration date for the access. If approval is required, you might consider inviting one or more users from the external organization to your directory, designating them as sponsors, and configuring those sponsors as approvers, since they are likely to know which external users from their organization need access. Once you have configured the access package, obtain the access package's request link so you can send that link to your contact person (sponsor) at the external organization. That contact can share the link with other users in their external organization, and they can use it to request the access package. Users from that organization who have already been invited into your directory can also use that link.
-When a request is approved, entitlement management will provision the user with the necessary access, which may include inviting the user if they're not already in your directory. Azure AD will automatically create a B2B guest account for them. Note that an administrator may have previously limited which organizations are permitted for collaboration, by setting a [B2B allow or deny list](../external-identities/allow-deny-list.md) to allow or block invites to other organizations. If the user is not permitted by the allow or block list, then they will not be invited.
+Typically, when a request is approved, entitlement management will provision the user with the necessary access. If the user is not already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them, but will not send the user an email. Note that an administrator may have previously limited which organizations are permitted for collaboration, by setting a [B2B allow or deny list](../external-identities/allow-deny-list.md) to allow or block invites to other organizations. If the user is not permitted by the allow or block list, then they will not be invited, and cannot be assigned access until the lists are updated.
Since you do not want the external user's access to last forever, you specify an expiration date in the policy, such as 180 days. After 180 days, if their access is not extended, entitlement management will remove all access associated with that access package. By default, if the user who was invited through entitlement management has no other access package assignments, then when they lose their last assignment, their guest account will be blocked from signing in for 30 days, and subsequently removed. This prevents the proliferation of unnecessary accounts. As described in the following sections, these settings are configurable.
The following diagram and steps provide an overview of how external users are gr
1. You send a [My Access portal link](entitlement-management-access-package-settings.md) to your contact at the external organization that they can share with their users to request the access package.
-1. An external user (**Requestor A** in this example) uses the My Access portal link to [request access](entitlement-management-request-access.md) to the access package. How the user signs in depends on the authentication type of the directory or domain defined in the connected organization.
+1. An external user (**Requestor A** in this example) uses the My Access portal link to [request access](entitlement-management-request-access.md) to the access package. How the user signs in depends on the authentication type of the directory or domain that's defined in the connected organization and in the external users settings.
1. An approver [approves the request](entitlement-management-request-approve.md) (or the request is auto-approved).
The following diagram and steps provide an overview of how external users are gr
1. To access the resources, the external user can either click the link in the email or attempt to access any of the directory resources directly to complete the invitation process.
-1. Depending on the policy settings, as time passes, the access package assignment for the external user expires, and the external user's access is removed.
+1. If the policy settings include an expiration date, then when the access package assignment for the external user later expires, the external user's access rights from that access package are removed.
1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user is blocked from signing in and the guest user account is removed from your directory.
To ensure people outside of your organization can request access packages and ge
### Configure your Azure AD B2B external collaboration settings

- Allowing guests to invite other guests to your directory means that guest invites can occur outside of entitlement management. We recommend setting **Guests can invite** to **No** to only allow for properly governed invitations.
-- If you are using the B2B allow list, you must make sure any domain you want to partner with using entitlement management is added to the list. Alternatively, if you are using the B2B deny list, you must make sure any domain you want to partner with is not added to the list.
+- If you are using the B2B allow list, you must make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you are using the B2B deny list, you must make sure that no domain of any organization you want to partner with is present on that list.
- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. Any B2B allow or deny list settings you have will take precedence. Therefore, be sure to add the domains you intend to cover with this policy to your allow list if you are using one, and exclude them from your deny list if you are using a deny list.
- If you want to create an entitlement management policy that includes **All users** (All connected organizations + any new external users), you must first enable email one-time passcode authentication for your directory. For more information, see [Email one-time passcode authentication](../external-identities/one-time-passcode.md).
- For more information about Azure AD B2B external collaboration settings, see [Enable B2B external collaboration and manage who can invite guests](../external-identities/delegate-invitations.md).
To ensure people outside of your organization can request access packages and ge
## Manage the lifecycle of external users
-You can select what happens when an external user, who was invited to your directory through an access package request being approved, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they are blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory.
+You can select what happens when an external user, who was invited to your directory through an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they are blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global administrator, Identity Governance administrator, or User administrator
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-organization.md
For a demonstration of how to add a connected organization, watch the following
To add an external Azure AD directory or domain as a connected organization, follow the instructions in this section.
-**Prerequisite role**: *Global administrator* or *User administrator*
+**Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator*
1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**.
active-directory Entitlement Management Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-reports.md
Watch the following video to learn how to view what resources users have access
This report enables you to list all of the access packages a user can request and the access packages that are currently assigned to the user.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global administrator, Identity Governance administrator, or User administrator
1. Click **Azure Active Directory** and then click **Identity Governance**.
This report enables you to list all of the access packages a user can request an
This report enables you to list the resources currently assigned to a user in entitlement management. Note that this report is for resources managed with entitlement management. The user might have access to other resources in your directory outside of entitlement management.
-**Prerequisite role:** Global administrator or User administrator
+**Prerequisite role:** Global administrator, Identity Governance administrator, or User administrator
1. Click **Azure Active Directory** and then click **Identity Governance**.
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-troubleshoot.md
This article describes some items you should check to help you troubleshoot Azur
### View a request's delivery errors
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager, or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
You can only reprocess a request that has a status of **Delivery failed** or **P
- If the error wasn't fixed during the trials window, the request status may be **Delivery failed** or **Partially delivered**. You can then use the **reprocess** button. You'll have seven days to reprocess the request.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager, or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
You can only reprocess a request that has a status of **Delivery failed** or **P
You can only cancel a pending request that has not yet been delivered or whose delivery has failed. Otherwise, the **cancel** button is grayed out.
-**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
+**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager, or Access package assignment manager
1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/identity-governance-overview.md
It's a best practice to use the least privileged role to perform administrative
|Privileged Identity Management | Privileged role administrator |
| Terms of use | Security administrator or Conditional access administrator |
+>[!NOTE]
+>The least privileged role for Entitlement management will be changing from the User Administrator role to the Identity Governance Administrator role.
+
## Next steps

- [What is Azure AD entitlement management?](entitlement-management-overview.md)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following list to configure managed identity for Azure Functions (i
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
| --- | :-: | :-: | :-: | :-: |
| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
-| User assigned | Not available | Not available | Not available | Not available |
+| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
Refer to the following list to configure managed identity for Azure IoT Hub (in regions where available):

-- [Azure portal](../../iot-hub/virtual-network-support.md#turn-on-managed-identity-for-iot-hub)
+- For more information, please see [Azure IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md).
### Azure Import/Export
active-directory Pim Apis Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-apis-concept.md
+
+ Title: API concepts in Privileged Identity management - Azure AD | Microsoft Docs
+description: Information for understanding the APIs in Azure AD Privileged Identity Management (PIM).
+
+documentationcenter: ''
++
+editor: ''
++++ Last updated : 05/04/2021++++
+# Understand the Privileged Identity Management APIs
+
+You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and the Azure Resource Manager API for Azure resource roles (sometimes called Azure RBAC roles). This article describes important concepts for using the APIs for Privileged Identity Management.
+
+For requests and other details about PIM APIs, check out:
+
+- PIM for Azure AD roles API reference
+- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
+
+> [!IMPORTANT]
+> PIM APIs [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
+
+## PIM API history
+
+There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
+
+### Iteration 1 - only supports Azure AD roles, deprecating
+
+Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the PIM API, which is no longer supported in most tenants. We are in the process of deprecating remaining access to this API on 05/31.
+
+### Iteration 2 - supports Azure AD roles and Azure resource roles
+
+Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant, but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will eventually be deprecated.
+
+### Current iteration - Azure AD roles in Microsoft Graph and Azure resource roles in Azure Resource Manager
+
+Now in beta, Microsoft has the final iteration of the PIM API before we release the API to general availability. Based on customer feedback, the Azure AD PIM API is now under the unifiedRoleManagement set of APIs, and the Azure resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits, including:
+
+- Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure resource roles.
+- Reducing the need to call additional PIM APIs to onboard a resource, get a resource, or get a role definition.
+- Supporting app-only permissions.
+- New features such as approval and email notification configuration.
+
+In the current iteration, there is *no API support* for PIM alerts and privileged access groups. They are on the roadmap for future development.
+
+## Current permissions required
+
+- Azure AD roles
+
+ To call the PIM Graph API for Azure AD roles, you will need at least one of the following permissions:
+
+ - RoleManagement.ReadWrite.Directory
+ - RoleManagement.Read.Directory
+
+ The easiest way to specify the required permissions is to use the Azure AD consent framework.
+
+- Azure resource roles
+
+ The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Graph permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer. A sketch of one such request follows this list.
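+
+A minimal sketch of an Azure resource role request against the Azure Resource Manager API referenced earlier; the api-version, the request body shape, and all IDs and paths below are illustrative assumptions rather than a definitive recipe:
+
+```python
+# Hedged sketch: create an eligible Azure role assignment through ARM.
+# Assumes the caller's token has Owner or User Access Administrator on the scope.
+import uuid
+
+import requests
+
+ARM_TOKEN = "<arm-access-token>"  # placeholder: token for audience https://management.azure.com
+scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"  # placeholder scope
+request_name = str(uuid.uuid4())  # each request needs a new GUID name
+
+body = {
+    "properties": {
+        "principalId": "<user-or-group-object-id>",  # placeholder
+        "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>",
+        "requestType": "AdminAssign",  # assumption: an admin creates the eligibility
+        "scheduleInfo": {
+            "startDateTime": "2021-05-10T00:00:00Z",
+            "expiration": {"type": "AfterDuration", "duration": "P90D"},  # 90-day eligibility
+        },
+    }
+}
+
+resp = requests.put(
+    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/roleEligibilityScheduleRequests/{request_name}",
+    headers={"Authorization": f"Bearer {ARM_TOKEN}", "Content-Type": "application/json"},
+    params={"api-version": "2020-10-01-preview"},  # assumption: preview api-version
+    json=body,
+)
+resp.raise_for_status()
+```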
+
+## Calling PIM API with an app-only token
+
+- Azure AD roles
+
+ PIM API now supports app-only permissions on top of delegated permissions. For app-only permissions, you must call the API with an application that has already been granted consent for the permissions above. For delegated permissions, you must call the PIM API with both a user and an application token. The user must be assigned either the Global Administrator role or the Privileged Role Administrator role, and the service principal calling the API must have at least the Owner or User Access Administrator role on the resource you are trying to administer.
+
+- Azure resource roles
+
+ PIM API for Azure resources supports both user-only and application-only calls. Simply make sure the service principal has either the Owner or User Access Administrator role on the resource. The sketch below shows one way to acquire an app-only token.
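+
+A minimal sketch of acquiring an app-only Microsoft Graph token with MSAL for Python, assuming an app registration that has been admin-consented to RoleManagement.ReadWrite.Directory; the tenant ID, client ID, and secret are placeholders:
+
+```python
+# Hedged sketch: client credentials flow yields an app-only token.
+import msal
+
+TENANT_ID = "<tenant-id>"              # placeholder
+CLIENT_ID = "<app-client-id>"          # placeholder
+CLIENT_SECRET = "<app-client-secret>"  # placeholder; prefer a certificate in production
+
+app = msal.ConfidentialClientApplication(
+    CLIENT_ID,
+    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
+    client_credential=CLIENT_SECRET,
+)
+
+# /.default resolves to whatever application permissions were consented for Graph.
+result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
+access_token = result["access_token"]  # pass as a Bearer token on PIM Graph calls
+```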
+
+## Design of current API iteration
+
+The PIM API consists of two categories that are consistent across both Azure AD roles and Azure resource roles: the assignment and activation API requests, and the policy settings.
+
+### Assignment and activation API
+
+To make eligible assignments, to make time-bound eligible/active assignments, and to activate assignments, PIM provides the following entities:
+
+- RoleAssignmentSchedule
+- RoleEligibilitySchedule
+- RoleAssignmentScheduleInstance
+- RoleEligibilityScheduleInstance
+- RoleAssignmentScheduleRequest
+- RoleEligibilityScheduleRequest
+
+These entities work alongside pre-existing roleDefinition and roleAssignment entities for both Azure AD roles and Azure roles to allow you to create end-to-end scenarios.
+
+- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity.
+
+- To create an eligible assignment with or without an expiration time, you can use the write operation on roleEligibilityScheduleRequest (a sketch follows this list).
+
+- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on roleAssignmentScheduleRequest.
+
+- To activate an eligible assignment, you should also use the write operation on roleAssignmentScheduleRequest, with a modified action parameter called selfActivate.
+
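+A hedged sketch of that write operation against the Graph beta endpoint, using the roleEligibilityScheduleRequest entity described above; the enum casing, dates, and principal ID are illustrative assumptions, and the role definition ID is the Helpdesk Administrator role template ID used purely for illustration:
+
+```python
+# Hedged sketch: create an eligible assignment via roleEligibilityScheduleRequest.
+import requests
+
+headers = {
+    "Authorization": "Bearer <token>",  # placeholder: token from the earlier MSAL sketch
+    "Content-Type": "application/json",
+}
+
+body = {
+    "action": "adminAssign",  # assumption: "selfActivate" is the activation action
+    "justification": "Grant eligible Helpdesk Administrator access",
+    "roleDefinitionId": "729827e3-9c14-49f7-bb1b-9608f156bbb8",  # Helpdesk Administrator
+    "directoryScopeId": "/",  # tenant-wide scope
+    "principalId": "<user-object-id>",  # placeholder
+    "scheduleInfo": {
+        "startDateTime": "2021-05-10T00:00:00Z",
+        "expiration": {"type": "afterDateTime", "endDateTime": "2021-11-10T00:00:00Z"},
+    },
+}
+
+resp = requests.post(
+    "https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests",
+    headers=headers,
+    json=body,
+)
+resp.raise_for_status()
+print(resp.json().get("status"))  # the request produces a roleEligibilitySchedule
+```
+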
+Each request object creates either a roleAssignmentSchedule or a roleEligibilitySchedule object. These objects are read-only and show a schedule of all the current and future assignments.
+
+When an eligible assignment is activated, the roleEligibilityScheduleInstance continues to exist. The roleAssignmentScheduleRequest for the activation creates a separate roleAssignmentSchedule and roleAssignmentScheduleInstance for the activated duration.
+
+The instance objects are the actual assignments that currently exist, whether eligible or active. You should use the GET operation on the instance entity to retrieve a list of eligible or active assignments for a role or user, as in the sketch below.
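+
+A short sketch of that GET, assuming the beta instance endpoint supports a $filter on principalId (the filter syntax and the object ID are assumptions):
+
+```python
+# Hedged sketch: list a user's current eligible assignments.
+import requests
+
+headers = {"Authorization": "Bearer <token>"}  # placeholder
+resp = requests.get(
+    "https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleInstances",
+    headers=headers,
+    params={"$filter": "principalId eq '<user-object-id>'"},
+)
+resp.raise_for_status()
+for instance in resp.json().get("value", []):
+    print(instance["roleDefinitionId"], instance.get("endDateTime"))
+```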
+
+### Policy setting API
+
+To manage the settings, PIM provides the following entities:
+
+- roleManagementPolicy
+- roleManagementPolicyAssignment
+
+The *role management policy* defines the settings of the rules; for example, whether MFA or approval is required for activation, who receives email notifications, and whether permanent assignments are allowed. The *policy assignment* attaches the policy to a specific role.
+
+The two-entity design could support future scenarios such as attaching a policy to multiple roles. For now, the way to use this API is to get the list of all roleManagementPolicyAssignments, filter it by the roleDefinitionId you want to modify, and then update the policy associated with that policy assignment, as in the sketch below.
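+
+A hedged sketch of that flow against the beta endpoints; the $filter syntax, the rule ID, and the rule properties are assumptions based on the roleManagementPolicy contract described above, and the role definition ID is again the Helpdesk Administrator template ID used for illustration:
+
+```python
+# Hedged sketch: find the policy attached to a role, then update one rule.
+import requests
+
+GRAPH = "https://graph.microsoft.com/beta"
+headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}  # placeholder
+
+# 1. List policy assignments, filtered to the role definition of interest.
+flt = ("scopeId eq '/' and scopeType eq 'DirectoryRole' "
+       "and roleDefinitionId eq '729827e3-9c14-49f7-bb1b-9608f156bbb8'")
+assignments = requests.get(
+    f"{GRAPH}/policies/roleManagementPolicyAssignments",
+    headers=headers,
+    params={"$filter": flt},
+).json()["value"]
+policy_id = assignments[0]["policyId"]
+
+# 2. Update a rule on that policy, e.g. the end-user activation expiration.
+rule_update = {
+    "@odata.type": "#microsoft.graph.unifiedRoleManagementPolicyExpirationRule",
+    "isExpirationRequired": True,
+    "maximumDuration": "PT4H",  # cap activations at four hours
+}
+resp = requests.patch(
+    f"{GRAPH}/policies/roleManagementPolicies/{policy_id}/rules/Expiration_EndUser_Assignment",
+    headers=headers,
+    json=rule_update,
+)
+resp.raise_for_status()
+```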
+
+## Relationship between PIM entities and role assignment entities
+
+The only link between the PIM entity and the role assignment entity for persistent (active) assignments, for either Azure AD roles or Azure roles, is the roleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and roleAssignmentScheduleInstance both include:
+
+- Persistent (active) assignments made outside of PIM
+- Persistent (active) assignments with a schedule made inside PIM
+- Activated eligible assignments
+
+## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 04/26/2021 Last updated : 05/05/2021
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b |
> | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect, and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
+> | [Identity Governance Administrator](#identity-governance-administrator) | Manage access using Azure AD for identity governance scenarios. | 45d8d3c5-c802-45c6-b32a-1d70b5e1e86e |
> | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c |
> | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the M365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d |
> | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
Users in this role can create, manage and deploy provisioning configuration setu
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Identity Governance Administrator
+
+Users with this role can manage Azure AD identity governance configuration, including access packages, access reviews, catalogs, and policies, ensuring that access is approved and reviewed, and that guest users who no longer need access are removed. A sketch of assigning this role programmatically follows the permissions table.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/accessReviews/allProperties/allTasks | Create and delete access reviews, and read and update all properties of access reviews in Azure AD |
+> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management |
+> | microsoft.directory/groups/members/update | Update members of groups, excluding role-assignable groups |
+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments |
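+
+A minimal sketch of assigning this role through the Microsoft Graph role assignment API, assuming a token with RoleManagement.ReadWrite.Directory; the role definition ID is the Identity Governance Administrator template ID from the table above, and the principal ID is a placeholder:
+
+```python
+# Hedged sketch: assign the Identity Governance Administrator role to a user.
+import requests
+
+headers = {
+    "Authorization": "Bearer <token>",  # placeholder
+    "Content-Type": "application/json",
+}
+body = {
+    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
+    "roleDefinitionId": "45d8d3c5-c802-45c6-b32a-1d70b5e1e86e",  # Identity Governance Administrator
+    "principalId": "<user-object-id>",  # placeholder
+    "directoryScopeId": "/",  # tenant-wide assignment
+}
+resp = requests.post(
+    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
+    headers=headers,
+    json=body,
+)
+resp.raise_for_status()
+```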
+
## Insights Administrator

Users in this role can access the full set of administrative capabilities in the [M365 Insights application](https://go.microsoft.com/fwlink/?linkid=2129521). This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights admin settings aspects.
active-directory Ardoq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ardoq-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Ardoq | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Ardoq.
++++++++ Last updated : 05/07/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Ardoq
+
+In this tutorial, you'll learn how to integrate Ardoq with Azure Active Directory (Azure AD). When you integrate Ardoq with Azure AD, you can:
+
+* Control in Azure AD who has access to Ardoq.
+* Enable your users to be automatically signed-in to Ardoq with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Ardoq single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Ardoq supports **SP and IDP** initiated SSO.
+* Ardoq supports **Just In Time** user provisioning.
+
+## Adding Ardoq from the gallery
+
+To configure the integration of Ardoq into Azure AD, you need to add Ardoq from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Ardoq** in the search box.
+1. Select **Ardoq** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Ardoq
+
+Configure and test Azure AD SSO with Ardoq using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Ardoq.
+
+To configure and test Azure AD SSO with Ardoq, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Ardoq SSO](#configure-ardoq-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Ardoq test user](#create-ardoq-test-user)** - to have a counterpart of B.Simon in Ardoq that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Ardoq** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | Identifier |
+ |------------|
+ | `https://<CustomerName>.us.ardoq.com/saml/v2` |
+ | `https://<CustomerName>.ardoq.com/saml/v2` |
++
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.ardoq.com/saml/v2`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ |-------------|
+ | `https://<CustomerName>.ardoq.com/saml/v2` |
+ | `https://<CustomerName>.us.ardoq.com/saml/v2` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Ardoq Client support team](mailto:support@ardoq.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Ardoq application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Ardoq application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | displayName | user.displayname |
+ | assignedRoles | user.assignedroles |
+ | mail | user.mail |
+
+ > [!NOTE]
+ > Ardoq expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
++
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Ardoq** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Ardoq.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Ardoq**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Ardoq SSO
+
+To configure single sign-on on the **Ardoq** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Ardoq support team](mailto:support@ardoq.com). They will configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Ardoq test user
+
+In this section, a user called Britta Simon is created in Ardoq. Ardoq supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Ardoq, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Ardoq Sign-on URL where you can initiate the login flow.
+
+* Go to the Ardoq Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Ardoq instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Ardoq tile in My Apps, if configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Ardoq instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Ardoq you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Autodesk Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/autodesk-sso-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Autodesk SSO | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Autodesk SSO.
++++++++ Last updated : 05/04/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Autodesk SSO
+
+In this tutorial, you'll learn how to integrate Autodesk SSO with Azure Active Directory (Azure AD). When you integrate Autodesk SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Autodesk SSO.
+* Enable your users to be automatically signed-in to Autodesk SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Autodesk SSO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Autodesk SSO supports **SP** initiated SSO.
+
+* Autodesk SSO supports **Just In Time** user provisioning.
++
+## Adding Autodesk SSO from the gallery
+
+To configure the integration of Autodesk SSO into Azure AD, you need to add Autodesk SSO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Autodesk SSO** in the search box.
+1. Select **Autodesk SSO** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Autodesk SSO
+
+Configure and test Azure AD SSO with Autodesk SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Autodesk SSO.
+
+To configure and test Azure AD SSO with Autodesk SSO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Autodesk SSO SSO](#configure-autodesk-sso-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Autodesk SSO test user](#create-autodesk-sso-test-user)** - to have a counterpart of B.Simon in Autodesk SSO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Autodesk SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<UNIQUE_ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://autodesk-prod.okta.com/sso/saml2/<UNIQUE_ID>`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://autodesk-prod.okta.com/sso/saml2/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Autodesk SSO Client support team](mailto:apps.email@autodesk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Autodesk SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Autodesk SSO application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | objectGUID | user.objectid |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Autodesk SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Autodesk SSO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Autodesk SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Autodesk SSO SSO
+
+To configure single sign-on on the **Autodesk SSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [Autodesk SSO support team](mailto:apps.email@autodesk.com). They will configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Autodesk SSO test user
+
+In this section, a user called Britta Simon is created in Autodesk SSO. Autodesk SSO supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Autodesk SSO, a new one is created after authentication.
+
+## Test SSO
+
+To test the Autodesk SSO, open the Autodesk console, click the **Test Connection** button, and authenticate using the test account that you created in the **Create an Azure AD test user** section.
+
+## Next steps
+
+Once you configure Autodesk SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Check Point Harmony Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/check-point-harmony-connect-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Check Point Harmony Connect | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Check Point Harmony Connect.
++++++++ Last updated : 05/04/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Check Point Harmony Connect
+
+In this tutorial, you'll learn how to integrate Check Point Harmony Connect with Azure Active Directory (Azure AD). When you integrate Check Point Harmony Connect with Azure AD, you can:
+
+* Control in Azure AD who has access to Check Point Harmony Connect.
+* Enable your users to be automatically signed-in to Check Point Harmony Connect with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Check Point Harmony Connect single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Check Point Harmony Connect supports **SP** initiated SSO.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
++
+## Adding Check Point Harmony Connect from the gallery
+
+To configure the integration of Check Point Harmony Connect into Azure AD, you need to add Check Point Harmony Connect from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Check Point Harmony Connect** in the search box.
+1. Select **Check Point Harmony Connect** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Check Point Harmony Connect
+
+Configure and test Azure AD SSO with Check Point Harmony Connect using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Check Point Harmony Connect.
+
+To configure and test Azure AD SSO with Check Point Harmony Connect, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Check Point Harmony Connect SSO](#configure-check-point-harmony-connect-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Check Point Harmony Connect test user](#create-check-point-harmony-connect-test-user)** - to have a counterpart of B.Simon in Check Point Harmony Connect that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Check Point Harmony Connect** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ In the **Sign on URL** text box, type the URL:
+ `https://cloudinfra-gw.portal.checkpoint.com/api/saml/sso`
+
+1. The Check Point Harmony Connect application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Check Point Harmony Connect application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | - | |
+ | groups | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Check Point Harmony Connect** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Check Point Harmony Connect.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Check Point Harmony Connect**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Check Point Harmony Connect SSO
+
+1. Log in to your Check Point Harmony Connect website as an administrator.
+
+1. Click **SETTINGS**, go to **Identity Provider**, and then click **CONNECT NOW**.
+
+ ![screenshot for identity provider.](./media/check-point-harmony-connect-tutorial/identity-provider.png)
+
+1. Select **Microsoft Azure AD** as your identity provider and click **NEXT**.
+
+ ![screenshot to select identity provider.](./media/check-point-harmony-connect-tutorial/select-identity-provider.png)
+
+1. On the **Verify Domain** page, enter your organization's domain, add the generated DNS record to your DNS server as a TXT record, and then click **NEXT**.
+
+ ![screenshot for Domain value.](./media/check-point-harmony-connect-tutorial/domain.png)
+
+1. On the **Allow Connectivity** page, perform the following steps:
+
+ ![screenshot for Allow Connectivity page.](./media/check-point-harmony-connect-tutorial/allow-connectivity.png)
+
+ a. Copy the **ENTITY ID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ b. Copy the **REPLY URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. Click **NEXT**.
+
+1. On the **Configure Metadata** page, upload the **Federation Metadata XML** file that you downloaded from the Azure portal.
+
+1. On the **CONFIRM IDENTITY PROVIDER** page, click **Add** to complete the configuration.
+
+### Create Check Point Harmony Connect test user
+
+1. Log in to your Check Point Harmony Connect website as an administrator.
+
+1. Go to **Policy** -> **Access Control**, create a **new rule**, and then click **(+)** to add a **New User**.
+
+ ![screenshot for create new user.](./media/check-point-harmony-connect-tutorial/add-new-user.png)
+
+1. In the **ADD USER** window, enter the **Name** and **User Name** in their respective text boxes, and then click **ADD**.
+
+ ![screenshot for create user.](./media/check-point-harmony-connect-tutorial/add-user.png)
+
+## Test SSO
+
+To test Check Point Harmony Connect SSO, go to the Check Point authentication service and authenticate using the test account that you created in the **Create an Azure AD test user** section.
+
+## Next steps
+
+Once you configure Check Point Harmony Connect, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Documo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/documo-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Documo | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Documo.
++++++++ Last updated : 05/05/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Documo
+
+In this tutorial, you'll learn how to integrate Documo with Azure Active Directory (Azure AD). When you integrate Documo with Azure AD, you can:
+
+* Control in Azure AD who has access to Documo.
+* Enable your users to be automatically signed-in to Documo with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Documo single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Documo supports **SP and IDP** initiated SSO.
+* Documo supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Documo from the gallery
+
+To configure the integration of Documo into Azure AD, you need to add Documo from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Documo** in the search box.
+1. Select **Documo** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Documo
+
+Configure and test Azure AD SSO with Documo using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Documo.
+
+To configure and test Azure AD SSO with Documo, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Documo SSO](#configure-documo-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Documo test user](#create-documo-test-user)** - to have a counterpart of B.Simon in Documo that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Documo** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.documo.com/sso`
+
+1. Click **Save**.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Documo** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Documo.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Documo**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Documo SSO
+
+1. Log in to your Documo website as an administrator.
+
+1. Go to **Account Settings** -> **Security**.
+
+ ![screenshot for security page.](./media/documo-tutorial/security.png)
+
+1. On the **Security** tab, click the **Configure SSO** button at the bottom of the page.
+
+ ![screenshot for configure button.](./media/documo-tutorial/configure-sso.png)
+
+1. Perform the following steps on the **Setup SAML** page.
+
+ ![screenshot for configuration page.](./media/documo-tutorial/setup-saml.png)
+
+ a. In the **Entity Id** textbox, paste the **Azure AD Identifier** value that you copied from the Azure portal.
+
+ b. In the **SSO URL (Redirect URL)** textbox, paste the **Login URL** value that you copied from the Azure portal.
+
+ c. Enter your **Email Domain** value in the text box.
+
+ d. Enter a value in the **Field Name in SAML Token containing Identity email** text box.
+
+ e. Open the **Federation Metadata XML** file that you downloaded from the Azure portal in Notepad, and paste its content into the **Signer Certificate** textbox.
+
+ f. Click **Submit**.
+
+### Create Documo test user
+
+In this section, a user called B.Simon is created in Documo. Documo supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Documo, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Documo Sign-on URL, where you can initiate the login flow.
+
+* Go to the Documo Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Documo instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Documo tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Documo instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Documo, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Leadfamly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/leadfamly-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Leadfamly | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Leadfamly.
++++++++ Last updated : 05/05/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Leadfamly
+
+In this tutorial, you'll learn how to integrate Leadfamly with Azure Active Directory (Azure AD). When you integrate Leadfamly with Azure AD, you can:
+
+* Control in Azure AD who has access to Leadfamly.
+* Enable your users to be automatically signed-in to Leadfamly with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Leadfamly single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Leadfamly supports **SP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Leadfamly from the gallery
+
+To configure the integration of Leadfamly into Azure AD, you need to add Leadfamly from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Leadfamly** in the search box.
+1. Select **Leadfamly** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Leadfamly
+
+Configure and test Azure AD SSO with Leadfamly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Leadfamly.
+
+To configure and test Azure AD SSO with Leadfamly, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Leadfamly SSO](#configure-leadfamly-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Leadfamly test user](#create-leadfamly-test-user)** - to have a counterpart of B.Simon in Leadfamly that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Leadfamly** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following step:
+
+ a. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://appv2.leadfamly.com/saml-sso/<INSTANCE ID>`
+
+ > [!NOTE]
+ > This value is not real. Update it with the actual Sign on URL. Contact the [Leadfamly Client support team](mailto:support@leadfamly.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Leadfamly** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Leadfamly.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Leadfamly**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Leadfamly SSO
+
+1. Log in to your Leadfamly company site as an administrator.
+
+2. Go to **Account** -> **Customer information** -> **SAML SSO**.
+
+![Account](./media/leadfamly-tutorial/configuration.png "Account")
+
+3. Enable **SAML SSO**, select the **Azure AD** provider from the dropdown list, and then perform the following steps.
+
+![Information](./media/leadfamly-tutorial/account.png "Information")
+
+ a. Copy the **Identifier** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ b. Copy the **Reply URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. Copy the **Sign on URL** value and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ d. Open the **Federation Metadata XML** file that you downloaded from the Azure portal in Notepad, and upload its content into **Federation Metadata XML**.
+
+ e. Click **Save**.
+
+### Create Leadfamly test user
+
+1. In a different web browser window, sign in to the Leadfamly website as an administrator.
+
+2. Go to **Account** -> **Users** -> **Invite user**.
+
+![Users Section](./media/leadfamly-tutorial/users.png "Users Section")
+
+3. Fill in the required values in the following fields and click **Save**.
+
+![Modify Users](./media/leadfamly-tutorial/modify-user.png "Modify Users")
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Leadfamly Sign-on URL, where you can initiate the login flow.
+
+* Go to Leadfamly Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Leadfamly tile in My Apps, you're redirected to the Leadfamly Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Leadfamly, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Previously updated : 12/10/2019 Last updated : 05/10/2021
After you've configured provisioning, use the following resources to monitor you
* The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
+* Self-hosted ServiceNow instances are not supported.
+ ## Additional resources * [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/support-policies.md
Microsoft provides technical support for the following examples:
* Connectivity to other Azure services and applications * Ingress controllers and ingress or load balancer configurations * Network performance and latency
+ * [Network policies](use-network-policies.md#differences-between-azure-and-calico-policies-and-their-capabilities)
> [!NOTE]
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-getting-started.md
You can assign a node to use a different node configuration than the one you ini
## Unregister a node
-If you no longer want a node to be managed by State Configuration, you can unregister it. See [How to remove a configuration and node from Automation State Configuration](./how-to/remove-desired-state-configuration-package.md).
+If you no longer want a node to be managed by State Configuration, you can unregister it. See [How to remove a configuration and node from Automation State Configuration](./state-configuration/remove-node-and-configuration-package.md).
## Next steps
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/state-configuration/remove-node-and-configuration-package.md
+
+ Title: Remove DSC and node from Automation State Configuration
+description: This article explains how to remove an Azure Automation State Configuration (DSC) configuration document assigned and unregister a managed node.
+++ Last updated : 04/16/2021+++
+# How to remove a configuration and node from Automation State Configuration
+
+This article covers how to unregister a node managed by Automation State Configuration, and safely remove a PowerShell Desired State Configuration (DSC) configuration from managed nodes. For both Windows and Linux nodes, you need to [unregister the node](#unregister-a-node) and [delete the configuration](#delete-a-configuration-from-the-azure-portal). For Linux nodes only, you can optionally delete the DSC packages from the nodes as well. See [Remove the DSC package from a Linux node](#remove-the-dsc-package-from-a-linux-node).
+
+## Unregister a node
+
+If you no longer want a node to be managed by State Configuration (DSC), you can unregister it from the Azure portal or with Azure PowerShell using the following steps.
+
+Unregistering a node from the service only sets the Local Configuration Manager settings so the node no longer connects to the service. This doesn't affect the configuration that's currently applied to the node, and it leaves the related files in place on the node. You can optionally clean up those files. See [Delete a configuration](#delete-a-configuration).
+
+### Unregister in the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. From your Automation account, select **State configuration (DSC)** under **Configuration Management**.
+1. On the **State configuration (DSC)** page, click the **Nodes** tab.
+1. On the **Nodes** tab, select the name of the node you want to unregister.
+1. On the pane for that node, click **Unregister**.
+
+ :::image type="content" source="./media/remove-node-and-configuration-package/unregister-node.png" alt-text="Screenshot of the Node details page highlighting the Unregister button." lightbox="./media/remove-node-and-configuration-package/unregister-node.png":::
+
+### Unregister using PowerShell
+
+You can also unregister a node using the PowerShell cmdlet [Unregister-AzAutomationDscNode](/powershell/module/az.automation/unregister-azautomationdscnode).
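+
+As a minimal sketch, the following call unregisters a single node; the resource group, Automation account, and node ID are placeholders:
+
+```azurepowershell
+Unregister-AzAutomationDscNode `
+    -ResourceGroupName "myResourceGroup" `
+    -AutomationAccountName "myAutomationAccount" `
+    -Id "<node-id-guid>"
+```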
+
+>[!NOTE]
+>If your organization still uses the deprecated AzureRM modules, you can use [Unregister-AzureRmAutomationDscNode](/powershell/module/azurerm.automation/unregister-azurermautomationdscnode).
+
+## Delete a configuration
+
+When you're ready to remove an imported DSC configuration document (which is a Managed Object Format (MOF) or .mof file) that's assigned to one or more nodes, follow these steps.
+
+### Delete a configuration from the Azure portal
+
+You can delete configurations for both Windows and Linux nodes from the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. From your Automation account, select **State configuration (DSC)** under **Configuration Management**.
+1. On the **State configuration (DSC)** page, click the **Configurations** tab, then select the name of the configuration you want to delete.
+
+ :::image type="content" source="./media/remove-node-and-configuration-package/configurations-tab.png" alt-text="Screenshot of configurations tab." lightbox="./media/remove-node-and-configuration-package/configurations-tab.png":::
+
+1. On the configuration's detail page, click **Delete** to remove the configuration.
+
+ :::image type="content" source="./media/remove-node-and-configuration-package/delete-extension.png" alt-text="Screenshot of deleting an extension." lightbox="./media/remove-node-and-configuration-package/delete-extension.png":::
+
+## Manually delete a configuration file from a node
+
+If you don't want to use the Azure portal, you can manually delete the .mof configuration files as follows.
+
+### Delete a Windows configuration using PowerShell
+
+To remove an imported DSC configuration document (.mof), use the [Remove-DscConfigurationDocument](/powershell/module/psdesiredstateconfiguration/remove-dscconfigurationdocument) cmdlet.
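+
+For example, the following removes the configuration document that's currently applied to the local node; use `-Stage Pending` or `-Stage Previous` to remove the other stages:
+
+```azurepowershell
+# Remove the configuration document currently applied to the local node
+Remove-DscConfigurationDocument -Stage Current -Force
+```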
+
+### Delete a Linux configuration
+
+The configuration files are stored in `/etc/opt/omi/conf/dsc/configuration/`. Remove the .mof files in this directory to delete the node's configuration.
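+
+For example:
+
+```bash
+sudo rm -f /etc/opt/omi/conf/dsc/configuration/*.mof
+```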
+
+## Remove the DSC package from a Linux node
+
+This step is optional. Unregistering a Linux node from State Configuration (DSC) doesn't remove the DSC and OMI packages from the machine. Use the commands below to remove the packages as well as all logs and related data.
+
+To find the package names and other relevant details, see the [PowerShell Desired State Configuration for Linux](https://github.com/Microsoft/PowerShell-DSC-for-Linux) GitHub repository.
+
+### RPM-based systems
+
+```bash
+rpm -e <package name>
+```
+
+### dpkg-based systems
+
+```bash
+dpkg -P <package name>
+```
+
+ ## Next steps
+
+- If you want to re-register the node, or register a new one, see [Register a VM to be managed by State Configuration](../tutorial-configure-servers-desired-state.md#register-a-vm-to-be-managed-by-state-configuration).
+
+- If you want to add the configuration back and recompile, see [Compile DSC configurations in Azure Automation State Configuration](../automation-dsc-compile.md).
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 04/27/2021 Last updated : 05/10/2021
Arc enabled servers support the installation of the Connected Machine agent on a
The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent: - Windows Server 2008 R2, Windows Server 2012 R2 and higher (including Server Core)-- Ubuntu 16.04 and 18.04 LTS (x64)
+- Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
- CentOS Linux 7 and 8 (x64)-- SUSE Linux Enterprise Server (SLES) 15 (x64)
+- SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
- Red Hat Enterprise Linux (RHEL) 7 and 8 (x64) - Amazon Linux 2 (x64) - Oracle Linux 7
azure-functions Bring Dependency To Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/bring-dependency-to-functions.md
+
+ Title: Bring dependencies and third-party libraries to Azure Functions
+description: Learn how to bring files or third-party libraries into your Azure Functions app.
Last updated : 4/6/2020+
+zone_pivot_groups: "bring-third-party-dependency-programming-functions"
++
+# Bring dependencies or third party library to Azure Functions
+
+In this article, you learn how to bring third-party dependencies into your function apps. Examples of third-party dependencies are JSON files, binary files, and machine learning models.
+
+In this article, you learn how to:
+> [!div class="checklist"]
+> * Bring in dependencies via the Functions code project
+> * Bring in dependencies by mounting an Azure file share
+
+## Bring in dependencies from the project directory
+One of the simplest ways to bring in dependencies is to put the files or artifacts together with the function app code in the Functions project directory structure. Here's a sample directory layout in a Python functions project:
+```
+<project_root>/
+ | - my_first_function/
+ | | - __init__.py
+ | | - function.json
+ | | - example.py
+ | - dependencies/
+ | | - dependency1
+ | - .funcignore
+ | - host.json
+ | - local.settings.json
+```
+By putting the dependencies in a folder inside the function app project directory, the dependencies folder gets deployed together with the code. As a result, your function code can access the dependencies in the cloud via the file system API.
+
+### Accessing the dependencies in your code
+
+Here's an example that accesses and executes the ```ffmpeg``` dependency placed in the ```<project_root>/ffmpeg_lib``` directory.
++
+```python
+import logging
+
+import azure.functions as func
+import subprocess
+
+FFMPEG_RELATIVE_PATH = "../ffmpeg_lib/ffmpeg"
+
+def main(req: func.HttpRequest,
+         context: func.Context) -> func.HttpResponse:
+    logging.info('Python HTTP trigger function processed a request.')
+
+    command = req.params.get('command')
+    # If no command is specified, default to the help flag
+    if not command:
+        command = "-h"
+
+    # context.function_directory returns the directory in which the function is executed
+    ffmpeg_path = "/".join([str(context.function_directory), FFMPEG_RELATIVE_PATH])
+
+    try:
+        byte_output = subprocess.check_output([ffmpeg_path, command])
+        return func.HttpResponse(byte_output.decode('UTF-8').rstrip(), status_code=200)
+    except Exception as e:
+        return func.HttpResponse("Unexpected exception happened when executing ffmpeg. Error message:" + str(e), status_code=200)
+```
+>[!NOTE]
+> You may need to use `chmod` to provide `Execute` rights to the ffmpeg binary in a Linux environment
+
+One of the simplest ways to bring in dependencies is to put the files or artifacts together with the function app code in the Functions project directory structure. Here's a sample directory layout in a Java functions project:
+```
+<project_root>/
+ | - src/
+ | | - main/java/com/function
+ | | | - Function.java
+ | | - test/java/com/function
+ | - artifacts/
+ | | - dependency1
+ | - host.json
+ | - local.settings.json
+ | - pom.xml
+```
+For Java specifically, you need to explicitly include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
+
+```xml
+...
+<execution>
+    <id>copy-resources</id>
+    <phase>package</phase>
+    <goals>
+        <goal>copy-resources</goal>
+    </goals>
+    <configuration>
+        <overwrite>true</overwrite>
+        <outputDirectory>${stagingDirectory}</outputDirectory>
+        <resources>
+            <resource>
+                <directory>${project.basedir}</directory>
+                <includes>
+                    <include>host.json</include>
+                    <include>local.settings.json</include>
+                    <include>artifacts/**</include>
+                </includes>
+            </resource>
+        </resources>
+    </configuration>
+</execution>
+...
+```
+By putting the dependencies in a folder inside the function app project directory, the dependencies folder gets deployed together with the code. As a result, your function code can access the dependencies in the cloud via the file system API.
+
+### Accessing the dependencies in your code
+
+Here's an example that accesses and executes the ```ffmpeg``` dependency placed in the ```<project_root>/artifacts/ffmpeg``` directory.
++
+```java
+// Imports shown for completeness; this sample assumes the Azure Functions Java library is on the classpath.
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.annotation.HttpTrigger;
+
+public class Function {
+    final static String BASE_PATH = "BASE_PATH";
+    final static String FFMPEG_PATH = "/artifacts/ffmpeg/ffmpeg.exe";
+    final static String HELP_FLAG = "-h";
+    final static String COMMAND_QUERY = "command";
+
+    @FunctionName("HttpExample")
+    public HttpResponseMessage run(
+        @HttpTrigger(
+            name = "req",
+            methods = {HttpMethod.GET, HttpMethod.POST},
+            authLevel = AuthorizationLevel.ANONYMOUS)
+        HttpRequestMessage<Optional<String>> request,
+        final ExecutionContext context) throws IOException {
+        context.getLogger().info("Java HTTP trigger processed a request.");
+
+        // Parse the query parameter that carries the ffmpeg flags
+        String flags = request.getQueryParameters().get(COMMAND_QUERY);
+
+        if (flags == null || flags.isBlank()) {
+            flags = HELP_FLAG;
+        }
+
+        // Resolve the binary path from the BASE_PATH app setting and run it
+        Runtime rt = Runtime.getRuntime();
+        String[] commands = { System.getenv(BASE_PATH) + FFMPEG_PATH, flags };
+        Process proc = rt.exec(commands);
+
+        BufferedReader stdInput = new BufferedReader(new InputStreamReader(proc.getInputStream()));
+
+        String out = stdInput.lines().collect(Collectors.joining("\n"));
+        if (out.isEmpty()) {
+            BufferedReader stdError = new BufferedReader(new InputStreamReader(proc.getErrorStream()));
+            out = stdError.lines().collect(Collectors.joining("\n"));
+        }
+        return request.createResponseBuilder(HttpStatus.OK).body(out).build();
+    }
+}
+```
+>[!NOTE]
+> To get this snippet of code to work in Azure, you need to specify a custom application setting named `BASE_PATH` with a value of `/home/site/wwwroot`.
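+
+As a sketch, you can create that app setting with the Azure CLI; the function app and resource group names are placeholders:
+
+```console
+az functionapp config appsettings set \
+    --name <Function-App-Name> \
+    --resource-group <Resource-Group> \
+    --settings BASE_PATH=/home/site/wwwroot
+```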
++
+## Bring dependencies by mounting a file share
+
+When running your function app on Linux, there's another way to bring in third-party dependencies. Functions lets you mount a file share hosted in Azure Files. Consider this approach when you want to decouple dependencies or artifacts from your application code.
+
+First, you need to create an Azure storage account. In that account, you also need to create a file share in Azure Files. To create these resources, follow this [guide](../storage/files/storage-how-to-use-files-portal.md).
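+
+If you prefer the Azure CLI to the portal, the following sketch creates both resources; all names are placeholders:
+
+```console
+az storage account create \
+    --name <Storage-Account-Name> \
+    --resource-group <Resource-Group> \
+    --sku Standard_LRS
+
+az storage share-rm create \
+    --storage-account <Storage-Account-Name> \
+    --name <File-Share-Name>
+```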
+
+After you've created the storage account and file share, use the [az webapp config storage-account add](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command to attach the file share to your function app, as shown in the following example.
+
+```console
+az webapp config storage-account add \
+ --name < Function-App-Name > \
+ --resource-group < Resource-Group > \
+ --subscription < Subscription-Id > \
+ --custom-id < Unique-Custom-Id > \
+ --storage-type AzureFiles \
+ --account-name < Storage-Account-Name > \
+ --share-name < File-Share-Name > \
+ --access-key < Storage-Account-AccessKey > \
+ --mount-path </path/to/mount>
+```
+++
+| Flag | Value |
+| - | - |
+| custom-id | Any unique string |
+| storage-type | Only AzureFiles is supported currently |
+| share-name | Pre-existing share |
+| mount-path | Path at which the share will be accessible inside the container. Value has to be of the format `/dir-name` and it can't start with `/home` |
+
+You can find more commands to modify or delete the file share configuration [here](/cli/azure/webapp/config/storage-account#az-webapp-config-storage-account-update).
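+
+For example, the following sketch lists the current mounts and then removes one by its custom ID, using the same placeholder names as above:
+
+```console
+az webapp config storage-account list \
+    --name <Function-App-Name> \
+    --resource-group <Resource-Group>
+
+az webapp config storage-account delete \
+    --name <Function-App-Name> \
+    --resource-group <Resource-Group> \
+    --custom-id <Unique-Custom-Id>
+```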
++
+### Uploading the dependencies to Azure Files
+
+One option for uploading your dependencies to Azure Files is through the Azure portal. Refer to this [guide](../storage/files/storage-how-to-use-files-portal.md#upload-a-file) for instructions on uploading dependencies by using the portal. Other options for uploading your dependencies to Azure Files are [Azure CLI](../storage/files/storage-how-to-use-files-cli.md#upload-a-file) and [PowerShell](../storage/files/storage-how-to-use-files-powershell.md#upload-a-file).
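+
+As a sketch, an Azure CLI upload might look like the following; the account name, key, share name, and local file path are placeholders:
+
+```console
+az storage file upload \
+    --account-name <Storage-Account-Name> \
+    --account-key <Storage-Account-AccessKey> \
+    --share-name <File-Share-Name> \
+    --source ./ffmpeg
+```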
++
+### Accessing the dependencies in your code
+
+After your dependencies are uploaded in the file share, you can access the dependencies from your code. The mounted share is available at the specified *mount-path*, such as ```/path/to/mount```. You can access the target directory by using file system APIs.
+
+The following example shows HTTP trigger code that accesses the `ffmpeg` library, which is stored in a mounted file share.
+
+```python
+import logging
+import os
+
+import azure.functions as func
+import subprocess
+
+FILE_SHARE_MOUNT_PATH = os.environ['FILE_SHARE_MOUNT_PATH']
+FFMPEG = "ffmpeg"
+
+def main(req: func.HttpRequest) -> func.HttpResponse:
+    logging.info('Python HTTP trigger function processed a request.')
+
+    command = req.params.get('command')
+    # If no command is specified, default to the help flag
+    if not command:
+        command = "-h"
+
+    try:
+        # Build the full path to the ffmpeg binary on the mounted share
+        byte_output = subprocess.check_output(["/".join([FILE_SHARE_MOUNT_PATH, FFMPEG]), command])
+        return func.HttpResponse(byte_output.decode('UTF-8').rstrip(), status_code=200)
+    except Exception as e:
+        return func.HttpResponse("Unexpected exception happened when executing ffmpeg. Error message:" + str(e), status_code=200)
+```
+
+When you deploy this code to a function app in Azure, you need to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) with a key name of `FILE_SHARE_MOUNT_PATH` and value of the mounted file share path, which for this example is `/azure-files-share`. To do local debugging, you need to populate the `FILE_SHARE_MOUNT_PATH` with the file path where your dependencies are stored in your local machine. Here's an example to set `FILE_SHARE_MOUNT_PATH` using `local.settings.json`:
+
+```json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "",
+ "FUNCTIONS_WORKER_RUNTIME": "python",
+ "FILE_SHARE_MOUNT_PATH" : "PATH_TO_LOCAL_FFMPEG_DIR"
+ }
+}
+
+```
+++
+## Next steps
++ [Azure Functions Python developer guide](functions-reference-python.md)
++ [Azure Functions Java developer guide](functions-reference-java.md)
++ [Azure Functions developer reference](functions-reference.md)
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-vnet.md
Last updated 2/22/2021
# Tutorial: Integrate Azure Functions with an Azure virtual network by using private endpoints
-This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The virtual network uses a service bus queue trigger.
+This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The virtual network uses a Service Bus queue trigger.
In this tutorial, you'll: > [!div class="checklist"] > * Create a function app in the Premium plan.
-> * Create Azure resources, such as the service bus, storage account, and virtual network.
+> * Create Azure resources, such as the Service Bus, storage account, and virtual network.
> * Lock down your storage account behind a private endpoint.
-> * Lock down your service bus behind a private endpoint.
-> * Deploy a function app that uses both the service bus and HTTP triggers.
+> * Lock down your Service Bus behind a private endpoint.
+> * Deploy a function app that uses both the Service Bus and HTTP triggers.
> * Lock down your function app behind a private endpoint. > * Test to see that your function app is secure inside the virtual network. > * Clean up resources.
Congratulations! You've successfully created your premium function app.
## Create Azure resources
-Next, you'll create a storage account, a service bus, and a virtual network.
+Next, you'll create a storage account, a Service Bus, and a virtual network.
### Create a storage account Your virtual networks will need a storage account that's separate from the one you created with your function app.
Your virtual networks will need a storage account that's separate from the one y
1. Select **Review + create**. After validation finishes, select **Create**.
-### Create a service bus
+### Create a Service Bus
1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-1. On the **New** page, search for *service bus*. Then select **Create**.
+1. On the **New** page, search for *Service Bus*. Then select **Create**.
-1. On the **Basics** tab, use the following table to configure the service bus settings. All other settings can use the default values.
+1. On the **Basics** tab, use the following table to configure the Service Bus settings. All other settings can use the default values.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. | | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | myServiceBus| The name of the service bus that the private endpoint will be applied to. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
+ | **Namespace name** | myServiceBus| The name of the Service Bus that the private endpoint will be applied to. |
+ | **[Location](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
| **Pricing tier** | Premium | Choose this tier to use private endpoints with Azure Service Bus. | 1. Select **Review + create**. After validation finishes, select **Create**.
Create the virtual network to which the function app integrates:
| | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. | | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | myVirtualNet| The name of the virtual network that your function app will connect to. |
+ | **Name** | myVirtualNet| The name of the virtual network to which your function app will connect. |
| **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. | 1. On the **IP Addresses** tab, select **Add subnet**. Use the following table to configure the subnet settings.
Create the virtual network to which the function app integrates:
| Setting | Suggested value | Description | | | - | - |
- | **Subnet name** | functions | The name of the subnet your function app will connect to. |
+ | **Subnet name** | functions | The name of the subnet to which your function app will connect. |
| **Subnet address range** | 10.0.1.0/24 | The subnet address range. In the preceding image, notice that the IPv4 address space is 10.0.0.0/16. If the value were 10.1.0.0/16, the recommended subnet address range would be 10.1.1.0/24. | 1. Select **Review + create**. After validation finishes, select **Create**.
Create the private endpoints for Azure Files storage and Azure Blob Storage by u
| | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. | | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
+ | **Name** | blob-endpoint | The name of the private endpoint for blobs from your storage account. |
| **Resource** | mysecurestorage | The storage account you created. | | **Target sub-resource** | blob | The private endpoint that will be used for blobs from the storage account. |
+1. After the private endpoints are created, return to the **Firewalls and virtual networks** section of your storage account.
+1. Ensure **Selected networks** is selected. It's not necessary to add an existing virtual network.
-## Lock down your service bus
+Resources in the virtual network can now communicate with the storage account using the private endpoint.
+## Lock down your Service Bus
-Create the private endpoint to lock down your service bus:
+Create the private endpoint to lock down your Service Bus:
-1. In your new service bus, in the menu on the left, select **Networking**.
+1. In your new Service Bus, in the menu on the left, select **Networking**.
1. On the **Private endpoint connections** tab, select **Private endpoint**.
- :::image type="content" source="./media/functions-create-vnet/3-navigate-private-endpoint-service-bus.png" alt-text="Screenshot of how to go to private endpoints for the service bus.":::
+ :::image type="content" source="./media/functions-create-vnet/3-navigate-private-endpoint-service-bus.png" alt-text="Screenshot of how to go to private endpoints for the Service Bus.":::
1. On the **Basics** tab, use the private endpoint settings shown in the following table.
Create the private endpoint to lock down your service bus:
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.ServiceBus/namespaces | The resource type for the service bus. |
- | **Resource** | myServiceBus | The service bus you created earlier in the tutorial. |
- | **Target subresource** | namespace | The private endpoint that will be used for the namespace from the service bus. |
+ | **Resource type** | Microsoft.ServiceBus/namespaces | The resource type for the Service Bus. |
+ | **Resource** | myServiceBus | The Service Bus you created earlier in the tutorial. |
+ | **Target subresource** | namespace | The private endpoint that will be used for the namespace from the Service Bus. |
1. On the **Configuration** tab, for the **Subnet** setting, choose **default**. 1. Select **Review + create**. After validation finishes, select **Create**.
+1. After the private endpoint is created, return to the **Firewalls and virtual networks** section of your Service Bus namespace.
+1. Ensure **Selected networks** is selected.
+1. Select **+ Add existing virtual network** to add the recently created virtual network.
+1. On the **Add networks** tab, use the network settings from the following table:
+
+ | Setting | Suggested value | Description|
+ ||--||
+ | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Virtual networks** | myVirtualNet | The name of the virtual network to which your function app will connect. |
+ | **Subnets** | functions | The name of the subnet to which your function app will connect. |
-Resources in the virtual network can now communicate with the service bus.
+1. Select **Add your client IP address** to give your current client IP access to the namespace.
+ > [!NOTE]
+ > Allowing your client IP address is necessary to enable the Azure portal to [publish messages to the queue later in this tutorial](#test-your-locked-down-function-app).
+1. Select **Enable** to enable the service endpoint.
+1. Select **Add** to add the selected virtual network and subnet to the firewall rules for the Service Bus.
+1. Select **Save** to save the updated firewall rules.
+
+Resources in the virtual network can now communicate with the Service Bus using the private endpoint.
## Create a file share
Resources in the virtual network can now communicate with the service bus.
## Create a queue
-Create the queue where your Azure Functions service bus trigger will get events:
+Create the queue where your Azure Functions Service Bus trigger will get events:
-1. In your service bus, in the menu on the left, select **Queues**.
+1. In your Service Bus, in the menu on the left, select **Queues**.
-1. Select **Shared access policies**. For the purposes of this tutorial, name the list *queue*.
+1. Select **Queue**. For the purposes of this tutorial, provide the name *queue* as the name of the new queue.
- :::image type="content" source="./media/functions-create-vnet/6-create-queue.png" alt-text="Screenshot of how to create a service bus queue.":::
+ :::image type="content" source="./media/functions-create-vnet/6-create-queue.png" alt-text="Screenshot of how to create a Service Bus queue.":::
1. Select **Create**.
-## Get a service bus connection string
+## Get a Service Bus connection string
-1. In your service bus, in the menu on the left, select **Shared access policies**.
+1. In your Service Bus, in the menu on the left, select **Shared access policies**.
1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You'll need this connection string when you configure the app settings.
- :::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a service bus connection string.":::
+ :::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a Service Bus connection string.":::
## Integrate the function app
To use your function app with virtual networks, you need to join it to a subnet.
1. Under **Virtual Network**, select the virtual network you created earlier.
-1. Select the **functions** subnet you created earlier. Your function app is now integrated with your virtual network!
+1. Select the **functions** subnet you created earlier. Select **OK**. Your function app is now integrated with your virtual network!
:::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
To use your function app with virtual networks, you need to join it to a subnet.
| **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This storage connection string is from the [Get the storage account connection string](#get-the-storage-account-connection-string) section. This setting allows your function app to use the secure storage account for normal operations at runtime. | | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. This setting allows your function app to use the secure storage account for Azure Files, which is used during deployment. | | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. Use this setting with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
- | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your service bus. This storage connection string is from the [Get a service bus connection string](#get-a-service-bus-connection-string) section.|
+ | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your Service Bus. This storage connection string is from the [Get a Service Bus connection string](#get-a-service-bus-connection-string) section.|
| **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when your storage account is restricted to a virtual network. | | **WEBSITE_DNS_SERVER** | 168.63.129.16 | Create this app setting. When your app integrates with a virtual network, it will use the same DNS server as the virtual network. Your function app needs this setting so it can work with Azure DNS private zones. It's required when you use private endpoints. This setting and WEBSITE_VNET_ROUTE_ALL will send all outbound calls from your app into your virtual network. | | **WEBSITE_VNET_ROUTE_ALL** | 1 | Create this app setting. When your app integrates with a virtual network, it uses the same DNS server as the virtual network. Your function app needs this setting so it can work with Azure DNS private zones. It's required when you use private endpoints. This setting and WEBSITE_DNS_SERVER will send all outbound calls from your app into your virtual network. |
To use your function app with virtual networks, you need to join it to a subnet.
:::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable runtime-driven scaling for Azure Functions.":::
-## Deploy a service bus trigger and HTTP trigger
+## Deploy a Service Bus trigger and HTTP trigger
-1. In GitHub, go to the following sample repository. It contains a function app and two functions, an HTTP trigger, and a service bus queue trigger.
+1. In GitHub, go to the following sample repository. It contains a function app and two functions, an HTTP trigger, and a Service Bus queue trigger.
<https://github.com/Azure-Samples/functions-vnet-tutorial>
To use your function app with virtual networks, you need to join it to a subnet.
| | - | - | | **Source** | GitHub | You should have created a GitHub repository for the sample code in step 2. | | **Organization** | myOrganization | The organization your repo is checked into. It's usually your account. |
- | **Repository** | myRepo | The repository you created for the sample code. |
+ | **Repository** | functions-vnet-tutorial | The repository forked from https://github.com/Azure-Samples/functions-vnet-tutorial. |
| **Branch** | main | The main branch of the repository you created. | | **Runtime stack** | .NET | The sample code is in C#. |
+ | **Version** | .NET Core 3.1 | The runtime version. |
1. Select **Save**.
For more information, see the [private endpoint documentation](../private-link/p
1. Select **OK** to add the private endpoint.
-Congratulations! You've successfully secured your function app, service bus, and storage account by adding private endpoints!
+Congratulations! You've successfully secured your function app, Service Bus, and storage account by adding private endpoints!
### Test your locked-down function app
Here's an alternative way to monitor your function by using Application Insights
1. In the menu on the left, select **Live metrics**.
-1. Open a new tab. In your service bus, in the menu on the left, select **Queues**.
+1. Open a new tab. In your Service Bus, in the menu on the left, select **Queues**.
1. Select your queue.
Here's an alternative way to monitor your function by using Application Insights
1. Select **Send** to send the message.
- :::image type="content" source="./media/functions-create-vnet/17-send-service-bus-message.png" alt-text="Screenshot of how to send service bus messages by using the portal.":::
+ :::image type="content" source="./media/functions-create-vnet/17-send-service-bus-message.png" alt-text="Screenshot of how to send Service Bus messages by using the portal.":::
-1. On the **Live metrics** tab, you should see that your service bus queue trigger has fired. If it hasn't, resend the message from **Service Bus Explorer**.
+1. On the **Live metrics** tab, you should see that your Service Bus queue trigger has fired. If it hasn't, resend the message from **Service Bus Explorer**.
:::image type="content" source="./media/functions-create-vnet/18-hello-world.png" alt-text="Screenshot of how to view messages by using live metrics for function apps.":::
The following DNS zones were created in this tutorial:
## Next steps
-In this tutorial, you created a Premium function app, storage account, and service bus. You secured all of these resources behind private endpoints.
-
-Use the following links to learn more about the available networking features:
-
-> [!div class="nextstepaction"]
-> [Networking options in Azure Functions](./functions-networking-options.md)
+In this tutorial, you created a Premium function app, storage account, and Service Bus. You secured all of these resources behind private endpoints.
+Use the following links to learn more about Azure Functions networking options and private endpoints:
-> [!div class="nextstepaction"]
-> [Azure Functions Premium plan](./functions-premium-plan.md)
+- [Networking options in Azure Functions](./functions-networking-options.md)
+- [Azure Functions Premium plan](./functions-premium-plan.md)
+- [Service Bus private endpoints](../service-bus-messaging/private-link-service.md)
+- [Azure Storage private endpoints](../storage/common/storage-private-endpoints.md)
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
For example, if your rule [**Aggregation granularity**](#aggregation-granularity
## State and resolving alerts
-Log alerts can either be stateless or stateful (currently in preview).
+Log alerts can either be stateless or stateful (currently in preview when using the API).
Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired. In Log Analytics Workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
See this alert evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. |
| 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert state remains ACTIVE. |
-Stateful alerts fire once per incident and resolve. You can set this using **Automatically resolve alerts** in the alert details section.
+Stateful alerts fire once per incident and resolve. When creating new or updating existing log alert rules, add the `autoMitigate` flag (type `Boolean`) with the value `true` under the `properties` section. You can use this feature in these API versions: `2018-04-16` and `2020-05-01-preview`.
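For illustration, here is a minimal sketch of the relevant rule fragment; the display name is a placeholder, and the other required rule properties (such as `source`, `schedule`, and `action`) are omitted:

```json
{
  "properties": {
    "displayName": "<your-log-alert-rule-name>",
    "enabled": "true",
    "autoMitigate": true
  }
}
```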
## Location selection in log alerts
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-schema.md
Title: Azure Resource Logs supported services and schemas description: Understand the supported services and event schema for Azure resource logs. Previously updated : 04/07/2020 Last updated : 05/10/2021 # Common and service-specific schema for Azure Resource Logs
The schema for resource logs varies depending on the resource and log category.
| Kubernetes Service |[Azure Kubernetes Logging](../../aks/view-control-plane-logs.md#log-event-schema) | | Load Balancer |[Log analytics for Azure Load Balancer](../../load-balancer/load-balancer-monitor-log.md) | | Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) |
+| Media Services | [Media services monitoring schemas](../../media-services/latest/monitoring/monitor-media-services-data-reference.md#schemas) |
| Network Security Groups |[Log analytics for network security groups (NSGs)](../../virtual-network/virtual-network-nsg-manage-log.md) | | Power BI Dedicated | [Logging for Power BI Embedded in Azure](/power-bi/developer/azure-pbie-diag-logs) | | Recovery Services | [Data Model for Azure Backup](../../backup/backup-azure-reports-data-model.md)|
The schema for resource logs varies depending on the resource and log category.
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
Key rotation has two modes:
All your data remains accessible after the key rotation operation, since data is always encrypted with the Account Encryption Key (AEK), while the AEK is now encrypted with your new Key Encryption Key (KEK) version in Key Vault.
-## Customer-managed key for saved queries
+## Customer-managed key for saved queries and log alerts
-The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log-alerts* queries encrypted with your key in your own storage account when connected to your workspace.
+The query language used in Log Analytics is expressive and can contain sensitive information in comments you add to queries or in the query syntax. Some organizations require that such information is kept protected under a Customer-managed key policy, and you need to save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log alerts* queries encrypted with your key in your own storage account when connected to your workspace.
> [!NOTE] > Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key (MMK) in the following scenarios regardless Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
-When you Bring Your Own Storage (BYOS) and link it to your workspace, the service uploads *saved-searches* and *log-alerts* queries to your storage account. That means that you control the storage account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md) either using the same key that you use to encrypt data in Log Analytics cluster, or a different key. You will, however, be responsible for the costs associated with that storage account.
+When you Bring Your Own Storage (BYOS) and link it to your workspace, the service uploads *saved-searches* and *log alerts* queries to your storage account. That means that you control the storage account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), either using the same key that you use to encrypt data in your Log Analytics cluster, or a different key. You will, however, be responsible for the costs associated with that storage account.
**Considerations before setting Customer-managed key for queries** * You need to have 'write' permissions to both your workspace and Storage Account
When you Bring Your Own Storage (BYOS) and link it to your workspace, the servic
* The *saved searches* in storage are considered service artifacts and their format may change * Existing *saved searches* are removed from your workspace. Copy any *saved searches* that you need before the configuration. You can view your *saved-searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch) * Query history isn't supported and you won't be able to see queries that you ran
-* You can link a single storage account to workspace for the purpose of saving queries, but is can be used fro both *saved-searches* and *log-alerts* queries
+* You can link a single storage account to a workspace for the purpose of saving queries, but it can be used for both *saved-searches* and *log alerts* queries
* Pin to dashboard isn't supported
+* Fired log alerts won't contain search results or the alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.
**Configure BYOS for saved-searches queries**
Content-type: application/json
After the configuration, any new *saved search* query will be saved in your storage.
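As a hedged sketch of that configuration call (the resource IDs are placeholders, and the exact resource path and API version are assumptions to verify against the REST reference), linking a storage account for *saved-searches* queries might look like:

```
PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/linkedStorageAccounts/Query?api-version=2020-08-01
Content-type: application/json

{
  "properties": {
    "storageAccountIds": [
      "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
    ]
  }
}
```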
-**Configure BYOS for log-alerts queries**
+**Configure BYOS for log alerts queries**
-Link a storage account for *Alerts* to your workspace -- *log-alerts* queries are saved in your storage account.
+Link a storage account for *Alerts* to your workspace -- *log alerts* queries are saved in your storage account.
# [Azure portal](#tab/portal)
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-collector-api.md
Use this format to encode the **SharedKey** signature string:
``` StringToSign = VERB + "\n" + Content-Length + "\n" +
- Content-Type + "\n" +
- x-ms-date + "\n" +
+ Content-Type + "\n" +
+ "x-ms-date:" + x-ms-date + "\n" +
"/api/logs"; ```
While the Data Collector API should cover most of your needs to collect free-for
## Next steps - Use the [Log Search API](./log-query-overview.md) to retrieve data from the Log Analytics workspace. -- Learn more about how [create a data pipeline with the Data Collector API](create-pipeline-datacollector-api.md) using Logic Apps workflow to Azure Monitor.
+- Learn more about how to [create a data pipeline with the Data Collector API](create-pipeline-datacollector-api.md) using a Logic Apps workflow in Azure Monitor.
azure-percept How To Set Up Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-set-up-over-the-air-updates.md
Keep your Azure Percept DK secure and up to date using over-the-air updates. In
1. Go to the [Azure portal](https://portal.azure.com) and sign in with the Azure account you are using with Azure Percept.
-1. In the search bar at the top of the page, enter **Device Update for IoT Hub**.
+1. In the search bar at the top of the page, enter **Device Update for IoT Hubs**.
-1. Select **Device Update for IoT Hub** when it appears in the search bar.
+1. Select **Device Update for IoT Hubs** when it appears in the search bar.
-1. Click the **+Add** button in the upper-left portion of the page.
+1. Select the **+Add** button in the upper-left portion of the page.
1. Select the **Azure Subscription** and **Resource Group** associated with your Azure Percept device and its IoT Hub. 1. Specify a **Name** and **Location** for your Device Update Account.
+1. Check the box that says **Assign Device Update Administrator role.**
+ 1. Review the details and select **Review + Create**.
+1. Select the **Create** button.
+ 1. Once deployment is complete, click **Go to resource**. ## Create a Device Update Instance
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-over-the-air.md
Follow this guide to learn how to update the OS and firmware of the carrier boar
> [!NOTE] > If you have already imported the update, you can skip directly to **Create a device update group**.
-1. [Download the latest manifest file (.json)](https://go.microsoft.com/fwlink/?linkid=2155625) and [update file (.swu)](https://go.microsoft.com/fwlink/?linkid=2161538) for your Azure Percept device.
+1. Determine which [manifest and update package](./how-to-select-update-package.md) is appropriate for your dev kit.
1. Navigate to the Azure IoT Hub that you are using for your Azure Percept device. On the left-hand menu panel, select **Device Updates** under **Automatic Device Management**.
Follow this guide to learn how to update the OS and firmware of the carrier boar
1. Select **+ Import New Update** below the **Ready to Deploy** header.
-1. Click on the boxes under **Select Import Manifest File** and **Select Update Files** to select your manifest file (.json) and update file (.swu).
+1. Select the boxes under **Select Import Manifest File** and **Select Update Files** to select your manifest file (.json) and update file (.swu).
-1. Select the folder icon or text box under **Select a storage container** and select the appropriate storage account. If you've already created a storage container, you may re-use it. Otherwise, select **+ Container** to create a new storage container for OTA updates. Select the container you wish to use and click **Select**.
+1. Select the folder icon or text box under **Select a storage container** and select the appropriate storage account. If you've already created a storage container, you may reuse it. Otherwise, select **+ Container** to create a new storage container for OTA updates. Select the container you wish to use and click **Select**.
1. Select **Submit** to start the import process. Due to the image size, the submission process may take up to 5 minutes. > [!NOTE] > You may be asked to add a Cross Origin Request (CORS) rule to access the selected storage container. Select **Add rule and retry** to proceed.
-1. When the import process begins, you will be redirected to the **Import History** tab of the **Device Updates** page. Click **Refresh** to monitor progress while the import process is completed. Depending on the size of the update, this may take a few minutes or longer (during peak times, the import service may to take up to 1 hour).
+1. When the import process begins, you will be redirected to the **Import History** tab of the **Device Updates** page. Click **Refresh** to monitor progress while the import process is completed. Depending on the size of the update, this may take a few minutes or longer (during peak times, the import service may take up to 1 hour).
1. When the **Status** column indicates that the import has succeeded, select the **Ready to Deploy** tab and click **Refresh**. You should now see your imported update in the list.
Group Tag Requirements:
- You can add any value to your tag except for "Uncategorized", which is a reserved value. - Tag value cannot exceed 255 characters. - Tag value can only contain these special characters: ".", "-", "_", "~".-- Tag and group names are case sensitive.
+- Tag and group names are case-sensitive.
- A device can only have one tag. Any subsequent tag added to the device will override the previous tag. - A device can only belong to one group.
Group Tag Requirements:
1. From **IoT Edge** on the left navigation pane, find your Azure Percept DK and navigate to its **Device Twin**.
- 1. Add a new **Device Update for IoT Hub** tag value as shown below (```<CustomTagValue>``` refers to your tag value/name, e.g. AzurePerceptGroup1). Learn more about device twin [JSON document tags](../iot-hub/iot-hub-devguide-device-twins.md#device-twins).
+ 1. Add a new **Device Update for IoT Hub** tag value as shown below (```<CustomTagValue>``` refers to your tag value/name, for example, AzurePerceptGroup1). Learn more about device twin [JSON document tags](../iot-hub/iot-hub-devguide-device-twins.md#device-twins).
```
"tags": {
    "ADUGroup": "<CustomTagValue>"
}
```
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-via-usb.md
Title: Update your Azure Percept DK over a USB connection
-description: Learn how to update the Azure Percept DK over a USB connection
+ Title: Update your Azure Percept DK over a USB-C cable connection
+description: Learn how to update the Azure Percept DK over a USB-C cable connection
Last updated 03/18/2021-+
-# How to update Azure Percept DK over a USB connection
+# How to update Azure Percept DK over a USB-C cable connection
-Although using over-the-air (OTA) updates is the best method of keeping your dev kit's operating system and firmware up to date, there are scenarios where updating (or "flashing") the dev kit over a USB connection is necessary:
--- An OTA update is not possible due to connectivity or other technical issues-- The device needs to be reset back to its factory state-
-This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection.
+This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here is an overview of what you will be doing during this procedure.
+1. Download the update package to a host computer
+1. Run the command that transfers the update package to the dev kit
+1. Set the dev kit into "USB mode" (using SSH) so that it can be detected by the host computer and receive the update package
+1. Connect the dev kit to the host computer via the USB-C cable
+1. Wait for the update to complete
> [!WARNING] > Updating your dev kit over USB will delete all existing data on the device, including AI models and containers. > > Follow all instructions in order. Skipping steps could put your dev kit in an unusable state. + ## Prerequisites - An Azure Percept DK
This guide will show you how to successfully update your dev kit's operating sys
## Download software tools and update files
-1. [NXP UUU tool](https://github.com/NXPmicro/mfgtools/releases). Download the **Latest Release** uuu.exe file (for Windows) or the uuu file (for Linux) under the **Assets** tab.
-
-1. [7-Zip](https://www.7-zip.org/). This software will be used for extracting the raw image file from its XZ compressed file. Download and install the appropriate .exe file.
+1. [NXP UUU tool](https://github.com/NXPmicro/mfgtools/releases). Download the **Latest Release** uuu.exe file (for Windows) or the uuu file (for Linux) under the **Assets** tab. UUU is a tool created by NXP used to update software on NXP dev boards.
1. [Download the update files](https://go.microsoft.com/fwlink/?linkid=2155734). They are all contained in a zip file that you will extract in the next section. 1. Ensure all three build artifacts are present:
- - Azure-Percept-DK-*&lt;version number&gt;*.raw.xz
+ - Azure-Percept-DK-*&lt;version number&gt;*.raw
- fast-hab-fw.raw - emmc_full.txt
This guide will show you how to successfully update your dev kit's operating sys
## Update your device
-1. [SSH into your dev kit](./how-to-ssh-into-percept-dk.md).
+This procedure uses the dev kit's single USB-C port for updating. If your computer has a USB-C port, you can disconnect the Azure Percept Vision device and use that cable. If your computer only has a USB-A port, disconnect the Azure Percept Vision device from the dev kit's USB-C port and connect a USB-C to USB-A cable (sold separately) to the dev kit and host computer.
+
+1. Open a Windows command prompt (Start > cmd) or a Linux terminal and **navigate to the folder where the update files and UUU tool are stored**.
-1. Next, open a Windows command prompt (**Start** > **cmd**) or a Linux terminal and navigate to the folder where the update files and UUU tool are stored. Enter the following command in the command prompt or terminal to prepare your computer to receive a flashable device:
+1. Enter the following command in the command prompt or terminal.
- Windows:
This guide will show you how to successfully update your dev kit's operating sys
sudo ./uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw ```
-1. Disconnect the Azure Percept Vision device from the carrier board's USB-C port.
+1. The command prompt window will display a message that says "**Waiting for Known USB Device to Appear...**" The UUU tool is now waiting for the dev kit to be detected by the host computer. It is now OK to proceed to the next steps.
+
+1. Connect the supplied USB-C cable to the dev kit's USB-C port and to the host computer's USB-C port. If your computer only has a USB-A port, connect a USB-C to USB-A cable (sold separately) to the dev kit and host computer.
-1. Connect the supplied USB-C cable to the carrier board's USB-C port and to the host computer's USB-C port. If your computer only has a USB-A port, connect a USB-C to USB-A cable (sold separately) to the carrier board and host computer.
+1. Connect to your dev kit via SSH. If you need help to SSH, [follow these instructions](./how-to-ssh-into-percept-dk.md).
-1. In the SSH client prompt, enter the following commands:
+1. In the SSH terminal, enter the following commands:
1. Set the device to USB update mode:
This guide will show you how to successfully update your dev kit's operating sys
> [!NOTE] > After updating, your device will be reset to factory settings and you will lose your Wi-Fi connection and SSH login.
-1. Once the update is complete, power off the carrier board. Unplug the USB cable from the PC.
+1. Once the update is complete, power off the dev kit. Unplug the USB cable from the PC.
## Next steps
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/vision-solution-troubleshooting.md
To update your TelemetryIntervalNeuralNetworkMs value, follow these steps:
:::image type="content" source="./media/vision-solution-troubleshooting/module-page-inline.png" alt-text="Screenshot of module page." lightbox= "./media/vision-solution-troubleshooting/module-page.png":::
-1. Scroll down to **properties**. Note that the properties "Running" and "Logging" are not active at this time.
+1. Scroll down to **properties**. The properties "Running" and "Logging" are not active at this time.
:::image type="content" source="./media/vision-solution-troubleshooting/module-identity-twin-inline.png" alt-text="Screenshot of module twin properties." lightbox= "./media/vision-solution-troubleshooting/module-identity-twin.png":::
To update your TelemetryIntervalNeuralNetworkMs value, follow these steps:
View your device's RTSP video stream in [Azure Percept Studio](./how-to-view-video-stream.md) or [VLC media player](https://www.videolan.org/vlc/https://docsupdatetracker.net/index.html).
-To open the RTSP stream in VLC media player, go to **Media** -> **Open network stream** -> **rtsp://[device IP address]/result**.
+To open the RTSP stream in VLC media player, go to **Media** -> **Open network stream** -> **rtsp://[device IP address]:8554/result**.
## Next steps
azure-sql Arm Templates Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/arm-templates-content-guide.md
ms.devlang: --++ Last updated 02/04/2019
The following table includes links to Azure Resource Manager templates for Azure
| [SQL Managed Instance with P2S connection](https://github.com/Azure/azure-quickstart-templates/tree/master/201-sqlmi-new-vnet-w-point-to-site-vpn) | This deployment will create an Azure virtual network with two subnets, `ManagedInstance` and `GatewaySubnet`. SQL Managed Instance will be deployed in the ManagedInstance subnet. A virtual network gateway will be created in the `GatewaySubnet` subnet and configured for Point-to-Site VPN connection. | | [SQL Managed Instance with a virtual machine](https://github.com/Azure/azure-quickstart-templates/tree/master/201-sqlmi-new-vnet-w-jumpbox) | This deployment will create an Azure virtual network with two subnets, `ManagedInstance` and `Management`. SQL Managed Instance will be deployed in the `ManagedInstance` subnet. A virtual machine with the latest version of SQL Server Management Studio (SSMS) will be deployed in the `Management` subnet. | -+
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 04/29/2021 Last updated : 05/10/2021 # Use auto-failover groups to enable transparent and coordinated failover of multiple databases
When you set up a failover group between primary and secondary SQL Managed Insta
## Upgrading or downgrading a primary database
-You can upgrade or downgrade a primary database to a different compute size (within the same service tier, not between General Purpose and Business Critical) without disconnecting any secondary databases. When upgrading, we recommend that you upgrade all of the secondary databases first, and then upgrade the primary. When downgrading, reverse the order: downgrade the primary first, and then downgrade all of the secondary databases. When you upgrade or downgrade the database to a different service tier, this recommendation is enforced.
+You can upgrade or downgrade a primary database to a different compute size without disconnecting any secondary databases. When upgrading, we recommend that you upgrade all of the secondary databases first, and then upgrade the primary. When downgrading, reverse the order: downgrade the primary first, and then downgrade all of the secondary databases. When you upgrade or downgrade the database to a different service tier, this recommendation is enforced.
This sequence is recommended specifically to avoid the problem where the secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
Details of how traffic shall be migrated to new Gateways in specific regions are
| India Central | 104.211.96.159, 104.211.86.30 , 104.211.86.31 | | India South | 104.211.224.146 | | India West | 104.211.160.80, 104.211.144.4 |
-| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5 |
+| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32 |
| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 |
-| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23 |
+| Korea Central | 52.231.32.42, 52.231.17.22, 52.231.17.23, 20.44.24.32, 20.194.64.33 |
| Korea South | 52.231.200.86, 52.231.151.96 | | North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 | | North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 |
Details of how traffic shall be migrated to new Gateways in specific regions are
| UAE Central | 20.37.72.64 | | UAE North | 65.52.248.0 | | UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 |
-| UK West | 51.141.8.11 |
+| UK West | 51.141.8.11, 51.140.208.96, 51.140.208.97 |
| West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 | | West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | | West US | 104.42.238.205, 13.86.216.196 |
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
The most up-to-date information will be maintained in the [Azure SQL Database ga
## Status updates # [In progress](#tab/in-progress-ip)
+## June 2021
+New SQL Gateways are being added to the following regions:
+- UK West: 51.140.208.96, 51.140.208.97
+- Korea Central: 20.44.24.32, 20.194.64.33
+- Japan East: 13.78.104.32
+
+These SQL Gateways shall start accepting customer traffic on 1 June 2021.
+ ## May 2021 New SQL Gateways are being added to the following regions: - UK South: 51.140.144.36, 51.105.72.32
azure-sql Migrate Sql Server Users To Instance Transact Sql Tsql Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-sql-server-users-to-instance-transact-sql-tsql-tutorial.md
Previously updated : 10/30/2019 Last updated : 05/10/2021 # Tutorial: Migrate Windows users and groups in a SQL Server instance to Azure SQL Managed Instance using T-SQL DDL syntax [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-> [!NOTE]
-> The syntax used to migrate users and groups to SQL Managed Instance in this article is in **public preview**.
- This article takes you through the process of migrating your on-premises Windows users and groups in your SQL Server to Azure SQL Managed Instance using T-SQL syntax. In this tutorial, you learn how to:
azure-sql Replication Transactional Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/replication-transactional-overview.md
Previously updated : 04/20/2020 Last updated : 05/10/2020 # Transactional replication with Azure SQL Managed Instance (Preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
In this configuration, a database in Azure SQL Database or Azure SQL Managed Ins
## With failover groups
-[Active geo-replication](../database/active-geo-replication-overview.md) is not supported with a SQL Managed Instance using transactional replication. Instead of active geo-replication, use [Auto-failover groups](../database/auto-failover-group-overview.md), but note that the publication has to be [manually deleted](transact-sql-tsql-differences-sql-server.md#replication) from the primary managed instance and re-created on the secondary SQL Managed Instance after failover.
- If a **publisher** or **distributor** SQL Managed Instance is in a [failover group](../database/auto-failover-group-overview.md), the SQL Managed Instance administrator must clean up all publications on the old primary and reconfigure them on the new primary after a failover occurs. The following activities are needed in this scenario: 1. Stop all replication jobs running on the database, if there are any.
If a **publisher** or **distributor** SQL Managed Instance is in a [failover gro
EXEC sp_dropdistributor 1,1 ```
-If geo-replication is enabled on a **subscriber** instance in a failover group, the publication should be configured to connect to the failover group listener endpoint for the subscriber managed instance. In the event of a failover, subsequent action by the managed instance administrator depends on the type of failover that occurred:
+If a **subscriber** SQL Managed Instance is in a failover group, the publication should be configured to connect to the failover group listener endpoint for the subscriber managed instance. In the event of a failover, subsequent action by the managed instance administrator depends on the type of failover that occurred:
- For a failover with no data loss, replication will continue working after failover. - For a failover with data loss, replication will work as well. It will replicate the lost changes again.
azure-vmware Manage Dhcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/manage-dhcp.md
Title: Manage DHCP for Azure VMware Solution
description: Learn how to create and manage DHCP for your Azure VMware Solution private cloud. Previously updated : 11/09/2020 Last updated : 05/10/2021 # Manage DHCP for Azure VMware Solution
Applications and workloads running in a private cloud environment require DHCP s
- If you're using a third-party external DHCP server in your network, you'll need to [create DHCP relay service](#create-dhcp-relay-service). When you create a relay to a DHCP server, whether using NSX-T or a third-party to host your DHCP server, you'll need to specify the DHCP IP address range. >[!IMPORTANT]
->DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Send DHCP requests to the on-premises DHCP server](#send-dhcp-requests-to-the-on-premises-dhcp-server) procedure.
+>DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Send DHCP requests to a non-NSX-T based DHCP server](#send-dhcp-requests-to-a-non-nsx-t-based-dhcp-server) procedure.
## Create a DHCP server
If you want to use a third-party external DHCP server, you'll need to create a D
:::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="DHCP server pool assigned to segment" border="true":::
+## Send DHCP requests to a non-NSX-T based DHCP server
+If you want to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server, you'll create a new security segment profile.
-## Send DHCP requests to the on-premises DHCP server
+>[!IMPORTANT]
+>VMs that run as DHCP servers on the same L2 segment are blocked from serving client requests. Because of this, it's important to follow the steps in this section.
-If you want to send DHCP requests from your Azure VMware Solution VMs on the L2 extended segment to the on-premises DHCP server, you'll create a security segment profile.
+1. (Optional) If you need to locate the segment name of the L2 extension:
-1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
+ 1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
-1. Select **Network Extension** under **Services**.
+ 1. Select **Network Extension** under **Services**.
-1. Select the network extension you want to support DHCP requests from Azure VMware Solution to on-premises.
+ 1. Select the network extension you want to support DHCP requests from Azure VMware Solution to on-premises.
-1. Take note of the destination network name.
+ 1. Take note of the destination network name.
- :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
+ :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
-1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
+1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
1. Select **Add Segment Profile** and then **Segment Security**. :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png":::- 1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF. :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::-
-1. Remove all the MAC addresses, if any, under the **BPDU Filter Allow List**. Then select **Save**.
-
- :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-allow-list.png" alt-text="Screenshot showing MAC addresses in the BPDU Filter Allow List":::
-
-1. Under **Networking** > **Segments** > **Segments**, in the search area, enter the definition network name.
-
- :::image type="content" source="media/manage-dhcp/networking-segments-search.png" alt-text="Screenshot of the Networking > Segments filter field":::
-
-1. Select the vertical ellipsis on the segment name and select **Edit**.
-
- :::image type="content" source="media/manage-dhcp/edit-network-segment.png" alt-text="Screenshot of the edit button for the segment" lightbox="media/manage-dhcp/edit-network-segment.png":::
-
-1. Change the **Segment Security** to the segment profile you created earlier.
-
+
:::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
-## Next steps
+## Next steps
Learn more about [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
azure-vmware Reset Vsphere Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reset-vsphere-credentials.md
Title: Reset vSphere credentials for Azure VMware Solution description: Learn how to reset vSphere credentials for your Azure VMware Solution private cloud and ensure the HCX connector has the latest vSphere credentials. Previously updated : 03/31/2021 Last updated : 05/10/2021 # Reset vSphere credentials for Azure VMware Solution
-In this article, we'll walk through the steps to reset the vCenter Server and NSX-T Manager credentials for your Azure VMware Solution private cloud. This will allow you to ensure the HCX connector has the latest vCenter Server credentials.
+This article walks you through the steps to reset the vCenter Server and NSX-T Manager credentials for your Azure VMware Solution private cloud. It allows you to ensure the HCX connector has the latest vCenter Server credentials.
In addition to this how-to, you can also view the video for [resetting the vCenter CloudAdmin & NSX-T Admin password](https://youtu.be/cK1qY3knj88).
-## Reset your Azure VMware Solution credentials
+## Prerequisites
+
+If you use your cloudadmin credentials for connected services like HCX, vRealize Orchestrator, vRealize Operations Manager, or VMware Horizon, your connections will stop working once you update your password. Stop these services before initiating the password rotation. If you don't stop these services, you'll experience temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
- First let's reset your Azure VMare Solution components credentials. Your vCenter Server CloudAdmin and NSX-T admin credentials don't expire; however, you can follow these steps to generate new passwords for these accounts.
+## Reset your Azure VMware Solution credentials
-> [!NOTE]
-> If you use your CloudAdmin credentials for connected services like HCX, vRealize Orchestrator, vRealizae Operations Manager or VMware Horizon, your connections will stop working once you update your password. These services should be stopped before initiating the password rotation. Failure to do so may result in temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services will continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
+In this step, you'll reset the credentials for your Azure VMware Solution components. Although your vCenter and NSX-T credentials don't expire, you can generate new passwords for these accounts.
1. From the Azure portal, open an Azure Cloud Shell session.
In addition to this how-to, you can also view the video for [resetting the vCent
az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview" ```
-## Ensure the HCX connector has your latest vCenter Server credentials
+## Verify the HCX Connector has the latest vCenter Server credentials
-Now that you've reset your credentials, follow these steps to ensure the HCX connector has your updated credentials.
+In this step, you'll verify that the HCX connector has the updated credentials.
1. Once your password is changed, go to the on-premises HCX connector web interface using https://{ip of the HCX connector appliance}:443. Be sure to use port 443. Log in using your new credentials.
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-backup-support-matrix.md
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **GA:**<br> **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China North, China East2, China North 2 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
-| **OS versions** | SLES 12 with SP2, SP3,SP4 and SP5; SLES 15 with SP0, SP1, SP2 <br><br> RHEL 7.4, 7.6, 7.7, 8.1 & 8.2 | |
-| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 53 (validated for encryption enabled scenarios as well) | |
+| **OS versions** | SLES 12 with SP2, SP3,SP4 and SP5; SLES 15 with SP0, SP1, SP2 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1 & 8.2 | |
+| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 55 (validated for encryption enabled scenarios as well) | |
| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't fail over to the secondary node automatically. Configuring backup should be done separately for each node. | | **HANA Instances** | A single SAP HANA instance on a single Azure VM - scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | | **HANA database types** | Single Database Container (SDC) ON 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x |
blockchain Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/migration-guide.md
+
+ Title: Azure Blockchain Service retirement notification and guidance
+description: Migrate Azure Blockchain Service to a managed or self-managed blockchain offering
Last updated : 05/10/2021++
+#Customer intent: As a network operator, I want to migrate Azure Blockchain Service to an alternative offering so that I can use blockchain after Azure Blockchain Service retirement.
++
+# Migrate Azure Blockchain Service
+
+You can migrate ledger data from Azure Blockchain Service to an alternate offering.
+
+> [!IMPORTANT]
+> On **September 10, 2021**, Azure Blockchain will be retired. Please migrate ledger data from Azure Blockchain Service to an alternative offering based on your development status in production or evaluation.
+
+## Evaluate alternatives
+
+The first step when planning a migration is to evaluate alternative offerings. Evaluate the following alternatives based on whether you are in production or in an evaluation phase.
+
+### Production or pilot phase
+
+If you have already deployed and developed a blockchain solution that is in the production or pilot phase, consider the following alternatives.
+
+#### Quorum Blockchain Service
+
+Quorum Blockchain Service is a managed offering by ConsenSys on Azure that supports Quorum as ledger technology.
+
+- **Managed offering** - Quorum Blockchain Service has no extra management overhead compared to Azure Blockchain Service.
+- **Ledger technology** - Based on ConsenSys Quorum which is an enhanced version of the GoQuorum Ledger technology used in Azure Blockchain Service. No new learning is required. For more information, see the [Consensys Quorum FAQ](https://consensys.net/quorum/faq).
+- **Continuity** - You can migrate your existing data onto Quorum Blockchain Service by ConsenSys. For more information, see [Export data from Azure Blockchain Service](#export-data-from-azure-blockchain-service).
+
+For more information, see [Quorum Blockchain Service](https://consensys.net/QBS).
+
+#### Azure VM-based deployment
+
+There are several Azure Resource Manager templates you can use to deploy blockchain on IaaS VMs.
+
+- **Ledger technology** - You can continue to use Quorum ledger technology including the new ConsenSys Quorum.
+- **Self-management** - Once deployed, you manage the infrastructure and blockchain stack.
+
+### New deployment or evaluation phase
+
+If you are starting to develop a new solution or are in an evaluation phase, consider the following alternatives based on your scenario requirements.
+
+- [Quorum template from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.quorum-dev-quickstart?tab=Overview)
+- [Besu template from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.hyperledger-besu-quickstart?tab=Overview)
+
+### How to migrate to an alternative
+
+To migrate a production workload, first [export your data from Azure Blockchain Service](#export-data-from-azure-blockchain-service). Once you have a copy of your data, you can transition this data to your preferred alternative.
+
+The recommended migration destination is ConsenSys Quorum Blockchain Service. To onboard to this service, register at the [Quorum Blockchain Service](https://consensys.net/QBS) page.
+
+To self-manage your blockchain solution using virtual machines in Azure, see [Azure VM-based Quorum guidance](#azure-vm-based-quorum-guidance) to set up transaction and validator nodes.
+## Export data from Azure Blockchain Service
+
+Based on your current development state, you can either opt to use existing ledger data on Azure Blockchain Service or start a new network and use the solution of your choice. We recommend creating a new consortium based on a solution of your choice in all scenarios where you do not need or intend to use existing ledger data on Azure Blockchain Service.
+
+### Open support case
+
+If you have a paid support plan, open a Microsoft Support ticket to pause the consortium and export your blockchain data.
+
+1. Use the Azure portal to open a support ticket. In *Problem description*, enter the following details:
+
+ ![Support ticket problem description form in the Azure portal](./media/migration-guide/problem-description.png)
+
+ | Field | Response |
+ |-| |
+ | Issue type | Technical |
+ | Service | Azure Blockchain Service - Preview |
+ | Summary | Request data for migration |
+ | Problem type | other |
+
+1. In *Additional details*, include the following details:
+
+ ![Support ticket additional details form in the Azure portal](./media/migration-guide/additional-details.png)
+
+ - Subscription ID or Azure Resource Manager resource ID
+ - Tenant
+ - Consortium name
+ - Region
+ - Member name
+ - Preferred Datetime for initiating migration
+
+If your consortium has multiple members, each member is required to open a separate support ticket for their respective member data.
+
+### Pause consortium
+
+You are required to coordinate the data export with the members of the consortium, since the consortium is paused during the export and transactions during this time will fail.
+
+The Azure Blockchain Service team pauses the consortium, exports a snapshot of the data, and makes the data available in an encrypted format for download through a short-lived SAS URL. The consortium is resumed after the snapshot is taken.
+
+> [!IMPORTANT]
+> You should stop all applications initiating new blockchain transactions on the network. Active applications may lead to data loss or your original and migrated networks being out of sync.
+
+### Download data
+
+Download the data using the Microsoft Support provided short-lived SAS URL link.
+
+> [!IMPORTANT]
+> You are required to download your data within seven days.
+
+Decrypt the data using the API access key. You can [get the key from the Azure portal](configure-transaction-nodes.md#access-keys) or [through the REST API](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys).
+
+> [!CAUTION]
+> Only the default transaction node API access key 1 is used to encrypt all the node data of that member.
+>
+> Do not reset the API access key during the migration.
+
+You can use the data with either ConsenSys Quorum Blockchain Service or your IaaS VM-based deployment.
+
+For ConsenSys Quorum Blockchain Service migration, contact ConsenSys at [qbsmigration@consensys.net](mailto:qbsmigration@consensys.net).
+
+For using the data with your IaaS VM-based deployment, follow the steps in the [Azure VM based Quorum guidance](#azure-vm-based-quorum-guidance) section of this article.
+
+### Delete resources
+
+Once you have completed your data copy, it is recommended that you delete the Azure Blockchain member resources. You will continue to get billed while these resources exist.
+
+## Azure VM-based Quorum guidance
+
+Use the following steps to create transaction nodes and validator nodes.
+
+### Transaction node
+
+A transaction node has two components. Tessera is used for the private transactions and Geth is used for the Quorum application. Validator nodes require only the Geth component.
+
+#### Tessera
+
+1. Install Java 11. For example, `apt install default-jre`.
+1. Update paths in `tessera-config.json`. Change all references of `/working-dir/**` to `/opt/blockchain/data/working-dir/**`.
+1. Update the IP addresses of the other transaction nodes to their new IP addresses. HTTPS won't work since it is not enabled in the Tessera configuration. For information on how to configure TLS, see the [Tessera configure TLS](https://docs.tessera.consensys.net/en/stable/HowTo/Configure/TLS/) article.
+1. Update NSG rules to allow inbound connections to port 9000. A scripted sketch follows these steps.
+1. Run Tessera using the following command:
+
+ ```bash
+ java -Xms512M -Xmx1731M -Dlogback.configurationFile=/tessera/logback-tessera.xml -jar tessera.jar -configfile /opt/blockchain/data/working-dir/tessera-config.json > tessera.log 2>&1 &
+ ```
+
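+The NSG change in step 4 can also be scripted. Here's a hedged Azure CLI sketch with placeholder resource names:
+
+```bash
+# Placeholder names; adjust to your resource group and NSG.
+az network nsg rule create \
+  --resource-group <resource-group-name> \
+  --nsg-name <node-nsg-name> \
+  --name allow-tessera-9000 \
+  --priority 200 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --destination-port-ranges 9000
+```
+
+An analogous rule with `--destination-port-ranges 30303` covers the Geth NSG step in the next section.
+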
+#### Geth
+
+1. Update the IPs in the enode addresses in `/opt/blockchain/data/working-dir/dd/static-nodes.json`. Public IP addresses are allowed. A sketch of the entry format appears after the commands below.
+1. Make the same IP address changes under the StaticNodes key in `/geth/config.toml`.
+1. Update NSG rules to allow inbound connections to port 30303.
+1. Run Geth using the following commands:
+
+ ```bash
+ export NETWORK_ID='' # Get network ID from metadata. The network ID is the same for consortium.
+
+ PRIVATE_CONFIG=tm.ipc geth --config /geth/config.toml --datadir /opt/blockchain/data/working-dir/dd --networkid $NETWORK_ID --istanbul.blockperiod 5 --nodiscover --nousb --allow-insecure-unlock --verbosity 3 --txpool.globalslots 80000 --txpool.globalqueue 80000 --txpool.accountqueue 50000 --txpool.accountslots 50000 --targetgaslimit 700000000 --miner.gaslimit 800000000 --syncmode full --rpc --rpcaddr 0.0.0.0 --rpcport 3100 --rpccorsdomain '*' --rpcapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --ws --wsaddr 0.0.0.0 --wsport 3000 --wsorigins '*' --wsapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul
+ ```
+
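+For reference, here's a hedged sketch of the entries that steps 1 and 2 edit; the enode node key and IP address are placeholders, and entries in `static-nodes.json` use the same enode URL format:
+
+```toml
+# /geth/config.toml (sketch; substitute your node key and public IP)
+[Node.P2P]
+StaticNodes = [
+  "enode://<128-hex-char-node-key>@203.0.113.10:30303?discport=0"
+]
+```
+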
+### Validator Node
+
+Validator node steps are similar to the transaction node, except that the Geth startup command will have the additional flag `--mine`. Tessera is not started on a validator node. To run Geth without a paired Tessera, you pass `PRIVATE_CONFIG=ignore` in the Geth command. Run Geth using the following commands:
+
+```bash
+export NETWORK_ID=`jq '.APP_SETTINGS | fromjson | ."network-id"' env.json`
+
+PRIVATE_CONFIG=ignore geth --config /geth/config.toml --datadir /opt/blockchain/data/working-dir/dd --networkid $NETWORK_ID --istanbul.blockperiod 5 --nodiscover --nousb --allow-insecure-unlock --verbosity 3 --txpool.globalslots 80000 --txpool.globalqueue 80000 --txpool.accountqueue 50000 --txpool.accountslots 50000 --targetgaslimit 700000000 --miner.gaslimit 800000000 --syncmode full --rpc --rpcaddr 0.0.0.0 --rpcport 3100 --rpccorsdomain '*' --rpcapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --ws --wsaddr 0.0.0.0 --wsport 3000 --wsorigins '*' --wsapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --mine
+```
+
+## Upgrading Quorum
+
+Azure Blockchain Service may be running one of the following versions of Quorum. You can choose to use the same Quorum version or follow the steps below to use the latest version of ConsenSys Quorum.
+
+### Upgrade Quorum version 2.6.0 or 2.7.0 to ConsenSys 21.1.0
+
+Upgrading from Quorum version 2.6 or 2.7 is straightforward. Download and update using the following links.
+1. Download [ConsenSys Quorum and related binaries v21.1.0](https://github.com/ConsenSys/quorum/releases/tag/v21.1.0).
+1. Download the latest version of Tessera [tessera-app-21.1.0-app.jar](https://github.com/ConsenSys/tessera/releases/tag/tessera-21.1.0).
+
+### Upgrade Quorum version 2.5.0 to ConsenSys 21.1.0
+
+1. Download [ConsenSys Quorum and related binaries v21.1.0](https://github.com/ConsenSys/quorum/releases/tag/v21.1.0).
+1. Download the latest version of Tessera [tessera-app-21.1.0-app.jar](https://github.com/ConsenSys/tessera/releases/tag/tessera-21.1.0).
+For version 2.5.0, there are some minor genesis file changes. Make the following changes in the genesis file; an illustrative fragment follows these steps.
+
+1. The value `byzantiumBlock` was set to 1, but it cannot be less than `constantinopleBlock`, which is 0. Set the `byzantiumBlock` value to 0.
+1. Set `petersburgBlock` and `istanbulBlock` to a future block. This value should be the same across all nodes.
+1. (Optional) `ceil2Nby3Block` was incorrectly placed in the Azure Blockchain Service Quorum 2.5.0 version. It needs to be inside the `istanbul` config, with its value set to a future block. This value should be the same across all nodes.
+1. Run Geth to reinitialize the genesis block using the following command:
+
+ ```bash
+ geth --datadir "Data Directory Path" init "genesis file path"
+ ```
+
+1. Run Geth.
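+
+As an illustrative sketch of steps 1 through 3 above (the chain ID mirrors the sample network ID in the metadata table later in this article; the block numbers and `istanbul` values are placeholders and must be the same across all nodes):
+
+```json
+{
+  "config": {
+    "chainId": 543,
+    "byzantiumBlock": 0,
+    "constantinopleBlock": 0,
+    "petersburgBlock": 20000000,
+    "istanbulBlock": 20000000,
+    "istanbul": {
+      "epoch": 30000,
+      "policy": 0,
+      "ceil2Nby3Block": 20000000
+    }
+  }
+}
+```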
+
+## Exported data reference
+
+This section describes the metadata and folder structure to help you import the data into your IaaS VM deployment.
+
+### Metadata info
+
+| Name | Sample | Description |
+|--|--|--|
+| consortium_name | \<ConsortiumName\> | Consortium name (unique across Azure Blockchain Service). |
+| Consortium_Member_Count || Number of members in the consortium |
+| member_name | \<memberName\> | Blockchain member name (unique across Azure Blockchain Service). |
+| node_name | transaction-node | Node name (each member has multiple nodes). |
+| network_id | 543 | Geth network ID. |
+| is_miner | False | Is_Miner == true (Validator Node), Is_Miner == false (Transaction node) |
+| quorum_version | 2.7.0 | Version of Quorum |
+| tessera_version | 0.10.5 | Tessera version |
+| java_version | java-11-openjdk-amd64 | Java version Tessera uses |
+| CurrentBlockNumber | | Current block number for the blockchain network |
+
+## Migrated data folder structure
+
+At the top level, there are folders that correspond to each of the nodes of the members.
+
+- **Standard SKU** - Two validator nodes (validator-node-0 and validator-node-1)
+- **Basic SKU** - One validator node (validator-node-0)
+- **Transaction Node** - Default transaction node named transaction-node.
+
+Other transaction node folders are named after the transaction node name.
+
+### Node level folder structure
+
+Each node level folder contains a zip file that is encrypted using the encryption key. For details on obtaining the encryption key, see the [Download data](#download-data) section of this article.
+
+| Directory/File | Description |
+|-|--|
+| /config/config.toml | Geth parameters. Command line parameters take precedence |
+| /config/genesis.json | Genesis file |
+| /config/logback-tessera.xml | Logback configuration for Tessera |
+| /config/static-nodes.json | Static nodes. Bootstrap nodes are removed and auto-discovery is disabled. |
+| /config/tessera-config.json | Tessera configuration |
+| /data/c/ | Tessera DB |
+| /data/dd/ | Geth data directory |
+| /env/env | Metadata |
+| /keys/ | Tessera keys |
+| /scripts/ | Startup scripts (provided for reference only) |
+
+## Frequently asked questions
+
+### What does service retirement mean for existing customers?
+
+Existing Azure Blockchain Service deployments cannot continue beyond September 10, 2021. Based on your requirements, start evaluating the alternatives suggested in this article before retirement.
+
+### What happens to existing deployments after the announcement of retirement?
+
+Existing deployments are supported until September 10, 2021. Evaluate the suggested alternatives, migrate your data to the alternative offering, run your workload on it, and then retire your deployment on Azure Blockchain Service.
+
+### How long will the existing deployments be supported on Azure Blockchain Service?
+
+Existing deployments are supported until September 10, 2021.
+
+### Will I be allowed to create new Azure Blockchain members during the retirement phase?
+
+After May 10, 2021, no new member creation or deployment is supported.
certification Program Requirements Azure Certified Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-azure-certified-device.md
Promise of Azure Certified Device certification are:
| **Applies To** | Any device | | **OS** | Agnostic | | **Validation Type** | Automated |
-| **Validation** | Device supports easy input of target DPS ID scope ownership without needing to recompile the embedded code. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
+| **Validation** | Device supports easy input of target DPS ID scope ownership. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests that validate the device supports DPS. **1.** The user must select one of the attestation methods (X.509, TPM, and SAS key). **2.** Depending on the attestation method, the user needs to take the corresponding action, such as: **a)** Upload the X.509 cert to the AICS-managed DPS scope. **b)** Implement the SAS key or endorsement key into the device. |
| **Resources** | [Device provisioning service overview](../iot-dps/about-iot-dps.md) | **[If implemented] Cloud to device: The purpose of test is to make sure messages can be sent from cloud to devices**
Promise of Azure Certified Device certification are:
| **Validation Type** | Automated | | **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device twin property (if implemented) **1.** AICS validates the read/write-able property in device twin JSON **2.** User has to specify the JSON payload to be changed **3.** AICS validates the specified desired properties sent from IoT Hub and ACK message received by the device | | **Resources** | **a)** [Certification steps](./overview.md) (has all the additional resources) **b)** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
+**[Required] Limit Recompile: The purpose of this policy is to ensure that devices, by default, don't require users to recompile code to deploy the device.**
+
+| **Name** | AzureCertified.Policy.LimitRecompile |
+| -- | |
+| **Target Availability** | Policy |
+| **Applies To** | Any device |
+| **OS** | Agnostic |
+| **Validation Type** | Policy |
+| **Validation** | To simplify device configuration for users, we require that all devices can be configured to connect to Azure without the need to recompile and deploy device source code. This includes DPS information, such as the Scope ID, which should be set as a configuration setting rather than compiled in. However, if your device contains certain secure hardware, or if there are extenuating circumstances in which the user will expect to compile and deploy code, contact the certification team to request an exception review. |
+| **Resources** | **a)** [Device provisioning service overview](../iot-dps/about-iot-dps.md) **b)** [Sample config file for DPS ID Scope transfer](https://github.com/Azure/azure-iot-sdk-c/tree/public-preview-pnp/serializer/samples/devicetwin_simplesample) |
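+
+As a purely illustrative sketch (the file name, keys, and format below are hypothetical, not mandated by the policy), a device might read its DPS settings from a configuration file instead of compiled-in constants:
+
+```json
+{
+  "provisioning": {
+    "idScope": "<your DPS ID scope>",
+    "attestation": "x509",
+    "certificatePath": "/etc/device/device-cert.pem"
+  }
+}
+```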
certification Program Requirements Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-pnp.md
IoT Plug and Play Preview enables solution builders to integrate smart devices w
Promise of IoT Plug and Play certification are: 1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)
-2. Secure provisioning and easy transfer of ID scope ownership in Device Provisioning Services
-3. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot-pnp/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central
-4. Validated product truth on certified devices
+1. Easy integration with Azure IoT-based solutions using the [Digital Twin APIs](../iot-pnp/concepts-digital-twin.md): Azure IoT Hub and Azure IoT Central
+1. Validated product truth on certified devices
+1. Meets all requirements of [Azure Certified Device](./program-requirements-azure-certified-device.md)
## Requirements
Promise of IoT Plug and Play certification are:
| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance | | **Resources** | [Certification steps](./overview.md) (has all the additional resources) |
-**[Required] DPS: The purpose of test is to check the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
-
-| **Name** | IoTPnP.DPS |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must implement easy transfer of DPS ID Scope ownership without needing to recompile the embedded code. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
-| **Resources** | **a)** [Device provisioning service overview](../iot-dps/about-iot-dps.md), **b)** [Sample config file for DPS ID Scope transfer](https://github.com/Azure/azure-iot-sdk-c/tree/public-preview-pnp/serializer/samples/devicetwin_simplesample) |
**[Required] DTDL v2: The purpose of this test is to ensure that defined device models and interfaces are compliant with the Digital Twins Definition Language v2.**
cognitive-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-brand-detection.md
In some cases, the brand detector will pick up both the logo image and the styli
## Use the API
-The brand detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"brands"` section.
+The brand detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"brands"` section.
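+
+For illustration, a REST call that requests brand detection might look like the following sketch; the region, subscription key, and image URL are placeholders:
+
+```bash
+# Sketch: request only the Brands visual feature from Analyze Image v3.2.
+curl -X POST "https://<region>.api.cognitive.microsoft.com/vision/v3.2/analyze?visualFeatures=Brands" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url": "https://example.com/image.jpg"}'
+```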
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Categorizing Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-categorizing-images.md
The following table illustrates a typical image set and the category returned by
## Use the API
-The categorization feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"categories"` section.
+The categorization feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"categories"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-describing-images.md
The following JSON response illustrates what Computer Vision returns when descri
## Use the API
-The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"description"` section.
+The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"description"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md
The "adult" classification contains several different categories:
## Use the API
-You can detect adult content with the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-gash;which represent confidence scores between zero and one for each respective category.
+You can detect adult content with the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
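+
+For reference, the relevant portion of the JSON response has roughly this shape (the score values are illustrative only):
+
+```json
+{
+  "adult": {
+    "isAdultContent": false,
+    "isRacyContent": false,
+    "isGoryContent": false,
+    "adultScore": 0.012,
+    "racyScore": 0.034,
+    "goreScore": 0.011
+  }
+}
+```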
- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Color Schemes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-color-schemes.md
The following table shows Computer Vision's black and white evaluation in the sa
## Use the API
-The color scheme detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
+The color scheme detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Domain Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-domain-content.md
There are two ways to use the domain-specific models: by themselves (scoped anal
### Scoped analysis
-You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API.
+You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API.
The following is a sample JSON response returned by the **models/celebrities/analyze** API for the given image:
The following is a sample JSON response returned by the **models/celebrities/ana
### Enhanced categorization analysis
-You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API call.
+You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API call.
In this case, the 86-category taxonomy classifier is called first. If any of the detected categories have a matching domain-specific model, the image is passed through that model as well and the results are added.
Currently, Computer Vision supports the following domain-specific models:
| celebrities | Celebrity recognition, supported for images classified in the `people_` category | | landmarks | Landmark recognition, supported for images classified in the `outdoor_` or `building_` categories |
-Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f20e) API will return this information along with the categories to which each model can apply:
+Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20e) API will return this information along with the categories to which each model can apply:
```json {
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
The next example demonstrates the JSON response returned for an image containing
## Use the API
-The face detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
+The face detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Image Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-image-types.md
# Detecting image types with Computer Vision
-With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API, Computer Vision can analyze the content type of images, indicating whether an image is clip art or a line drawing.
+With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API, Computer Vision can analyze the content type of images, indicating whether an image is clip art or a line drawing.
## Detecting clip art
The following JSON responses illustrates what Computer Vision returns when indic
## Use the API
-The image type detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"imageType"` section.
+The image type detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"imageType"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-object-detection.md
It's important to note the limitations of object detection so you can avoid or m
## Use the API
-The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"objects"` section.
+The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"objects"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-tagging-images.md
The following JSON response illustrates what Computer Vision returns when taggin
## Use the API
-The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"tags"` section.
+The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"tags"` section.
* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/client-library.md?pivots=programming-language-csharp)
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-image-analysis.md
keywords: computer vision, computer vision applications, computer vision service
The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
-You can use Image Analysis through a client library SDK or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v2-g) to get started.
+You can use Image Analysis through a client library SDK or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-g) to get started.
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
## Image Analysis features
-You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-g) to get started.
+You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to get started.
### Tag visual features
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
# What is Custom Speech?
-[Custom Speech](https://aka.ms/customspeech) is a set of UI-based tools that allow you to evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. All it takes to get started is a handful of test audio files. Follow the links in this article to start creating a custom speech-to-text experience.
+[Custom Speech](https://aka.ms/customspeech) is a UI-based tool that allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. All it takes to get started is a handful of test audio files. Follow the links in this article to start creating a custom speech-to-text experience.
## What's in Custom Speech?
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
# What is the Speech service?
-The Speech service is the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Devices SDK](./speech-devices-sdk-quickstart.md?pivots=platform-android), [Speech Studio](https://speech.microsoft.com/), or [REST APIs](#reference-docs).
+The Speech service is the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Devices SDK](./speech-devices-sdk-quickstart.md?pivots=platform-android), [Speech Studio](speech-studio-overview.md), or [REST APIs](#reference-docs).
> [!IMPORTANT] > The Speech service has replaced Bing Speech API and Translator Speech. See the _Migration_ section for migration instructions.
cognitive-services Speech Studio Test Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/speech-studio-test-model.md
- Title: "Test a model using audio files - Speech Studio"-
-description: In this how-to, you use Speech Studio to test recognition of speech in an audio file.
------ Previously updated : 02/12/2021---
-# Test a model using an audio file in Speech Studio
-
-In this how-to, you use Speech Studio to convert speech from an audio file to text. Speech Studio lets you test, compare, improve, and deploy speech recognition models using related text, audio with human-labeled transcripts, and pronunciation guidance you provide.
-
-## Prerequisites
-
-Before you use Speech Studio, [follow these instructions to create an Azure account and subscribe to the Speech service](../custom-speech-overview.md#set-up-your-azure-account). This unified subscription gives you access to speech-to-text, text-to-speech, speech translation, and the Speech Studio.
-
-## Download an audio file
-
-Follow these steps to download an audio file that contains speech and package it into a zip file.
-
-1. Download the **[sample wav file from this link](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-speech-sdk/f9807b1079f3a85f07cbb6d762c6b5449d536027/samples/cpp/windows/console/samples/whatstheweatherlike.wav)** by right-clicking the link and selecting **Save link as**. Click **Save** to download the `whatstheweatherlike.wav` file.
-2. Using a file explorer or terminal window with a zip tool, create a zip file named `whatstheweatherlike.zip` that contains the `whatstheweatherlike.wav` file you downloaded. In Windows, you can open Windows Explorer, navigate to the `Downloads` folder, right-click `whatstheweatherliike.wav`, click **Send to**, click **Compressed (zipped) folder**, and press enter to accept the default filename.
-
-## Create a project in the Speech Studio
-
-Follow these steps to create a project that contains your zip of one audio file.
-
-1. Open [Speech Studio](https://speech.microsoft.com/), and click **New project**. Type a name for this project, and click **Create**. Your project appears in the Custom Speech list.
-2. Click the name of your project. In the Data tab, click **Upload data**.
-3. The speech data type defaults to **Audio only**, so click **Next**.
-4. Name your new speech dataset `MyZipOfAudio`, and click **Next**.
-5. Click **Browse files...**, navigate to your `whatstheweatherlike.zip` file, and click **Open**.
-6. Click the **Upload** button. The browser uploads your zip file to Speech Studio, and Speech Studio processes the contents.
-
-## Test a model
-
-After Speech Studio processes the contents of your zip file, you can play the source audio while examining the transcription to look for errors or omissions. Follow these steps to examine transcription quality in the browser.
-
-1. Click the **Testing** tab, and click **Add test**.
-2. In this test, we are inspecting quality of audio-only data, so click **Next** to accept this test type.
-3. Name this test `MyModelTest`, and click **Next**.
-4. Click the radio button left of `MyZipOfAudio`, and click **Next**.
-5. The **Model 1** dropdown defaults to the latest recognition model, so click **Create**. After processing the contents of your audio dataset, the test status will change to **Succeeded**.
-6. Click **MyModelTest**. The results of speech recognition appear. Click the right-pointing triangle within the circle to hear the audio, and compare what you hear to the text by the circle.
-
-## Download detailed results
-
-You can download files that describe transcriptions in in much greater detail. The files include lexical form of speech in your audio files, and JSON files that contain offset, duration, and transcription confidence details about each word. Follow these steps to see these files.
-
-1. Click **Download**.
-2. On the Download dialog, unselect **Audio**, and click **Download**.
-3. Unzip the downloaded zip file, and examine the extracted files.
-
-## Next steps
-
-Learn about improving the accuracy of speech recognition by [training a custom model](../how-to-custom-speech-test-and-train.md).
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
+
+ Title: "Speech Studio overview - Speech service"
+
+description: Speech Studio is a set of UI-based tools for building and integrating features from Azure Speech service in your applications.
+Last updated : 05/07/2021
+# What is Speech Studio?
+
+[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Speech service in your applications. You create projects in Speech Studio using a no-code approach, and then reference the assets you create in your applications using the [Speech SDK](speech-sdk.md), [Speech CLI](spx-overview.md), or various REST APIs.
+
+## Set up your Azure account
+
+You need to have an Azure account and Speech service subscription before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+
+> [!NOTE]
+> Be sure to create a standard (S0) subscription. Free (F0) subscriptions aren't supported.
+
+After you create an Azure account and a Speech service subscription:
+
+1. Sign in to the [Speech Studio](https://speech.microsoft.com).
+1. Select the subscription you need to work in and create a speech project.
+1. If you want to modify your subscription, select the cog button in the top menu.
+
+## Speech Studio features
+
+The following Speech service features are available as project types in Speech Studio.
+
+* **Real-time speech-to-text**: Quickly test speech-to-text by dragging and dropping audio files, without using any code. This is a demo tool for seeing how speech-to-text works on your audio samples; see the [speech-to-text overview](speech-to-text.md) to explore the full functionality that's available.
+* **Custom Speech**: Custom Speech allows you to create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to using a base speech recognition model, Custom Speech models become part of your unique competitive advantage because they are not publicly accessible. See the [quickstart](how-to-custom-speech-test-and-train.md) to get started with uploading sample audio to create a Custom Speech model.
+* **Pronunciation Assessment**: Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly with no code, but see the [how-to](how-to-pronunciation-assessment.md) article for using the feature with the Speech SDK in your applications.
+* **Custom Voice**: Custom Voice allows you to create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. See the [how-to](how-to-custom-voice-create-voice.md) article on creating and using custom voices via endpoints. Note that Custom Voice can only be used from the [REST API](rest-text-to-speech.md).
+* **Audio Content Creation**: [Audio Content Creation](how-to-audio-content-creation.md) is an easy-to-use tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. Speech Studio allows you to export your created audio files to use in your applications.
+* **Custom Keyword**: A Custom Keyword is a word or short phrase that allows your product to be voice-activated. You create a Custom Keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+* **Custom Commands**: Custom Commands makes it easy to build rich voice-commanding apps optimized for voice-first interaction experiences. It provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively low complexity, helping you focus on building the best solution for your voice-commanding scenarios. See the [how-to](how-to-develop-custom-commands-application.md) guide for building Custom Commands applications, and also see the guide for [integrating your Custom Commands application with the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+
+## Next steps
+
+[Explore Speech Studio](https://speech.microsoft.com) and create a project.
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
Title: Azure Communication Services - Web calling sample description: Learn about the Communication Services web calling sample
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odbc.md
description: Learn how to copy data from and to ODBC data stores by using a copy
Previously updated : 04/22/2020 Last updated : 05/10/2021 # Copy data from and to ODBC data stores using Azure Data Factory
The following properties are supported for ODBC linked service:
| Property | Description | Required | |: |: |: | | type | The type property must be set to: **Odbc** | Yes |
-| connectionString | The connection string excluding the credential portion. You can specify the connection string with pattern like `"Driver={SQL Server};Server=Server.database.windows.net; Database=TestDatabase;"`, or use the system DSN (Data Source Name) you set up on the Integration Runtime machine with `"DSN=<name of the DSN on IR machine>;"` (you need still specify the credential portion in linked service accordingly).<br>You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) with more details.| Yes |
+| connectionString | The connection string excluding the credential portion. You can specify the connection string with a pattern like `Driver={SQL Server};Server=Server.database.windows.net; Database=TestDatabase;`, or use the system DSN (Data Source Name) you set up on the Integration Runtime machine with `DSN=<name of the DSN on IR machine>;` (you still need to specify the credential portion in the linked service accordingly).<br>You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. For more information, see [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md).| Yes |
| authenticationType | Type of authentication used to connect to the ODBC data store.<br/>Allowed values are: **Basic** and **Anonymous**. | Yes | | userName | Specify user name if you are using Basic authentication. | No | | password | Specify password for the user account you specified for the userName. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
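
Putting these properties together, a linked service definition that uses a system DSN might look like the following sketch; the names and placeholders are illustrative:

```json
{
    "name": "ODBCLinkedService",
    "properties": {
        "type": "Odbc",
        "typeProperties": {
            "connectionString": "DSN=<name of the DSN on IR machine>;",
            "authenticationType": "Basic",
            "userName": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```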
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-parse.md
Last updated 02/08/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Use the Parse transformation to parse columns in your data that are in document form. The current supported types of embedded documents that can be parsed are JSON and delimited text.
+Use the Parse transformation to parse columns in your data that are in document form. The currently supported types of embedded documents that can be parsed are JSON, XML, and delimited text.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWykdO]
In the parse transformation configuration panel, you will first pick the type of
### Column
-Similar to derived columns and aggregates, this is where you will either modify an exiting column by selecting it from the drop-down picker. Or you can type in the name of a new column here. ADF will store the parsed source data in this column.
+Similar to derived columns and aggregates, this is where you either modify an existing column by selecting it from the drop-down picker, or type in the name of a new column. ADF will store the parsed source data in this column. In most cases, you will want to define a new column that parses the incoming embedded document field.
### Expression
ParseCsv select(mapColumn(
``` parse(json = jsonString ? (trade as boolean, customers as string[]),
- format: 'json',
+ format: 'json|XML|delimited',
documentForm: 'singleDocument') ~> ParseJson parse(csv = csvString ? (id as integer,
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
To view and reuse some samples of standard custom setups, complete the following
* A *MYSQL ODBC* folder, which contains a custom setup script (*main.cmd*) to install the MySQL ODBC drivers on each node of your Azure-SSIS IR. This setup lets you use the ODBC connectors (Connection Manager, Source, and Destination) to connect to the MySQL server. First, [download the latest 64-bit and 32-bit versions of the MySQL ODBC driver installers](https://dev.mysql.com/downloads/connector/odbc/) (for example, *mysql-connector-odbc-8.0.13-winx64.msi* and *mysql-connector-odbc-8.0.13-win32.msi*), and then upload them all together with *main.cmd* to your blob container.
+
+ If a Data Source Name (DSN) is used in the connection, DSN configuration is needed in the setup script. For example: `C:\Windows\SysWOW64\odbcconf.exe /A {CONFIGSYSDSN "MySQL ODBC 8.0 Unicode Driver" "DSN=<dsnname>|PORT=3306|SERVER=<servername>"}`
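+
+  A minimal *main.cmd* sketch for this scenario might look like the following; the installer file names match the example downloads above, and the DSN values are placeholders:
+
+  ```cmd
+  REM Install the 64-bit and 32-bit MySQL ODBC drivers silently.
+  msiexec /i mysql-connector-odbc-8.0.13-winx64.msi /qn
+  msiexec /i mysql-connector-odbc-8.0.13-win32.msi /qn
+  REM Optionally register a system DSN for packages that connect through a DSN.
+  C:\Windows\SysWOW64\odbcconf.exe /A {CONFIGSYSDSN "MySQL ODBC 8.0 Unicode Driver" "DSN=<dsnname>|PORT=3306|SERVER=<servername>"}
+  ```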
* An *ORACLE ENTERPRISE* folder, which contains a custom setup script (*main.cmd*) and silent installation config file (*client.rsp*) to install the Oracle connectors and OCI driver on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the Oracle Connection Manager, Source, and Destination to connect to the Oracle server.
databox-online Azure Stack Edge Gpu Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-overview.md
Azure Stack Edge Pro has the following capabilities:
|Data refresh | Ability to refresh local files with the latest from cloud.| |Encryption | BitLocker support to locally encrypt data and secure data transfer to cloud over *https*.| |Bandwidth throttling| Throttle to limit bandwidth usage during peak hours.|+ <!--|ExpressRoute | Added security through ExpressRoute. Use peering configuration where traffic from local devices to the cloud storage endpoints travels over the ExpressRoute. For more information, see [ExpressRoute overview](../expressroute/expressroute-introduction.md).|--> ## Components
Azure Stack Edge service is a non-regional service. For more information, see [R
- Review the [Azure Stack Edge Pro system requirements](azure-stack-edge-gpu-system-requirements.md). - Understand the [Azure Stack Edge Pro limits](azure-stack-edge-limits.md).-- Deploy [Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md) in Azure portal.
+- Deploy [Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-prep.md) in Azure portal.
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-picked-up.md
Previously updated : 05/04/2021 Last updated : 05/06/2021 ms.localizationpriority: high
If you're using Data Box in US Government, Japan, Singapore, Korea, India, South
``` > [!NOTE]
- > Required information for return may vary by region.
+ > - Required information for return may vary by region.
+ > - If you're returning a Data Box in Brazil, see [Use self-managed shipping for Azure Data Box](data-box-portal-customer-managed-shipping.md) for detailed instructions.
::: zone target="chromeless"
databox Data Box Disk Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-deploy-picked-up.md
Previously updated : 05/04/2021 Last updated : 05/07/2021 ms.localizationpriority: high
Take the following steps if returning the device in China.
| Contact information | Details | |||
-|Name: | Bao Ying|
+|Name: | `Bao Ying`|
|Designation | Senior OneCall Representative | |Phone: | 400.889.6066 ext. 3693 | |E-mail: | [ying.bao@fedex.com](mailto:ying.bao@fedex.com) |
Take the following steps if returning the device in China.
| Contact information | Details | |||
-|Name: | He Xun|
+|Name: | `He Xun`|
|Designation | OneCall Representative | |Phone: | 400.889.6066 ext. 3603 | |E-mail: | [739951@fedex.com](mailto:739951@fedex.com) |
If you are using Data Box Disk in US Government, Japan, Singapore, Korea, United
``` > [!NOTE]
- > Required information for return may vary by region.
+ > - Required information for return may vary by region.
+ > - If you're returning a Data Box Disk in Brazil, see [Use self-managed shipping for Azure Data Box Disk](data-box-disk-portal-customer-managed-shipping.md) for detailed instructions.
3. Azure Data Box Operations team will work with you to arrange the drop-off to the Azure datacenter.
databox Data Box Disk Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-portal-customer-managed-shipping.md
Previously updated : 05/04/2021 Last updated : 05/07/2021
When you place a Data Box Disk order, you can choose self-managed shipping optio
![Schedule pickup](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-user-pickup-02c.png)
- > [!NOTE]
- > Required information in the email may vary by region.
+ **Instructions for Brazil:** If you're scheduling a device pickup in Brazil, include the following information in your email. The datacenter will schedule the pickup after they receive an inbound `Nota Fiscal`, which can take up to 4 business days.
+
+ ```xml
+ Subject: Request Azure Data Box Disk pickup for order: <ordername>
+
+ - Order name
+ - Company name
+ - Company legal name (if different)
+ - Tax ID
+ - Address
+ - Country
+ - Phone number
+ - Contact name of the person who will pick up the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+ ```
6. After you've scheduled your device pickup, you can view your authorization code in **Schedule pickup for Azure**.
When you place a Data Box Disk order, you can choose self-managed shipping optio
* Government-approved photo ID. The ID will be validated at the datacenter, and the name and details of the person picking up the device must be provided when the pickup is scheduled.
+ > [!NOTE]
+ > If a scheduled appointment is missed, you'll need to schedule a new appointment.
+ 8. Your order automatically moves to the **Picked up** state after the device is picked up from the datacenter. ![Picked up](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-ready-disk-01b.png)
When you place a Data Box Disk order, you can choose self-managed shipping optio
> [!NOTE] > Do not share the authorization code over email. This is only to be verified at the datacenter during drop-off.
+ **Instructions for Brazil:** To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
+
+ ```xml
+ Subject: Request Azure Data Box Disk dropoff for order: <ordername>
+
+ - Order name
+ - Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+ - Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at dropoff.)
+ ```
+ 10. After you receive an appointment for drop-off, the order should be in the **Ready to receive at Azure datacenter** state in the Azure portal. ![Screenshot of the Add Shipping Address dialog box with the Ship using options out and the Add shipping address option called out.](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-authcode-dropoff-02b.png)
databox Data Box Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-portal-customer-managed-shipping.md
Previously updated : 05/04/2021 Last updated : 05/07/2021
When you place a Data Box order, you can choose the self-managed shipping option
![Schedule pickup for Azure instructions](media\data-box-portal-customer-managed-shipping\data-box-portal-schedule-pickup-email-01.png)
- > [!NOTE]
- > Required information in the email may vary by region.<!--How can they get this information?-->
+ **Instructions for Brazil:** If you're scheduling a device pickup in Brazil, include the following information in your email. The datacenter will schedule the pickup after they receive an inbound `Nota Fiscal`, which can take up to 4 business days.
+
+ ```xml
+ Subject: Request Azure Data Box pickup for order: <ordername>
+
+ - Order name
+ - Company name
+ - Company legal name (if different)
+ - Tax ID
+ - Address
+ - Country
+ - Phone number
+ - Contact name of the person who will pick up the Data Box (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+ ```
6. After you schedule your device pickup, you can view your device authorization code in the **Schedule pickup for Azure** pane.
When you place a Data Box order, you can choose the self-managed shipping option
* Government-approved photo ID. The ID will be validated at the datacenter, and the name and details of the person picking up the device must be provided when the pickup is scheduled.
+ > [!NOTE]
+ > If a scheduled appointment is missed, you'll need to schedule a new appointment.
+ 8. Your order automatically moves to the **Picked up** state once the device has been picked up from the datacenter. ![An order in Picked up state](media\data-box-portal-customer-managed-shipping\data-box-portal-picked-up-boxed-01.png)
When you place a Data Box order, you can choose the self-managed shipping option
> [!NOTE] > Do not share the authorization code over email. This is only to be verified at the datacenter during drop off.
+ **Instructions for Brazil:** To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
+
+ ```xml
+ Subject: Request Azure Data Box dropoff for order: <ordername>
+
+ - Order name
+ - Contact name of the person who will drop off the Data Box (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+ - Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at dropoff.)
+ ```
+ 10. If you've received an appointment for drop-off, the order should have **Ready to receive at Azure datacenter** status in the Azure portal. Follow the instructions under **Schedule drop-off** to return the device. ![Instructions for device drop-off](media\data-box-portal-customer-managed-shipping\data-box-portal-received-complete-02b.png)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/getting-started.md
Title: 'Quickstart: Getting started' description: In this quickstart, learn how to get started with understanding the basic workflow for Defender for IoT deployment. Previously updated : 04/17/2021 Last updated : 05/10/2021 # Quickstart: Get started with Defender for IoT
defender-for-iot Quickstart Building The Defender Micro Agent From Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-building-the-defender-micro-agent-from-source.md
Title: 'Quickstart: Build the Defender micro agent from source code (Preview)' description: In this quickstart, learn about the Micro Agent which includes an infrastructure that can be used to customize your distribution. Previously updated : 1/18/2021 Last updated : 05/10/2021
If you require a different configuration for production scenarios, contact the D
## Next steps
-[Configure your Azure Defender for IoT solution](quickstart-configure-your-solution.md).
+> [!div class="nextstepaction"]
+> [Configure your Azure Defender for IoT solution](quickstart-configure-your-solution.md).
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
Title: 'Quickstart: Create a Defender IoT micro agent module twin (Preview)' description: In this quickstart, learn how to create individual DefenderIotMicroAgent module twins for new devices. Previously updated : 1/20/2021 Last updated : 05/10/2021
You can create individualΓÇ»**DefenderIotMicroAgent** module twins for new devic
## Prerequisites -- None
+None
## Device twins
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation. * Create an Azure Active Directory Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure Database Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create an Azure Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md).
+ > [!NOTE]
+ > The Application ID used by the Azure Database Migration Service supports secret (password-based) authentication for service principals. It does not support certificate-based authentication.
+ > [!NOTE] > Azure Database Migration Service requires the Contributor permission on the subscription for the specified Application ID. Alternatively, you can create custom roles that grant the specific permissions that Azure Database Migration Service requires. For step-by-step guidance about using custom roles, see the article [Custom roles for SQL Server to SQL Managed Instance online migrations](./resource-custom-roles-sql-db-managed-instance.md).
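
For illustration, one way to create such an application with a client secret and Contributor rights is the following Azure CLI sketch; the application name is a placeholder:

```bash
# Sketch: create a service principal with a password (client secret) and
# grant it Contributor on the subscription used for the migration.
az ad sp create-for-rbac \
  --name "dms-migration-app" \
  --role "Contributor" \
  --scopes "/subscriptions/<subscription-id>"
```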
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric |
-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, SES, Sohonet, Telehouse - KDDI |
+| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, SES, Sohonet, Telehouse - KDDI |
| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix, Megaport, Neutrona Networks, NTT, Zayo | | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion, Megaport |
| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
The following table shows connectivity locations and the service providers for e
| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | 10G, 100G | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | 10G, 100G | Megaport, NextDC | | **Taipei** | Chief Telecom | 2 | n/a | 10G | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | 10G, 100G | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Verizon |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | 10G, 100G | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, Megaport, Tokai Communications |
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | 10G, 100G | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach, Megaport, Telus, Verizon, Zayo |
| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | 10G, 100G | |
Azure national clouds are isolated from each other and from global commercial Azure.
| -- | -- | -- | -- | -- |
| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | n/a | 10G, 100G | Equinix |
| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | 10G, 100G | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | 10G, 100G | Equinix, Megaport, Verizon |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | 10G, 100G | Equinix, Internet2, Megaport, Verizon |
| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | 10G, 100G | Equinix, CenturyLink Cloud Connect, Verizon |
| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | 10G, 100G | CenturyLink Cloud Connect, Megaport |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported |Sao Paulo |
| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 |
-| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2 |
+| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2, London2 |
| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo |
| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported |Cape Town, Johannesburg |
| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City |
The following table shows locations by service provider. If you want to view ava
| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai |
| **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported |Miami |
| **CDC** | Supported | Supported | Canberra, Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London2, New York, Paris, San Antonio, Silicon Valley, Tokyo, Toronto, Washington DC, Washington DC2 |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, New York, Paris, San Antonio, Silicon Valley, Tokyo, Toronto, Washington DC, Washington DC2 |
| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported |Hong Kong, Taipei |
| **China Mobile International** |Supported |Supported |Hong Kong, Hong Kong2, Singapore |
| **China Telecom Global** |Supported |Supported |Hong Kong, Hong Kong2 |
The following table shows locations by service provider. If you want to view ava
| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported | Supported | Chennai, Mumbai |
| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
| **Intelsat** | Supported | Supported | Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Silicon Valley, Singapore, Washington DC, Zurich |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported |Amsterdam, Chicago, Frankfurt, Hong Kong, London, New York, Paris, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Chicago, Dallas, Silicon Valley, Washington DC |
| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** |Supported |Supported |Osaka, Tokyo |
| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** |Supported |Supported |Cape Town, Johannesburg, London |
The following table shows locations by service provider. If you want to view ava
| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported |Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
| **LG CNS** |Supported |Supported |Busan, Seoul |
| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported |Cape Town, Johannesburg |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Melbourne, Miami, Minneapolis, Montreal, New York, Osaka, Oslo, Paris, Perth, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chennai, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, New York, Osaka, Oslo, Paris, Perth, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** |Supported |Supported |Bangkok |
| **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
Azure national clouds are isolated from each other and from global commercial Azure.
| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Chicago, Phoenix, Silicon Valley, Washington DC |
| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |New York, Phoenix, San Antonio, Washington DC |
| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Atlanta, Chicago, Dallas, New York, Seattle, Silicon Valley, Washington DC |
+| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** |Supported |Supported |Dallas |
| **[Level 3 Communications](http://your.level3.com/LP=882?WT.tsrc=02192014LP882AzureVanityAzureText)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Supported | Chicago, Dallas, San Antonio, Seattle, Washington DC |
| **[Verizon](http://news.verizonenterprise.com/2014/04/secure-cloud-interconnect-solutions-enterprise/)** |Supported |Supported |Chicago, Dallas, New York, Silicon Valley, Washington DC |
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
When you enable Private Link to your origin in Azure Front Door Premium configur
:::image type="content" source="../media/concept-private-link/enable-private-endpoint.png" alt-text="Enable Private Endpoint":::
> [!NOTE]
-> Once you enable a Private Link origin and approve the private endpoint conenction, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
+> Once you enable a Private Link origin and approve the private endpoint connection, it takes a few minutes for the connection to be established. During this time, requests to the origin will receive a Front Door error message. The error message will go away once the connection is established.
## Limitations
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
You choose the application template when you create your application. You can't
## Custom templates
-If you want to create your application from scratch, choose one of the **Custom application** templates.
+If you want to create your application from scratch, choose the **Custom application** template. The custom application template ID is `iotc-pnp-preview`.
## Industry focused templates
-Azure IoT Central is an industry agnostic application platform. Application templates are industry focused examples available for these industries today, with more to come in the future:
--- [Retail](../retail/overview-iot-central-retail.md)
- - Connected logistics
- - Digital distribution center
- - In-store analytics - condition monitoring
- - In-store analytics - checkout
- - Smart Inventory Management
- - Video analytics - object and motion detection
-- [Energy](../energy/overview-iot-central-energy.md)
- - Smart meter monitoring
- - Solar panel monitoring
-- [Government](../government/overview-iot-central-government.md)
- - Connected waste management
- - Water consumption monitoring
- - Water quality monitoring
-- [Healthcare](../healthcare/overview-iot-central-healthcare.md).
- - Continuous patient monitoring
+Azure IoT Central is an industry-agnostic application platform. Application templates are industry-focused examples available for these industries today:
+ ## Next steps
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
This article uses virtual machines to host the downstream device and gateway. In
## Prerequisites
-To complete the steps in this tutorial, you need an active Azure subscription.
+To complete the steps in this article, you need:
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-Complete the [Create an Azure IoT Central application](./quick-deploy-iot-central.md) quickstart to create an IoT Central application using the **Custom app > Custom application** template.
+- An [IoT Central application created](howto-create-iot-central-application.md) from the **Custom application** template. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
To follow the steps in this article, download the following files to your computer:
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-build-iotc-device-bridge.md
The device bridge solution provisions several Azure resources into your Azure subscription.
## Prerequisites
-To complete the steps in this how-to guide, you need an active Azure subscription.
+To complete the steps in this how-to guide, you need:
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-Complete the [Create an Azure IoT Central application](./quick-deploy-iot-central.md) quickstart to create an IoT Central application using the **Custom app > Custom application** template.
## Overview
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules-advanced.md
The Azure IoT Central V3 connector for Power Automate and Azure Logic Apps lets
## Prerequisites
-To complete the steps in this how-to guide, you need an active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+To complete the steps in this how-to guide, you need:
-Setting up the solution requires a version 3 IoT Central application. To learn how to check your application version, see [About your application](./howto-get-app-info.md). To learn how to create an IoT Central application, see [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
> [!NOTE]
> If you're using a version 2 IoT Central application, see [Build workflows with the IoT Central connector in Azure Logic Apps](/previous-versions/azure/iot-central/core/howto-build-azure-logic-apps) on the previous versions documentation site and use the Azure IoT Central V2 connector.
iot-central Howto Connect Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-powerbi.md
This solution sets up a pipeline that reads data from your [legacy data export](
## Prerequisites
-To complete the steps in this how-to guide, you need an active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+To complete the steps in this how-to guide, you need:
-Setting up the solution requires the following resources:
-- A version 3 IoT Central application. To learn how to check your application version, see [About your application](./howto-get-app-info.md). To learn how to create an IoT Central application, see [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
- Legacy continuous data export which is configured to export telemetry, devices, and device templates to Azure Blob storage. To learn more, see [legacy data export documentation](howto-export-data-legacy.md).
- Make sure that only your IoT Central application is exporting data to the blob container.
- Your [devices must send JSON encoded messages](../../iot-hub/iot-hub-devguide-messages-d2c.md). Devices must specify `contentType:application/JSON` and `contentEncoding:utf-8` or `contentEncoding:utf-16` or `contentEncoding:utf-32` in the message system properties.
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-rigado-cascade-500.md
Cascade 500 IoT gateway is a hardware offering from Rigado that is included as p
Cascade 500 is certified for Azure IoT Plug and Play and allows you to easily onboard the device into your end to end solutions. The Cascade gateway allows you to wirelessly connect to a variety of condition monitoring sensors that are in proximity to the gateway device. These sensors can be onboarded into IoT Central via the gateway device. ## Prerequisites
-To step through this how-to guide, you need the following resources:
-* A Rigado Cascade 500 device. For more information, please visit [Rigado](https://www.rigado.com/).
-* An Azure IoT Central application. For more information, see the [create a new application](./quick-deploy-iot-central.md).
+To complete the steps in this how-to guide, you need:
++
+- A Rigado Cascade 500 device. For more information, please visit [Rigado](https://www.rigado.com/).
## Add a device template
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-ruuvi.md
Please follow the [instructions here](./howto-connect-rigado-cascade-500.md) if
To connect RuuviTag sensors, you need the following resources:
-* A RuuviTag sensor. For more information, please visit [RuuviTag](https://ruuvi.com/).
-* A Rigado Cascade 500 device or another BLE gateway. For more information, please visit [Rigado](https://www.rigado.com/).
-* An Azure IoT Central application. For more information, see the [create a new application](./quick-deploy-iot-central.md).
+
+- A RuuviTag sensor. For more information, please visit [RuuviTag](https://ruuvi.com/).
+
+- A Rigado Cascade 500 device or another BLE gateway. For more information, please visit [Rigado](https://www.rigado.com/).
+ ## Add a RuuviTag device template
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-iot-central-application.md
+
+ Title: Create an IoT Central application | Microsoft Docs
+description: This article describes the options to create an IoT Central application including from the Azure IoT Central site, the Azure portal, and from a command-line environment.
++++ Last updated : 05/11/2021+++
+# Create an IoT Central application
+
+You have several ways to create an IoT Central application. You can use one of the GUI-based methods if you prefer a manual approach, or one of the CLI or programmatic methods if you want to automate the process.
+
+Whichever approach you choose, the configuration options are the same, and the process typically takes less than a minute to complete.
++
+## Options
+
+This section describes the available options when you create an IoT Central application. Depending on the method you choose, you might need to supply the options on a form or as command-line parameters:
+
+### Pricing plans
+
+The *free* plan lets you create an IoT Central application to try for seven days. The free plan:
+
+- Doesn't require an Azure subscription.
+- Can only be created and managed on the [Azure IoT Central](https://aka.ms/iotcentral) site.
+- Lets you connect up to five devices.
+- Can be upgraded to a standard plan if you want to keep your application.
+
+The *standard* plans:
+
+- Do require an Azure subscription. You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).
+- Let you create and manage IoT Central applications using any of the available methods.
+- Let you connect as many devices as you need. You're billed by device. To learn more, see [Azure IoT Central pricing](/pricing/details/iot-central/).
+- Cannot be downgraded to a free plan, but can be upgraded or downgraded to other standard plans.
+
+The following table summarizes the differences between the three standard plans:
+
+| Plan name | Free devices | Messages/month | Use case |
+| -- | -- | -- | -- |
+| S0 | 2 | 400 | A few messages per day |
+| S1 | 2 | 5,000 | A few messages per hour |
+| S2 | 2 | 30,000 | Messages every few minutes |
+
+To learn more, see [Manage your bill in an IoT Central application](howto-view-bill.md).
+
+### Application name
+
+The _application name_ you choose appears in the title bar on every page in your IoT Central application. It also appears on your application's tile on the **My apps** page on the [Azure IoT Central](https://aka.ms/iotcentral) site.
+
+The _subdomain_ you choose uniquely identifies your application. The subdomain is part of the URL you use to access the application. The URL for an IoT Central application looks like `https://yoursubdomain.azureiotcentral.com`.
+
+### Application template ID
+
+The application template you choose determines the initial contents of your application, such as dashboards and device templates. For a custom application, use `iotc-pnp-preview` as the template ID.
+
+To learn more about custom and industry-focused application templates, see [What are application templates?](concepts-app-templates.md).
+
+### Billing information
+
+If you choose one of the standard plans, you need to provide billing information:
+
+- The Azure subscription you're using.
+- The directory that contains the subscription you're using.
+- The location to host your application. IoT Central uses Azure geographies as locations: United States, Europe, Asia Pacific, Australia, United Kingdom, or Japan.
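+When you use a command-line method, these options map to parameters of the `az iot central app create` command. The following is a minimal sketch; the resource group, application name, and subdomain are placeholder values:
+
+```azurecli-interactive
+# Create an IoT Central application on the S1 standard plan (SKU ST1)
+# from the custom application template.
+az iot central app create \
+  --resource-group MyResourceGroup \
+  --name my-iotc-app \
+  --subdomain my-iotc-app \
+  --sku ST1 \
+  --template iotc-pnp-preview \
+  --display-name 'My IoT Central app'
+```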
+
+## Azure IoT Central site
+
+The easiest way to get started creating IoT Central applications is on the [Azure IoT Central](https://aka.ms/iotcentral) site.
+
+The [Build](https://apps.azureiotcentral.com/build) page lets you select the application template you want to use:
++
+If you select **Create app**, you can provide the necessary information to create an application from the template:
++
+The **My apps** page lists all the IoT Central applications you have access to. The list includes applications you created and applications that you've been granted access to.
+
+> [!TIP]
+> All the applications you create using a standard pricing plan on the Azure IoT Central site use the **IOTC** resource group in your subscription. The approaches described in the following section let you choose a resource group to use.
+
+## Other approaches
+
+You can also use the following approaches to create an IoT Central application:
+
+- [Create an IoT Central application from the Azure portal](howto-manage-iot-central-from-portal.md#create-iot-central-applications)
+- [Create an IoT Central application using the Azure CLI](howto-manage-iot-central-from-cli.md#create-an-application)
+- [Create an IoT Central application using PowerShell](howto-manage-iot-central-from-powershell.md#create-an-application)
+- [Create an IoT Central application programmatically](howto-manage-iot-central-programmatically.md)
+
+## Next steps
+
+Now that you've learned how to create an Azure IoT Central application, here's the suggested next step:
+
+> [!div class="nextstepaction"]
+> [Administer your application](howto-administer.md)
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-cli.md
These commands first create a resource group in the east US region for the application.
| template | The application template to use. For more information, see the following table. |
| display-name | The name of the application as displayed in the UI. |
+### Application templates
+ [!INCLUDE [iot-central-template-list](../../../includes/iot-central-template-list.md)]
+If you've created your own application template, you can use it to create a new application. When asked for an application template, enter the app ID shown in your exported app's shareable link, which you can find in the [Application template export](howto-use-app-templates.md#create-an-application-template) section of your app.
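+For example, a minimal sketch that passes a custom template ID (all values here are placeholders):
+
+```azurecli-interactive
+# Create a new application from your own exported application template.
+az iot central app create \
+  --resource-group MyResourceGroup \
+  --name my-new-app \
+  --subdomain my-new-app \
+  --sku ST1 \
+  --template <your-app-template-id>
+```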
+ ## View your applications
Use the [az iot central app list](/cli/azure/iot/central/app#az_iot_central_app_list) command to list your IoT Central applications and view metadata.
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-portal.md
To create an application, navigate to the [IoT Central Application](https://ms.p
Once you choose a location, you can't later move your application to a different location.
-After filling out all fields, select **Create**. To learn more, see the [Create an IoT Central application](quick-deploy-iot-central.md) quickstart.
+After filling out all fields, select **Create**. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
## Manage existing IoT Central applications
iot-central Howto Manage Iot Central From Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-powershell.md
The script first creates a resource group in the east US region for the application.
|Template | The application template to use. For more information, see the following table. |
|DisplayName |The name of the application as displayed in the UI. |
+### Application templates
+ [!INCLUDE [iot-central-template-list](../../../includes/iot-central-template-list.md)]
+If you've created your own application template, you can use it to create a new application. When asked for an application template, enter the app ID shown in your exported app's shareable link, which you can find in the [Application template export](howto-use-app-templates.md#create-an-application-template) section of your app.
+ ## View your IoT Central applications
Use the [Get-AzIotCentralApp](/powershell/module/az.iotcentral/Get-AzIotCentralApp) cmdlet to list your IoT Central applications and view metadata.
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
Applications that use the free trial plan don't have an associated Azure subscription.
## View metrics in the Azure portal
-The following steps assume you have an [IoT Central application](./quick-deploy-iot-central.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
+The following steps assume you have an [IoT Central application](./howto-create-iot-central-application.md) with some [connected devices](./tutorial-connect-device.md) or a running [data export](howto-export-data.md).
To view IoT Central metrics in the portal:
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
The following table shows three example transformation types:
## Prerequisites
-To complete the steps in this article, you need an active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+To complete the steps in this how-to guide, you need:
-To set up the solution, you need an IoT Central application. To learn how to create an IoT Central application, see [Create an Azure IoT Central application](quick-deploy-iot-central.md).
## Data transformation at ingress
iot-central Howto Use App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-app-templates.md
You have two options:
You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for. You can't create an application that uses the free pricing plan by copying an application.
-Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see the [Create an application](quick-deploy-iot-central.md) quickstart.
+Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md).
![Screenshot that shows the "Copy Application" settings page.](media/howto-use-app-templates/appcopy2.png)
iot-central Tutorial Add Edge As Leaf Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-add-edge-as-leaf-device.md
In this tutorial, you learn how to:
## Prerequisites
-Complete the [Create an Azure IoT Central application](./quick-deploy-iot-central.md) quickstart to create an IoT Central application using the **Custom app > Custom application** template.
+To complete the steps in this tutorial, you need:
-To complete the steps in this tutorial, you need an active Azure subscription.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Download the IoT Edge manifest file from GitHub. Right-click on the following link and then select **Save link as**: [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/iotedge/EnvironmentalSensorManifest.json)
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
As well as enabling downstream devices to communicate with your IoT Central appl
## Prerequisites
-To complete this tutorial, you need to [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
+To complete the steps in this tutorial, you need:
+ ## Create downstream device templates
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
This template includes a sample connected waste bin device template, a simulated
* **URL**. Optionally, you can choose your desired URL. You can change the URL later.
* **Pricing plan**. If you have an Azure subscription, enter your directory, Azure subscription, and region in the appropriate fields of the **Billing info** dialog box. If you don't have a subscription, select **Free** to enable a 7-day trial subscription, and complete the required contact information.
- For more information about directories and subscriptions, see [Quickstart - Create an Azure IoT Central application](../core/quick-deploy-iot-central.md).
- 1. At the bottom of the page, select **Create**. ![Screenshot of Azure IoT Central Create New application dialog box.](./media/tutorial-connectedwastemanagement/new-application-connectedwastemanagement.png)
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-consumption-monitoring.md
This template includes a sample water consumption device template, a simulated d
* **URL**: Azure IoT Central autogenerates a URL based on the application name. You can choose to update the URL to your liking. You can change the URL later, too.
* If you have an Azure subscription, enter your **Directory**, **Azure subscription**, and **Location** information. If you don't have a subscription, you can select the **7-day free trial** option and complete the required contact information.
- For more information about directories and subscriptions, see [Create an application quickstart](../core/quick-deploy-iot-central.md).
- 1. Select **Create** at the bottom of the page. ![Azure IoT Central New application page](./media/tutorial-waterconsumptionmonitoring/new-application-waterconsumptionmonitoring.png)
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
In this section, you use the Azure IoT Central **Water quality monitoring** temp
* **URL**: You can enter any URL you want or change the URL value later.
* If you have an Azure subscription, enter values for **Directory**, **Azure subscription**, and **Location**. If you don't have a subscription, you can turn on **7-day free trial** and complete the required contact information.
- For more information about directories and subscriptions, see the [Create an application](../core/quick-deploy-iot-central.md) quickstart.
- 1. Select the **Create** button on the lower-left part of the page. ![The Azure IoT Central new-application page](./media/tutorial-waterqualitymonitoring/new-application-waterqualitymonitoring1.png)
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
To create a new in-store analytics checkout application:
1. If you have an Azure subscription, enter your *Directory, Azure subscription, and Region*. If you don't have a subscription, you can enable **7-day free trial** and complete the required contact information.
- For more information about directories and subscriptions, see the [create an application quickstart](../core/quick-deploy-iot-central.md).
- 1. Select **Create**. ![Azure IoT Central Create Application page](./media/tutorial-in-store-analytics-create-app/preview-application-template.png)
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
To create a new micro-fulfillment center application that uses preview features:
1. If you have an Azure subscription, enter your directory, Azure subscription, and region. If you don't have a subscription, you can enable 7-day free trial, and complete the required contact information.
- For more information about directories and subscriptions, see the [Create an application](../core/quick-deploy-iot-central.md) quickstart.
- 1. Select **Create**. ![Screenshot of Azure IoT Central New application page](./media/tutorial-micro-fulfillment-center-app/iotc-retail-create-app-mfc.png)
iot-develop Quickstart Send Telemetry Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-central.md
In this quickstart, you learn a basic Azure IoT application development workflow.
## View telemetry
After the simulated device connects to IoT Central, it begins sending telemetry. You can view the telemetry and other details about connected devices in IoT Central.
-In IoT Central, select **Devices**, click your device name, then select the **Raw data** tab. This view displays the raw telemetry from the simulated device.
+In IoT Central, select **Devices**, click your device name, then select the **Overview** tab. This view displays a graph of the temperatures from the two thermostat devices.
+
+Select the **Raw data** tab. This view displays the telemetry each time a thermostat reading is sent.
+ Your device is now securely connected and sending telemetry to Azure IoT.
iot-hub About Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/about-iot-hub.md
Title: Introduction to Azure IoT Hub | Microsoft Docs
-description: Learn about Azure IoT Hub. This IoT service is built for scalable data ingestion, device management, and security.
-- Previously updated : 08/08/2019
+ Title: What is Azure IoT Hub | Microsoft Docs
+description: This article explains the uses for Azure IoT Hub. IoT Hub lets you ingest device data in a scalable manner and manage your devices securely.
++ Last updated : 05/03/2021
-# What is Azure IoT Hub?
+# What is Azure IoT Hub?
-IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can use Azure IoT Hub to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. You can connect virtually any device to IoT Hub.
+IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communications in both directions between an IoT application and its attached devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT Hub.
-IoT Hub supports communications both from the device to the cloud and from the cloud to the device. IoT Hub supports multiple messaging patterns such as device-to-cloud telemetry, file upload from devices, and request-reply methods to control your devices from the cloud. IoT Hub monitoring helps you maintain the health of your solution by tracking events such as device creation, device failures, and device connections.
+Several messaging patterns are supported, including device-to-cloud telemetry, uploading files from devices, and request-reply methods to control your devices from the cloud. IoT Hub also supports monitoring to help you track device creation, device connectivity, and device failures.
-IoT Hub's capabilities help you build scalable, full-featured IoT solutions such as managing industrial equipment used in manufacturing, tracking valuable assets in healthcare, and monitoring office building usage.
+With IoT Hub's capabilities, you can build scalable, full-featured IoT solutions such as managing industrial equipment used in manufacturing, tracking valuable assets in healthcare, and monitoring office building usage.
## Scale your solution
-IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. For more information about scaling your IoT Hub, see [IoT Hub Scaling](iot-hub-scaling.md). To learn more about the multiple tiers of service offered by IoT Hub and how to best fit your scalability needs, check out the [pricing page](https://azure.microsoft.com/pricing/details/iot-hub/).
+IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. For more information about scaling your IoT Hub, see [IoT Hub Scaling](iot-hub-scaling.md). To learn more about the multiple tiers of service offered by IoT Hub and how to best fit your scalability needs, check out the [pricing page](https://azure.microsoft.com/pricing/details/iot-hub/).
## Secure your communications
-IoT Hub gives you a secure communication channel for your devices to send data.
+You can send data securely using IoT Hub.
* Per-device authentication enables each device to connect securely to IoT Hub and for each device to be managed securely.
IoT Hub gives you a secure communication channel for your devices to send data.
* The [IoT Hub Device Provisioning Service](../iot-dps/index.yml) automatically provisions devices to the right IoT hub when the device first boots up.
-* Multiple authentication types support a variety of device capabilities:
+* Multiple authentication types enable support of a variety of device capabilities:
- * SAS token-based authentication to quickly get started with your IoT solution.
+ * SAS token-based authentication allows you to quickly get started with your IoT solution.
- * Individual X.509 certificate authentication for secure, standards-based authentication.
+ * Individual X.509 certificate authentication is available for secure, standards-based authentication.
- * X.509 CA authentication for simple, standards-based enrollment.
+ * X.509 CA authentication can be used for simple, standards-based enrollment.
## Route device data
Built-in message routing functionality gives you flexibility to set up automatic rules-based message fan-out:
-* Use [message routing](iot-hub-devguide-messages-d2c.md) to control where your hub sends device telemetry.
+* [Message routing](iot-hub-devguide-messages-d2c.md) is used to control where your hub sends device telemetry.
* There is no additional cost to route messages to multiple endpoints.
-* No-code routing rules take the place of custom message dispatcher code.
+* Routing rules can be configured to automatically direct messages based on content in those messages without having to write any code, as in the sketch below.
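+As a sketch, a routing rule like the following could be set up with the Azure CLI; the hub, endpoint, and route names are placeholders:
+
+```azurecli-interactive
+# Route telemetry whose body reports a temperature above 50
+# to a custom endpoint, with no dispatcher code.
+az iot hub route create \
+  --hub-name MyIotHub \
+  --resource-group MyResourceGroup \
+  --route-name HighTemperature \
+  --source devicemessages \
+  --endpoint-name MyCustomEndpoint \
+  --condition 'temperature > 50' \
+  --enabled true
+```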
## Integrate with other services
You can manage your devices connected to IoT Hub with an array of built-in functionality.
## Make your solution highly available
-There's a 99.9% [Service Level Agreement for IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
+IoT Hub has a 99.9% [Service Level Agreement for IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
## Connect your devices
If your solution cannot use one of the supported protocols, you can extend IoT Hub to support custom protocols.
## Quotas and limits
-Each Azure subscription has default quota limits in place to prevent service abuse, and these limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub Quotas and Throttling](iot-hub-devguide-quotas-throttling.md). For more details on quota limits, see one of the following articles:
+Each Azure subscription has default quota limits in place to prevent service abuse. These limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub Quotas and Throttling](iot-hub-devguide-quotas-throttling.md). For more details on quota limits, see one of the following articles:
* [Azure subscription service limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
while(true)
```
> [!NOTE]
-> If your storage account has firewall configurations that restrict IoT Hub's connectivity, consider using [Microsoft trusted first party exception](./virtual-network-support.md#egress-connectivity-to-storage-account-endpoints-for-routing) (available in select regions for IoT hubs with managed service identity).
+> If your storage account has firewall configurations that restrict IoT Hub's connectivity, consider using [Microsoft trusted first party exception](./virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources) (available in select regions for IoT hubs with managed service identity).
## Device import/export job limits
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-file-upload.md
# Upload files with IoT Hub
-As detailed in the [IoT Hub endpoints](iot-hub-devguide-endpoints.md) article, a device can start a file upload by sending a notification through a device-facing endpoint (**/devices/{deviceId}/files**). When a device notifies IoT Hub that an upload is complete, IoT Hub sends a file upload notification message through the **/messages/servicebound/filenotifications** service-facing endpoint.
+There are many scenarios in which you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub readily accepts. For example:
+* Large image files
+* Video files
+* Vibration data sampled at high frequency
+* Some form of preprocessed data
-Instead of brokering messages through IoT Hub itself, IoT Hub instead acts as a dispatcher to an associated Azure Storage account. A device requests a storage token from IoT Hub that is specific to the file the device wishes to upload. The device uses the SAS URI to upload the file to storage, and when the upload is complete the device sends a notification of completion to IoT Hub. IoT Hub checks the file upload is complete and then adds a file upload notification message to the service-facing file notification endpoint.
-
-Before you upload a file to IoT Hub from a device, you must configure your hub by [associating an Azure Storage](iot-hub-devguide-file-upload.md#associate-an-azure-storage-account-with-iot-hub) account to it.
-
-Your device can then [initialize an upload](iot-hub-devguide-file-upload.md#initialize-a-file-upload) and then [notify IoT hub](iot-hub-devguide-file-upload.md#notify-iot-hub-of-a-completed-file-upload) when the upload completes. Optionally, when a device notifies IoT Hub that the upload is complete, the service can generate a [notification message](iot-hub-devguide-file-upload.md#file-upload-notifications).
+When you need to upload such files from a device, you can still use the security and reliability of IoT Hub. Instead of brokering messages through IoT Hub itself, however, IoT Hub acts as a dispatcher to an associated Azure Storage account. A device requests a storage token from IoT Hub that is specific to the file the device wishes to upload. The device uses the SAS URI to upload the file to storage, and when the upload is complete the device sends a notification of completion to IoT Hub. IoT Hub checks that the file upload is complete.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)]
### When to use
-Use file upload to send media files and large telemetry batches uploaded by intermittently connected devices or compressed to save bandwidth.
-
-Refer to [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md) if in doubt between using reported properties, device-to-cloud messages, or file upload.
+Use file upload to send media files and large telemetry batches uploaded by intermittently connected devices or compressed to save bandwidth. Refer to [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md) if in doubt between using reported properties, device-to-cloud messages, or file upload.
## Associate an Azure Storage account with IoT Hub
-To use the file upload functionality, you must first link an Azure Storage account to the IoT Hub. You can complete this task either through the Azure portal, or programmatically through the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource). Once you've associated an Azure Storage account with your IoT Hub, the service returns a SAS URI to a device when the device starts a file upload request.
+You must have an Azure Storage account associated with your IoT hub.
+
+To learn how to create one using the portal, see [Create a storage account](../storage/common/storage-account-create.md).
+
+You can also create one programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource).
+
+When you associate an Azure Storage account with an IoT hub, the IoT hub generates a SAS URI. A device can use this SAS URI to securely upload a file to a blob container.
+
+## Create a container
-The [Upload files from your device to the cloud with IoT Hub](iot-hub-csharp-csharp-file-upload.md) how-to guides provide a complete walkthrough of the file upload process. These how-to guides show you how to use the Azure portal to associate a storage account with an IoT hub.
+ To create a blob container through the portal:
+
+1. In the left pane of your storage account, under **Data Storage**, select **Containers**.
+1. In the Container blade, select **+ Container**.
+1. In the **New container** pane that opens, give your container a name and select **Create**.
+
+After creating a container, follow the instructions in [Configure file uploads using the Azure portal](iot-hub-configure-file-upload.md). Make sure that a blob container is associated with your IoT hub and that file notifications are enabled.
+
+You can also use the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource) to create a container associated with the storage for your IoT Hub.
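+As another alternative, a minimal Azure CLI sketch covering both steps; the storage account, container, and hub names, and the connection string, are placeholders:
+
+```azurecli-interactive
+# Create a blob container in an existing storage account.
+az storage container create \
+  --account-name mystorageaccount \
+  --name mycontainer \
+  --auth-mode login
+
+# Link the container to the IoT hub and enable file upload notifications.
+az iot hub update \
+  --name MyIotHub \
+  --fileupload-storage-connectionstring '<storage-account-connection-string>' \
+  --fileupload-storage-container-name mycontainer \
+  --fileupload-notifications true
+```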
+
+## File upload using an SDK
+
+The following how-to guides provide complete walkthroughs of the file upload process in a variety of SDK languages. These guides show you how to use the Azure portal to associate a storage account with an IoT hub. They also contain code snippets or refer to samples that guide you through the upload process.
+
+* [.NET](iot-hub-csharp-csharp-file-upload.md)
+* [Java](iot-hub-java-java-file-upload.md)
+* [Node.js](iot-hub-node-node-file-upload.md)
+* [Python](iot-hub-python-python-file-upload.md)
> [!NOTE]
> The [Azure IoT SDKs](iot-hub-devguide-sdks.md) automatically handle retrieving the shared access signature URI, uploading the file, and notifying IoT Hub of a completed upload. If a firewall blocks access to the Blob Storage endpoint but allows access to the IoT Hub endpoint, the file upload process fails and shows the following error for the IoT C# device SDK:
The [Upload files from your device to the cloud with IoT Hub](iot-hub-csharp-csh
>
-## Initialize a file upload
-IoT Hub has an endpoint specifically for devices to request a SAS URI for storage to upload a file. To start the file upload process, the device sends a POST request to `{iot hub}.azure-devices.net/devices/{deviceId}/files` with the following JSON body:
+## Initialize a file upload (REST)
+
+You can use REST APIs rather than one of the SDKs to upload a file. IoT Hub has an endpoint specifically for devices to request a SAS URI for storage to upload a file. To start the file upload process, the device sends a POST request to `{iot hub}.azure-devices.net/devices/{deviceId}/files` with the following JSON body:
```json
{
  "blobName": "{name of the file for which a SAS URI will be generated}"
}
```
IoT Hub has two REST endpoints to support file upload, one to get the SAS URI for the file upload and one to notify IoT Hub of a completed upload.
* A correlation ID to be used once the upload is completed.
-## Notify IoT Hub of a completed file upload
+## Notify IoT Hub of a completed file upload (REST)
The device uploads the file to storage using the Azure Storage SDKs. When the upload is complete, the device sends a POST request to `{iot hub}.azure-devices.net/devices/{deviceId}/files/notifications` with the following JSON body:
The value of `isSuccess` is a Boolean that indicates whether the file was uploaded successfully.
The following reference topics provide you with more information about uploading files from a device.
-## File upload notifications
+### File upload notifications
Optionally, when a device notifies IoT Hub that an upload is complete, IoT Hub generates a notification message. This message contains the name and storage location of the file.
As explained in [Endpoints](iot-hub-devguide-endpoints.md), IoT Hub delivers file upload notifications through a service-facing endpoint.
}
```
-## File upload notification configuration options
+### File upload notification configuration options
Each IoT hub has the following configuration options for file upload notifications:
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-identity-registry.md
Title: Understand the Azure IoT Hub identity registry | Microsoft Docs description: Developer guide - description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk. - Previously updated : 08/29/2018 Last updated : 05/06/2021
All these operations can use optimistic concurrency, as specified in [RFC7232](h
An IoT Hub identity registry: * Does not contain any application metadata.
-* Can be accessed like a dictionary, by using the **deviceId** or **moduleId** as the key.
-* Does not support expressive queries.
-
-An IoT solution typically has a separate solution-specific store that contains application-specific metadata. For example, the solution-specific store in a smart building solution would record the room in which a temperature sensor is deployed.
> [!IMPORTANT] > Only use the identity registry for device management and provisioning operations. High throughput operations at run time should not depend on performing operations in the identity registry. For example, checking the connection state of a device before sending a command is not a supported pattern. Make sure to check the [throttling rates](iot-hub-devguide-quotas-throttling.md) for the identity registry, and the [device heartbeat](iot-hub-devguide-identity-registry.md#device-heartbeat) pattern.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
An IoT hub has a default built-in endpoint (**messages/events**) that is compatible with Event Hubs.
Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints.
-If your custom endpoint has firewall configurations, consider using the Microsoft trusted first party exception, to give your IoT Hub access to the specific endpoint - [Azure Storage](./virtual-network-support.md#egress-connectivity-to-storage-account-endpoints-for-routing), [Azure Event Hubs](./virtual-network-support.md#egress-connectivity-to-event-hubs-endpoints-for-routing) and [Azure Service Bus](./virtual-network-support.md#egress-connectivity-to-service-bus-endpoints-for-routing). This is available in select regions for IoT Hubs with [managed service identity](./virtual-network-support.md).
+If your custom endpoint has firewall configurations, consider using the [Microsoft trusted first party exception](./virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
IoT Hub currently supports the following endpoints:
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Last updated 04/05/2021-+ # Reference - IoT Hub quotas and throttling
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
+
+ Title: Azure IoT Hub Managed Identity | Microsoft Docs
+description: How to use managed identities to allow egress connectivity from your IoT Hub to other Azure resources.
++++ Last updated : 05/11/2021+++
+# IoT Hub support for Managed Identities
+
+Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. Using a managed identity eliminates the need for developers to manage credentials. There are two types of managed identities: system-assigned and user-assigned. IoT Hub supports both.
+
+In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for features such as [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). In this article, you learn how to use system-assigned and user-assigned managed identities in your IoT Hub for different functionalities.
++
+## Prerequisites
+1. Read the documentation of [managed identities for Azure resources](./../active-directory/managed-identities-azure-resources/overview.md) to understand the differences between system-assigned and user-assigned managed identity.
+
+2. If you don't have an IoT Hub, [create an IoT Hub](iot-hub-create-through-portal.md) before continuing.
++
+## System-assigned managed identity
+
+### Add and remove a system-assigned managed identity in Azure portal
+1. Sign in to the Azure portal and navigate to your desired IoT Hub.
+2. Navigate to **Identity** in your IoT Hub portal.
+3. Under the **System-assigned** tab, select **On** and click **Save**.
+4. To remove the system-assigned managed identity from an IoT Hub, select **Off** and click **Save**.
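+If your Azure CLI version includes the `az iot hub identity` commands, the equivalent can be sketched from the command line; the hub and resource group names are placeholders:
+
+```azurecli-interactive
+# Enable the system-assigned managed identity on an existing hub.
+az iot hub identity assign \
+  --name MyIotHub \
+  --resource-group MyResourceGroup \
+  --system-assigned
+
+# Disable it again.
+az iot hub identity remove \
+  --name MyIotHub \
+  --resource-group MyResourceGroup \
+  --system-assigned
+```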
++
+### Enable managed identity at hub creation time using ARM template
+
+To enable the system-assigned managed identity in your IoT hub at resource provisioning time, use the ARM template below. This ARM template has two required resources, and they both need to be deployed before creating other resources like `Microsoft.Devices/IotHubs/eventHubEndpoints/ConsumerGroups`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Devices/IotHubs",
+ "apiVersion": "2020-03-01",
+ "name": "<provide-a-valid-resource-name>",
+ "location": "<any-of-supported-regions>",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "sku": {
+ "name": "<your-hubs-SKU-name>",
+ "tier": "<your-hubs-SKU-tier>",
+ "capacity": 1
+ }
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2018-02-01",
+ "name": "createIotHub",
+ "dependsOn": [
+ "[resourceId('Microsoft.Devices/IotHubs', '<provide-a-valid-resource-name>')]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "0.9.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Devices/IotHubs",
+ "apiVersion": "2020-03-01",
+ "name": "<provide-a-valid-resource-name>",
+ "location": "<any-of-supported-regions>",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "sku": {
+ "name": "<your-hubs-SKU-name>",
+ "tier": "<your-hubs-SKU-tier>",
+ "capacity": 1
+ }
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+After substituting the values for your resource `name`, `location`, `SKU.name` and `SKU.tier`, you can use Azure CLI to deploy the resource in an existing resource group using:
+
+```azurecli-interactive
+az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json>
+```
+
+After the resource is created, you can retrieve the managed service identity assigned to your hub using Azure CLI:
+
+```azurecli-interactive
+az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name>
+```
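+
+To return only the identity block instead of the full resource JSON, you can append a JMESPath query using the CLI's standard `--query` parameter:
+
+```azurecli-interactive
+az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name> --query identity
+```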
+## User-assigned managed identity
+In this section, you learn how to add and remove a user-assigned managed identity from an IoT Hub using the Azure portal.
+1. First you need to create a user-assigned managed identity as a standalone resource. You can follow the instructions [here](./../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) to create a user-assigned managed identity.
+2. Go to your IoT Hub and navigate to **Identity** in the portal.
+3. Under the **User-Assigned** tab, click **Add user-assigned managed identity**. Choose the user-assigned managed identity you want to add to IoT Hub and then click **Select**.
+4. To remove a user-assigned identity from an IoT Hub, choose the user-assigned identity you want to remove, and click the **Remove** button. Note that this only removes it from IoT Hub; the removal does not delete the user-assigned identity as a resource. To delete the user-assigned identity as a resource, follow the instructions [here](./../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#delete-a-user-assigned-managed-identity).
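+
+If you prefer scripting, the user-assigned identity itself can be created ahead of time with Azure CLI (a minimal sketch; the identity name is a placeholder):
+
+```azurecli-interactive
+# Create a standalone user-assigned managed identity
+az identity create --name <identity-name> --resource-group <resource-group-name>
+```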
+## Egress connectivity from IoT Hub to other Azure resources
+In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). You can choose which managed identity to use for each egress connection from IoT Hub to customer-owned endpoints, including storage accounts, event hubs, and service bus endpoints.
+
+### Message routing
+In this section, we use [message routing](iot-hub-devguide-messages-d2c.md) to an event hub custom endpoint as an example. The same steps apply to other routing custom endpoints.
+
+1. First, go to your event hub in the Azure portal to assign the managed identity the right access. In your event hub, navigate to the **Access control (IAM)** tab and click **Add**, then **Add a role assignment**.
+2. Select **Event Hubs Data Sender** as the **Role**.
+
+> [!NOTE]
+> For a storage account, select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues)) as the **Role**. For service bus, select **Service bus Data Sender** as the **Role**.
+
+3. For user-assigned, choose **User-assigned managed identity** under **Assign access to**. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
+
+4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT Hub's resource name in the drop-down list. Click **Save**.
+
+If you need to restrict the connectivity to your custom endpoint through a VNet, you need to turn on the trusted Microsoft first party exception to give your IoT Hub access to the specific endpoint. For example, if you're adding an event hub custom endpoint, navigate to the **Firewalls and virtual networks** tab in your event hub and enable the **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access event hubs**. Click the **Save** button. This also applies to storage accounts and service bus. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md).
+
+> [!NOTE]
+> You need to complete the above steps to assign the managed identity the right access before adding the event hub as a custom endpoint in IoT Hub. Wait a few minutes for the role assignment to propagate.
+
+5. Next, go to your IoT Hub. In your hub, navigate to **Message Routing**, then click **Custom endpoints**. Click **Add** and choose the type of endpoint you would like to use. In this section, we use an event hub as the example.
+6. At the bottom of the page, choose your preferred **Authentication type**. In this section, we use **User-Assigned** as the example. In the dropdown, select the preferred user-assigned managed identity, then click **Create**.
+7. The custom endpoint is now created.
+8. After creation, you can still change the authentication type. Select the custom endpoint whose authentication type you want to change, then click **Change authentication type**.
+9. Choose the new authentication type for this endpoint, then click **Save**.
+
+### File Upload
+IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices to upload files to a customer-owned storage account. To allow the file upload to function, IoT Hub needs to have connectivity to the storage account. Similar to message routing, you can pick the preferred authentication type and managed identity for IoT Hub egress connectivity to your Azure Storage account.
+
+1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
+2. Select **Storage Blob Data Contributor** (*not* Contributor or Storage Account Contributor) as the **Role**.
+3. For user-assigned, choose **User-assigned managed identity** under **Assign access to**. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
+4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT Hub's resource name in the drop-down list. Click **Save**.
+
+If you need to restrict the connectivity to your storage account through a VNet, you need to turn on the trusted Microsoft first party exception to give your IoT Hub access to the storage account. On your storage account resource page, navigate to the **Firewalls and virtual networks** tab and enable the **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md).
+> [!NOTE]
+> You need to complete the above steps to assign the managed identity the right access before saving the storage account in IoT Hub for file upload using the managed identity. Wait a few minutes for the role assignment to propagate.
+
+5. On your IoT Hub's resource page, navigate to the **File upload** tab.
+6. On the page that shows up, select the container that you intend to use in your blob storage, and configure the **File notification settings**, **SAS TTL**, **Default TTL**, and **Maximum delivery count** as desired. Choose the preferred authentication type, and click **Save**.
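+
+If you configure your hub from the command line, the file upload authentication type can also be set with Azure CLI. This is a minimal sketch assuming a recent CLI version; verify the parameter name with `az iot hub update --help`:
+
+```azurecli-interactive
+# Switch the file upload storage authentication to the hub's managed identity
+az iot hub update --name <iot-hub-resource-name> --fileupload-storage-auth-type identityBased
+```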
+### Bulk device import/export
+
+IoT Hub supports the functionality to [import and export devices](iot-hub-bulk-identity-mgmt.md) in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account.
+
+1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
+2. Select **Storage Blob Data Contributor** (*not* Contributor or Storage Account Contributor) as the **Role**.
+3. For user-assigned, choose **User-assigned managed identity** under **Assign access to**. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
+4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT Hub's resource name in the drop-down list. Click **Save**.
+### Using the REST API or SDK for import and export jobs
+
+You can now use the Azure IoT REST APIs to create import and export jobs. You will need to provide the following properties in the request body (see the example body after this list):
+1. storageAuthenticationType - Set this value to 'identityBased'.
+1. inputBlobContainerUri - Used only in import jobs.
+1. outputBlobContainerUri - Used for both import and export jobs.
+1. identity - The managed identity to use.
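+
+For illustration, the body of an export job request using a user-assigned identity might look like the following. This is a sketch assembled from the properties above; the job-creation endpoint and `api-version` query parameter are not shown:
+
+```json
+{
+  "type": "export",
+  "outputBlobContainerUri": "<output container URI>",
+  "storageAuthenticationType": "identityBased",
+  "identity": {
+    "userAssignedIdentity": "<resource ID of user assigned managed identity>"
+  }
+}
+```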
+Azure IoT Hub SDKs also support this functionality in the service client's registry manager. The following code snippets show how to initiate an import job or an export job using the C# SDK.
+
+**C# code snippet**
+
+```csharp
+ // Create an export job
+ // see note below
+
+ using RegistryManager srcRegistryManager = RegistryManager.CreateFromConnectionString(hubConnectionString);
+
+ JobProperties jobProperties = new JobProperties
+ {
+ OutputBlobContainerUri = blobContainerUri,
+ StorageAuthenticationType = StorageAuthenticationType.IdentityBased,
+ Identity = new ManagedIdentity
+ {
+ UserAssignedIdentity = "<resource ID of user assigned managed identity>"
+ }
+ };
+
+ JobProperties jobResult = await srcRegistryManager
+ .ExportDevicesAsync(jobProperties);
+```
+
+```csharp
+ // Create an import job
+ // see note below
+
+ using RegistryManager destRegistryManager = RegistryManager.CreateFromConnectionString(hubConnectionString);
+
+ JobProperties jobProperties = new JobProperties
+ {
+ InputBlobContainerUri = blobContainerUri,
+ OutputBlobContainerUri = blobContainerUri,
+ StorageAuthenticationType = StorageAuthenticationType.IdentityBased,
+ Identity = new ManagedIdentity
+ {
+ UserAssignedIdentity = "<resource ID of user assigned managed identity>"
+ }
+ };
+
+ JobProperties jobResult = await destRegistryManager
+ .ImportDevicesAsync(jobProperties);
+```
+
+**Python code snippet**
+
+```python
+# The imports below are assumptions based on the azure-iot-hub service SDK package layout
+from azure.iot.hub import IoTHubJobManager
+from azure.iot.hub.models import JobProperties, ManagedIdentity
+
+# see note below
+iothub_job_manager = IoTHubJobManager("<IoT Hub connection string>")
+
+# Create an import job
+result = iothub_job_manager.create_import_export_job(JobProperties(
+ type="import",
+ input_blob_container_uri="<input container URI>",
+ output_blob_container_uri="<output container URI>",
+ storage_authentication_type="identityBased",
+ identity=ManagedIdentity(
+ user_assigned_identity="<resource ID of user assigned managed identity>"
+ )
+))
+
+# Create an export job
+result = iothub_job_manager.create_import_export_job(JobProperties(
+ type="export",
+ output_blob_container_uri="<output container URI>",
+ storage_authentication_type="identityBased",
+ exclude_keys_in_export=True,
+ identity=ManagedIdentity(
+ user_assigned_identity="<resource ID of user assigned managed identity>"
+ )
+))
+```
+
+> [!NOTE]
+> 1. If **storageAuthenticationType** is set to **identityBased** and the **userAssignedIdentity** property is not **null**, the jobs will use the specified user-assigned managed identity.
+> 1. If the IoT Hub is not configured with the user-assigned managed identity specified in **userAssignedIdentity**, the job will fail.
+> 1. If **storageAuthenticationType** is set to **identityBased** and the **userAssignedIdentity** property is **null**, the jobs will use the system-assigned identity.
+> 1. If the IoT Hub is not configured with a system-assigned managed identity, the job will fail.
+> 1. If **storageAuthenticationType** is set to **identityBased** and neither a **user-assigned** nor a **system-assigned** managed identity is configured on the hub, the job will fail.
+
+## SDK samples
+- [.NET SDK sample](https://aka.ms/iothubmsicsharpsample)
+- [Java SDK sample](https://aka.ms/iothubmsijavasample)
+- [Python SDK sample](https://aka.ms/iothubmsipythonsample)
+- Node.js SDK samples: [bulk device import](https://aka.ms/iothubmsinodesampleimport), [bulk device export](https://aka.ms/iothubmsinodesampleexport)
+
+## Next steps
+
+Use the links below to learn more about IoT Hub features:
+
+* [Message routing](./iot-hub-devguide-messages-d2c.md)
+* [File upload](./iot-hub-devguide-file-upload.md)
+* [Bulk device import/export](./iot-hub-bulk-identity-mgmt.md)
iot-hub Iot Hub Troubleshoot Error 403006 Devicemaximumactivefileuploadlimitexceeded https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-error-403006-devicemaximumactivefileuploadlimitexceeded.md
You can easily exceed the limit if your device doesn't notify IoT Hub when file
## Solution
-Ensure the device can promptly [notify IoT Hub file upload completion](./iot-hub-devguide-file-upload.md#notify-iot-hub-of-a-completed-file-upload). Then, try [reducing the SAS token TTL for file upload configuration](iot-hub-configure-file-upload.md).
+Ensure the device can promptly [notify IoT Hub file upload completion](./iot-hub-devguide-file-upload.md#notify-iot-hub-of-a-completed-file-upload-rest). Then, try [reducing the SAS token TTL for file upload configuration](iot-hub-configure-file-upload.md).
## Next steps
iot-hub Quickstart Control Device Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-python.md
A device must be registered with your IoT hub before it can connect. In this qui
```azurecli-interactive
az iot hub connection-string show \
    --policy-name service \
- --name {YourIoTHubName} \
+ --hub-name {YourIoTHubName} \
    --output table
```
iot-hub Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/virtual-network-support.md
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Egress connectivity from IoT Hub to other Azure resources
-IoT Hub can connect to your Azure blob storage, event hub, service bus resources for [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md) over the resources' public endpoint. Binding your resource to a VNet blocks connectivity to the resource by default. As a result, this configuration prevents IoT Hub's from working sending data to your resources. To fix this issue, enable connectivity from your IoT Hub resource to your storage account, event hub, or service bus resources via the **trusted Microsoft service** option.
-
-### Turn on managed identity for IoT Hub
-
-To allow other services to find your IoT hub as a trusted Microsoft service, it must have a system-assigned managed identity.
-
-1. Navigate to **Identity** in your IoT Hub portal
-
-1. Under **Status**, select **On**, then click **Save**.
-
- :::image type="content" source="media/virtual-network-support/managed-identity.png" alt-text="Screenshot showing how to turn on managed identity for IoT Hub":::
-
-To use Azure CLI to turn on managed identity:
-
-```azurecli-interactive
-az iot hub update --name <iot-hub-resource-name> --set identity.type="SystemAssigned"
-```
-
-### Assign managed identity to your IoT Hub at creation time using ARM template
-
-To assign managed identity to your IoT hub at resource provisioning time, use the ARM template below. This ARM template has two required resources, and they both need to be deployed before creating other resources like `Microsoft.Devices/IotHubs/eventHubEndpoints/ConsumerGroups`.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.Devices/IotHubs",
- "apiVersion": "2020-03-01",
- "name": "<provide-a-valid-resource-name>",
- "location": "<any-of-supported-regions>",
- "identity": {
- "type": "SystemAssigned"
- },
- "sku": {
- "name": "<your-hubs-SKU-name>",
- "tier": "<your-hubs-SKU-tier>",
- "capacity": 1
- }
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2018-02-01",
- "name": "createIotHub",
- "dependsOn": [
- "[resourceId('Microsoft.Devices/IotHubs', '<provide-a-valid-resource-name>')]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "0.9.0.0",
- "resources": [
- {
- "type": "Microsoft.Devices/IotHubs",
- "apiVersion": "2020-03-01",
- "name": "<provide-a-valid-resource-name>",
- "location": "<any-of-supported-regions>",
- "identity": {
- "type": "SystemAssigned"
- },
- "sku": {
- "name": "<your-hubs-SKU-name>",
- "tier": "<your-hubs-SKU-tier>",
- "capacity": 1
- }
- }
- ]
- }
- }
- }
- ]
-}
-```
-
-After substituting the values for your resource `name`, `location`, `SKU.name` and `SKU.tier`, you can use Azure CLI to deploy the resource in an existing resource group using:
-
-```azurecli-interactive
-az deployment group create --name <deployment-name> --resource-group <resource-group-name> --template-file <template-file.json>
-```
-
-After the resource is created, you can retrieve the managed service identity assigned to your hub using Azure CLI:
-
-```azurecli-interactive
-az resource show --resource-type Microsoft.Devices/IotHubs --name <iot-hub-resource-name> --resource-group <resource-group-name>
-```
-
-### Pricing for managed identity
-
-Trusted Microsoft first party services exception feature is free of charge. Charges for the provisioned storage accounts, event hubs, or service bus resources apply separately.
-
-### Egress connectivity to storage account endpoints for routing
-
-IoT Hub can route messages to a customer-owned storage account. To allow the routing functionality to access a storage account while firewall restrictions are in place, your hub needs to use a managed identity to access the storage account. First your hub will need a [managed identity](#turn-on-managed-identity-for-iot-hub). Once a managed identity is provisioned, follow the steps below to give Azure RBAC permission to your hub's resource identity to access your storage account.
-
-1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-
-2. Select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues)) as **role**, **Azure AD user, group, or service principal** as **Assigning access to** and select your IoT Hub's resource name in the drop-down list. Click the **Save** button.
-
-3. Navigate to the **Firewalls and virtual networks** tab in your storage account and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button.
-
-4. On your IoT Hub's resource page, navigate to **Message routing** tab.
-
-5. Navigate to **Custom endpoints** section and click **Add**. Select **Storage** as the endpoint type.
-
-6. On the page that shows up, provide a name for your endpoint, select the container that you intend to use in your blob storage, provide encoding, and file name format. Select **Identity-based** as the **Authentication type** to your storage endpoint. Click the **Create** button.
-
-Now your custom storage endpoint is set up to use your hub's system assigned identity, and it has permission to access your storage resource despite its firewall restrictions. You can now use this endpoint to set up a routing rule.
-
-### Egress connectivity to event hubs endpoints for routing
-
-IoT Hub can be configured to route messages to a customer-owned event hubs namespace. To allow the routing functionality to access an event hubs resource while firewall restrictions are in place, your IoT Hub needs to use a managed identity to access the event hubs resource. First your hub will need a managed identity. Once a managed identity is created, follow the steps below to give Azure RBAC permission to your hub's resource identity to access your event hubs.
-
-1. In the Azure portal, navigate to your event hubs **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-
-2. Select **Event Hubs Data Sender** as **role**, **Azure AD user, group, or service principal** as **Assigning access to** and select your IoT Hub's resource name in the drop-down list. Click the **Save** button.
-
-3. Navigate to the **Firewalls and virtual networks** tab in your event hubs and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access event hubs**. Click the **Save** button.
-
-4. On your IoT Hub's resource page, navigate to **Message routing** tab.
-
-5. Navigate to **Custom endpoints** section and click **Add**. Select **Event hubs** as the endpoint type.
-
-6. On the page that shows up, provide a name for your endpoint, select your event hubs namespace and instance. Select **Identity-based** as the **Authentication type**, and click the **Create** button.
-
-Now your custom event hubs endpoint is set up to use your hub's system assigned identity, and it has permission to access your event hubs resource despite its firewall restrictions. You can now use this endpoint to set up a routing rule.
-
-### Egress connectivity to service bus endpoints for routing
-
-IoT Hub can be configured to route messages to a customer-owned service bus namespace. To allow the routing functionality to access a service bus resource while firewall restrictions are in place, your IoT Hub needs to use a managed identity to access the service bus resource. First your hub will need a managed identity. Once a managed identity is provisioned, follow the steps below to give Azure RBAC permission to your hub's resource identity to access your service bus.
-
-1. In the Azure portal, navigate to your service bus' **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-
-2. Select **Service bus Data Sender** as **role**, **Azure AD user, group, or service principal** as **Assigning access to** and select your IoT Hub's resource name in the drop-down list. Click the **Save** button.
-
-3. Navigate to the **Firewalls and virtual networks** tab in your service bus and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this service bus**. Click the **Save** button.
-
-4. On your IoT Hub's resource page, navigate to **Message routing** tab.
-
-5. Navigate to **Custom endpoints** section and click **Add**. Select **Service bus queue** or **Service Bus topic** (as applicable) as the endpoint type.
-
-6. On the page that shows up, provide a name for your endpoint, select your service bus' namespace and queue or topic (as applicable). Select **Identity-based** as the **Authentication type**, and click the **Create** button.
-
-Now your custom service bus endpoint is set up to use your hub's system assigned identity, and it has permission to access your service bus resource despite its firewall restrictions. You can now use this endpoint to set up a routing rule.
-
-### Egress connectivity to storage accounts for file upload
-
-IoT Hub's file upload feature allows devices to upload files to a customer-owned storage account. To allow the file upload to function, both devices and IoT Hub need to have connectivity to the storage account. If firewall restrictions are in place on the storage account, your devices need to use any of the supported storage account's mechanism (including [private endpoints](../private-link/tutorial-private-endpoint-storage-portal.md), [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), or [direct firewall configuration](../storage/common/storage-network-security.md)) to gain connectivity. Similarly, if firewall restrictions are in place on the storage account, IoT Hub needs to be configured to access the storage resource via the trusted Microsoft services exception. For this purpose, your IoT Hub must have a managed identity. Once a managed identity is provisioned, follow the steps below to give Azure RBAC permission to your hub's resource identity to access your storage account.
--
-1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-
-2. Select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues)) as **role**, **Azure AD user, group, or service principal** as **Assigning access to** and select your IoT Hub's resource name in the drop-down list. Click the **Save** button.
-
-3. Navigate to the **Firewalls and virtual networks** tab in your storage account and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button.
-
-4. On your IoT Hub's resource page, navigate to **File upload** tab.
-
-5. On the page that shows up, select the container that you intend to use in your blob storage, configure the **File notification settings**, **SAS TTL**, **Default TTL**, and **Maximum delivery count** as desired. Select **Identity-based** as the **Authentication type** to your storage endpoint. Click the **Create** button. If you get an error at this step, temporarily set your storage account to allow access from **All networks**, then try again. You can configure firewall on the storage account once the File upload configuration is complete.
-
-Now your storage endpoint for file upload is set up to use your hub's system assigned identity, and it has permission to access your storage resource despite its firewall restrictions.
-
-### Egress connectivity to storage accounts for bulk device import/export
-
-IoT Hub supports the functionality to [import/export](./iot-hub-bulk-identity-mgmt.md) devices' information in bulk from/to a customer-provided storage blob. To allow bulk import/export feature to function, both devices and IoT Hub need to have connectivity to the storage account.
-
-This functionality requires connectivity from IoT Hub to the storage account. To access a service bus resource while firewall restrictions are in place, your IoT Hub needs to have a managed identity. Once a managed identity is provisioned, follow the steps below to give Azure RBAC permission to your hub's resource identity to access your service bus.
-
-1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-
-2. Select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues)) as **role**, **Azure AD user, group, or service principal** as **Assigning access to** and select your IoT Hub's resource name in the drop-down list. Click the **Save** button.
-
-3. Navigate to the **Firewalls and virtual networks** tab in your storage account and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button.
-
-You can now use the Azure IoT REST APIs for [creating import export jobs](/rest/api/iothub/service/jobs/getimportexportjobs) for information on how to use the bulk import/export functionality. You will need to provide the `storageAuthenticationType="identityBased"` in your request body and use `inputBlobContainerUri="https://..."` and `outputBlobContainerUri="https://..."` as the input and output URLs of your storage account, respectively.
-
-Azure IoT Hub SDKs also support this functionality in the service client's registry manager. The following code snippet shows how to initiate an import job or export job in using the C# SDK.
-
-```csharp
-// Call an import job on the IoT Hub
-JobProperties importJob =
-await registryManager.ImportDevicesAsync(
- JobProperties.CreateForImportJob(inputBlobContainerUri, outputBlobContainerUri, null, StorageAuthenticationType.IdentityBased),
- cancellationToken);
-
-// Call an export job on the IoT Hub to retrieve all devices
-JobProperties exportJob =
-await registryManager.ExportDevicesAsync(
- JobProperties.CreateForExportJob(outputBlobContainerUri, true, null, StorageAuthenticationType.IdentityBased),
- cancellationToken);
-```
-
-To use this version of the Azure IoT SDKs with virtual network support for C#, Java, and Node.js:
-
-1. Create an environment variable named `EnableStorageIdentity` and set its value to `1`.
-
-2. Download the SDK: [Java](https://aka.ms/vnetjavasdk) | [C#](https://aka.ms/vnetcsharpsdk) | [Node.js](https://aka.ms/vnetnodesdk)
-
-For Python, download our limited version from GitHub.
-
-1. Navigate to the [GitHub release page](https://aka.ms/vnetpythonsdk).
-
-2. Download the following file, which you'll find at the bottom of the release page under the header named **assets**.
- > *azure_iot_hub-2.2.0_limited-py2.py3-none-any.whl*
-
-3. Open a terminal and navigate to the folder with the downloaded file.
-
-4. Run the following command to install the Python Service SDK with support for virtual networks:
- > pip install ./azure_iot_hub-2.2.0_limited-py2.py3-none-any.whl
+IoT Hub can connect to your Azure blob storage, event hub, and service bus resources for [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md) over the resources' public endpoint. Binding your resource to a VNet blocks connectivity to the resource by default. As a result, this configuration prevents IoT Hub from sending data to your resources. To fix this issue, enable connectivity from your IoT Hub resource to your storage account, event hub, or service bus resources via the **trusted Microsoft service** option.
+To allow other services to find your IoT hub as a trusted Microsoft service, your hub must use a managed identity. Once a managed identity is provisioned, you need to grant the Azure RBAC permission to your hub's managed identity to access your custom endpoint. Follow the article [Managed identities support in IoT Hub](./iot-hub-managed-identity.md) to provision a managed identity with Azure RBAC permission, and to add the custom endpoint to your IoT Hub. If you have firewall configurations in place, make sure you turn on the trusted Microsoft first party exception to allow your IoT Hub access to the custom endpoint.
+### Pricing for trusted Microsoft service option
+The trusted Microsoft first party services exception feature is free of charge. Charges for the provisioned storage accounts, event hubs, or service bus resources apply separately.
## Next steps

Use the links below to learn more about IoT Hub features:
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-jupyter-notebook.md
Configure **Virtual machine size** and **Virtual machine image** settings as sho
| Virtual machine size | <p>The size you pick here depends on the workload you want to run:</p><ul><li>Small or Medium - good for a basic setup of accessing Jupyter Notebooks</li><li>Small GPU (Compute) - best suited for compute-intensive and network-intensive applications like Artificial Intelligence and Deep Learning</li></ul> |
| Virtual machine image | <p>Choose one of the following images based on your operating system needs:</p><ul><li>[Data Science Virtual Machine - Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019)</li><li>[Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview)</li></ul> |
+When you create a lab with the **Small GPU (Compute)** size, you have the option to [Install GPU drivers](./how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed). This option installs recent NVIDIA drivers and the Compute Unified Device Architecture (CUDA) toolkit, which are required to enable high-performance computing with the GPU. For more information, see the article [Set up a lab with GPU virtual machines](./how-to-setup-lab-gpu.md).
### Template virtual machine

Once you create a lab, a template VM will be created based on the virtual machine size and image you chose. You configure the template VM with everything you want to provide to your students for this class. To learn more, see [how to manage the template virtual machine](how-to-create-manage-template.md).
The Data Science VM images by default come with many of data science frameworks
- [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io/): A web application that allows data scientists to take raw data, run computations, and see the results all in the same environment. It will run locally in the template VM.
- [Visual Studio Code](https://code.visualstudio.com/): An integrated development environment (IDE) that provides a rich interactive experience when writing and testing a notebook. For more information, see [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support).
+If you are using the **Small GPU (Compute)** size, we recommend that you verify that the data science frameworks and libraries are properly set up to use the GPU. You may need to install a different version of the NVIDIA drivers and CUDA toolkit. For example, to validate that the GPU is configured for TensorFlow, you can connect to the template VM and run the following Python-TensorFlow code in Jupyter Notebooks:
+
+```python
+import tensorflow as tf
+from tensorflow.python.client import device_lib
+
+print(device_lib.list_local_devices())
+```
+
+If the output from the above code looks like the following, the GPU isn't configured for TensorFlow:
+
+```text
+[name: "/device:CPU:0"
+device_type: "CPU"
+memory_limit: 268435456
+locality {
+}
+incarnation: 15833696144144374634
+]
+```
+To properly configure the GPU, you should consult the framework's or library's documentation. Continuing with the above example, TensorFlow provides the following guidance:
+- [TensorFlow GPU Support](https://www.tensorflow.org/install/gpu)
+
+TensorFlow's guidance covers the required versions of the [NVIDIA drivers](https://www.nvidia.com/drivers) and the [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive), and also includes installing the [NVIDIA CUDA Deep Neural Network library (cuDNN)](https://developer.nvidia.com/cudnn).
+
+After you've followed TensorFlow's steps to configure the GPU, when you rerun the above code, you should see output similar to the following:
+
+```text
+[name: "/device:CPU:0"
+device_type: "CPU"
+memory_limit: 268435456
+locality {
+}
+incarnation: 15833696144144374634
+, name: "/device:GPU:0"
+device_type: "GPU"
+memory_limit: 11154792128
+locality {
+ bus_id: 1
+ links {
+ }
+}
+incarnation: 2659412736190423786
+physical_device_desc: "device: 0, name: NVIDIA Tesla K80, pci bus id: 0001:00:00.0, compute capability: 3.7"
+]
+```
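+
+As an aside, on TensorFlow 2.1 and later there is a shorter check for visible GPUs. This is an alternative to the `device_lib` call above, assuming a TensorFlow 2.x environment:
+
+```python
+import tensorflow as tf
+
+# Prints an empty list if no GPU is visible to TensorFlow
+print(tf.config.list_physical_devices('GPU'))
+```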
+
### Provide notebooks for the class

The next task is to provide students with notebooks that you want them to use. To provide your own notebooks, you can save notebooks locally on the template VM.
Now, to connect to the VM, follow these steps:
2. Enter the password to connect to the VM. (You may have to give X2Go permission to bypass your firewall to finish connecting.)
3. You should now see the graphical interface for your Ubuntu Data Science VM.

#### SSH tunnel to Jupyter server on the VM

Some students may want to connect directly from their local computer to the Jupyter server inside their VMs. The SSH protocol enables port forwarding between the local computer and a remote server (in our case, the student's lab VM), so that an application running on a certain port on the server is **tunneled** to the mapping port on the local computer. Students should follow these steps to SSH tunnel to the Jupyter server on their lab VMs:
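
The core of those steps is a single SSH port-forwarding command like the following (a sketch, assuming Jupyter listens on port 8888 on the lab VM and that the address and port come from the lab VM's SSH connection information):

```bash
# Forward local port 8888 to port 8888 on the lab VM;
# after connecting, open http://localhost:8888 in a local browser
ssh -L 8888:localhost:8888 -p <port> <username>@<vm-address>
```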
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-setup-lab-gpu.md
To take advantage of the GPU capabilities of your lab VMs, ensure that the appro
![Screenshot of the "New lab" showing the "Install GPU drivers" option](./media/how-to-setup-gpu/lab-gpu-drivers.png)
-As shown in the preceding image, this option is enabled by default, which ensures that the *latest* drivers are installed for the type of GPU and image that you selected.
-- When you select a *compute* GPU size, your lab VMs are powered by the [NVIDIA Tesla K80](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/Tesla-K80-BoardSpec-07317-001-v05.pdf) GPU. In this case, the latest [Compute Unified Device Architecture (CUDA)](http://developer.download.nvidia.com/compute/cuda/2_0/docs/CudaReferenceManual_2.0.pdf) drivers are installed, which enables high-performance computing.-- When you select a *visualization* GPU size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, the latest GRID drivers are installed, which enables the use of graphics-intensive applications.
+As shown in the preceding image, this option is enabled by default, which ensures that recently released drivers are installed for the type of GPU and image that you selected:
+- When you select a *compute* GPU size, your lab VMs are powered by the [NVIDIA Tesla K80](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-product-literature/Tesla-K80-BoardSpec-07317-001-v05.pdf) GPU. In this case, recent [Compute Unified Device Architecture (CUDA)](http://developer.download.nvidia.com/compute/cuda/2_0/docs/CudaReferenceManual_2.0.pdf) drivers are installed, which enables high-performance computing.
+- When you select a *visualization* GPU size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, recent GRID drivers are installed, which enables the use of graphics-intensive applications.
+
+> [!IMPORTANT]
+> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, the GPU drivers are already installed on the Azure marketplace's [Data Science image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install them as explained in the next section.
### Install the drivers manually
-You might need to install a driver version other than the latest version. This section shows how to manually install the appropriate drivers, depending on whether you're using a *compute* GPU or a *visualization* GPU.
+You might need to install a different version of the drivers than the version that Azure Lab Services installs for you. This section shows how to manually install the appropriate drivers, depending on whether you're using a *compute* GPU or a *visualization* GPU.
#### Install the compute GPU drivers
-To manually install drivers for the compute GPU size, do the following:
+To manually install drivers for the *compute* GPU size, do the following:
1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-classroom-labs.md), disable the **Install GPU drivers** setting.
To manually install drivers for the compute GPU size, do the following:
#### Install the visualization GPU drivers
-To manually install drivers for the visualization GPU size, do the following:
+To manually install drivers for the *visualization* GPU sizes, do the following:
1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-classroom-labs.md), disable the **Install GPU drivers** setting. 1. After your lab is created, connect to the template VM to install the appropriate drivers.
This section describes how to validate that your GPU drivers are properly instal
> [!IMPORTANT]
> The NVIDIA Control Panel settings can be accessed only for *visualization* GPUs. If you attempt to open the NVIDIA Control Panel for a compute GPU, you'll get the following error: "NVIDIA Display settings are not available. You are not currently using a display attached to an NVIDIA GPU." Similarly, the GPU performance information in Task Manager is provided only for visualization GPUs.
+Depending on your scenario, you may also need to do additional validation to ensure the GPU is properly configured. See the [Python and Jupyter Notebooks](./class-type-jupyter-notebook.md#template-virtual-machine) class type for an example where specific versions of the drivers are needed.
+
#### Linux images

Follow the instructions in the "Verify driver installation" section of [Install NVIDIA GPU drivers on N-series VMs running Linux](../virtual-machines/linux/n-series-driver-setup.md#verify-driver-installation).
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-at-scale.md
Title: Monitor delegated resources at scale description: Learn how to effectively use Azure Monitor Logs in a scalable way across the customer tenants you're managing. Previously updated : 02/11/2021 Last updated : 05/10/2021
We recommend creating these workspaces directly in the customer tenants. This wa
You can create a Log Analytics workspace by using the [Azure portal](../../azure-monitor/logs/quick-create-workspace.md), by using [Azure CLI](../../azure-monitor/logs/quick-create-workspace-cli.md), or by using [Azure PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md).

> [!IMPORTANT]
-> Even if all of the workspaces are created in the customer tenant, the Microsoft.Insights resource provider must also be registered on a subscription in the managing tenant.
+> Even if all of the workspaces are created in the customer tenant, the Microsoft.Insights resource provider must also be registered on a subscription in the managing tenant. If your managing tenant doesn't have a subscription, you'll need to create one, then [register the resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
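+
+For reference, registering the resource provider on a subscription takes a single CLI command (run it against a subscription in the managing tenant):
+
+```azurecli-interactive
+az provider register --namespace Microsoft.Insights
+```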
## Deploy policies that log data
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md
Title: Plan and manage costs
+ Title: Plan to manage costs
description: Plan and manage costs for Azure Machine Learning with cost analysis in Azure portal. Learn further cost-saving tips to lower your cost when building ML models.
Previously updated : 05/08/2020 Last updated : 05/07/2021
-# Plan and manage costs for Azure Machine Learning
+# Plan to manage costs for Azure Machine Learning
This article describes how to plan and manage costs for Azure Machine Learning. First, you use the Azure pricing calculator to help plan for costs before you add any resources. Next, as you add the Azure resources, review the estimated costs. Finally, use cost-saving tips as you train your model with managed Azure Machine Learning compute clusters.
When you train your machine learning models, use managed Azure Machine Learning
## Prerequisites
-Cost analysis supports different kinds of Azure account types. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for your Azure account.
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## Estimate costs before using Azure Machine Learning
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to estimate costs before you create the resources in an Azure Machine Learning account. On the left, select **AI + Machine Learning**, then select **Azure Machine Learning** to begin.
+- Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you create the resources in an Azure Machine Learning workspace.
+On the left, select **AI + Machine Learning**, then select **Azure Machine Learning** to begin.
The following screenshot shows the cost estimation by using the calculator:

As you add new resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
For more information, see [Azure Machine Learning pricing](https://azure.microso
Azure Machine Learning runs on Azure infrastructure that accrues costs along with Azure Machine Learning when you deploy a new resource. It's important to understand that additional infrastructure might accrue cost. You need to manage that cost when you make changes to deployed resources.

### Costs that typically accrue with Azure Machine Learning

When you create resources for an Azure Machine Learning workspace, resources for other Azure services are also created. They are:
When you create resources for an Azure Machine Learning workspace, resources for
### Costs might accrue after resource deletion
-When you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them.
+After you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them.
* Azure Container Registry * Azure Block Blob Storage
If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach
### Using Azure Prepayment credit with Azure Machine Learning
-You can pay for Azure Machine Learning charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure Machine Learning charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+
+## Review estimated costs in the Azure portal
+
+
+As you create compute resources for Azure Machine Learning, you see estimated costs.
+
+To create a *compute instance* and view the estimated price:
+
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com).
+1. On the left side, select **Compute**.
+1. On the top toolbar, select **+New**.
+1. Review the estimated price shown for each available virtual machine size.
+1. Finish creating the resource.
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Monitor costs
+
+As you use Azure resources with Azure Machine Learning, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as Azure Machine Learning use starts, costs are incurred and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view Azure Machine Learning costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+To view Azure Machine Learning costs in cost analysis:
+
+1. Sign in to the Azure portal.
+2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
+3. By default, costs for services are shown in the first donut chart. Select the area in the chart labeled Azure Machine Learning.
+
+Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
+To narrow costs for a single service, like Azure Machine Learning, select **Add filter** and then select **Service name**. Then, select **Azure Machine Learning**.
+
+Here's an example showing costs for just Azure Machine Learning.
+
+In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Azure Machine Learning costs by resource group are also shown. From here, you can explore costs on your own.
## Create budgets

You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
Azure Machine Learning Compute supports reserved instances inherently. If you pu
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
for env in envs:
> [!WARNING]
> Don't start your own environment name with the _AzureML_ prefix. This prefix is reserved for curated environments.
+To customize a curated environment, clone and rename the environment.
+```python
+from azureml.core import Environment
+
+env = Environment.get(workspace=ws, name="AzureML-Minimal")  # 'ws' is your Workspace object
+curated_clone = env.clone("customize_curated")
+```
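+
+Once cloned, the copy behaves like any user-managed environment and can be modified. A small sketch, assuming the `curated_clone` object from above:
+
+```python
+from azureml.core.conda_dependencies import CondaDependencies
+
+# Add a pip package to the cloned environment's dependencies
+deps = curated_clone.python.conda_dependencies
+deps.add_pip_package("scikit-learn")
+curated_clone.python.conda_dependencies = deps
+```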
+
### Use Conda dependencies or pip requirements files

You can create an environment from a Conda specification or a pip requirements file. Use the [`from_conda_specification()`](/python/api/azureml-core/azureml.core.environment.environment#from-conda-specification-name--file-path-) method or the [`from_pip_requirements()`](/python/api/azureml-core/azureml.core.environment.environment#from-pip-requirements-name--file-path-) method. In the method argument, include your environment name and the file path of the file that you want.
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/quickstart-create-resources.md
The workspace is the top-level resource for your machine learning activities, pr
## Create the workspace
-If you already have a workspace, skip this section and continue to [Explore the workspace](#studio).
+If you already have a workspace, skip this section and continue to [Create a compute instance](#instance).
If you don't yet have a workspace, create one now:
marketplace Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-solution-providers.md
Title: Cloud Solution Provider - Microsoft commercial marketplace
+ Title: Cloud Solution Provider - Microsoft commercial marketplace - Azure
description: Learn how to sell your offers through the Microsoft Cloud Solution Provider (CSP) program partner channel in the commercial marketplace.
The following offers are eligible to be opted in to be sold by partners in the C
## How to configure an offer
-Configure the CSP program opt-in setting when you create the offer in Partner Center. [Learn more about the changing publisher experience](https://www.microsoftpartnercommunity.com/t5/Azure-Marketplace-and-AppSource/Cloud-Marketplace-In-Partner-Center/m-p/9738#M293).
+Configure the CSP program opt-in setting when you create the offer in Partner Center.
### Partner Center opt-in
If you've opted into the CSP channel in Partner Center, publishers must enter a
## Next steps - Learn more about [Go-to-market services](https://partner.microsoft.com/reach-customers/gtm).-- Sign in to [Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create and configure your offer.
+- Sign in to [Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create and configure your offer.
media-services Create Streaming Locator Build Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/create-streaming-locator-build-url.md
private static async Task<IList<string>> GetStreamingUrlsAsync(
} ```
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/Program.cs)
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
## See also
media-services Job Download Results How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-download-results-how-to.md
private async static Task DownloadResults(IAzureMediaServicesClient client, stri
} ```
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/Program.cs)
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
## Next steps
media-services Monitor Media Services Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/monitor-media-services-data-reference.md
Media Services supports the following resource logs:
For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../../../azure-monitor/essentials/resource-logs-schema.md).
-## Key delivery log schema properties
+### Key delivery
These properties are specific to the key delivery log schema.
Properties of the key delivery requests schema.
## Next steps
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
See the latest available languages in the [Analyzing Video And Audio Files conce
The Standard Encoder now supports 8-bit HEVC (H.265) encoding. HEVC content can be delivered and packaged through the Dynamic Packager by using the 'hev1' format.
-A new .NET custom encoding with HEVC sample is available in the [media-services-v3-dotnet Git Hub repository](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPreset_HEVC).
+A new .NET custom encoding with HEVC sample is available in the [media-services-v3-dotnet GitHub repository](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_HEVC).
In addition to custom encoding, the following new built-in HEVC encoding presets are now available: - H265ContentAwareEncoding
See the official [Azure Updates announcement](https://azure.microsoft.com/update
In addition to the new added support for HEVC (H.265) encoding, the following features are now available in the 2020-05-01 version of the encoding API. - Multiple Input File stitching is now supported using the new **JobInputClip** support.
- - An example is available for .NET showing how to [stitch two assets together](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomStitchTwoAssets).
+ - An example is available for .NET showing how to [stitch two assets together](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_StitchTwoAssets).
- Audio track selection allows customers to select and map the incoming audio tracks and route them to the output for encoding - See the [REST API OpenAPI for details](https://github.com/Azure/azure-rest-api-specs/blob/8d15dc681b081cca983e4d67fbf6441841d94ce4/specification/mediaservices/resource-manager/Microsoft.Media/stable/2020-05-01/Encoding.json#L385) on **AudioTrackDescriptor** and track selection - Track selection for encoding – allows customers to choose tracks from an ABR source file or live archive that has multiple bitrate tracks. This is extremely helpful for generating MP4s from the live event archive files.
Check out the [Azure Media Services community](media-services-community.md) arti
## Next steps - [Overview](media-services-overview.md)-- [Media Services v2 release notes](../previous/media-services-release-notes.md)
+- [Media Services v2 release notes](../previous/media-services-release-notes.md)
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
You'll find description and links to the samples you may be looking for in each
| Folder | Description | |-|-|
-| [VideoEncoding/EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESPredefinedPreset)|How to submit a job using a built-in preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
-| [VideoEncoding/EncodingWithMESCustomPreset_H264](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPreset_H264)|How to submit a job using a custom H.264 encoding preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
-| [VideoEncoding/EncodingWithMESCustomPreset_HEVC](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPreset_HEVC)|How to submit a job using a custom HEVC encoding preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
-| [VideoEncoding/EncodingWithMESCustomStitchTwoAssets](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomStitchTwoAssets)|How to submit a job using the JobInputSequence to stitch together 2 or more assets that may be clipped by start or end time. The resulting encoded file is a single video with all assets stitched together. The sample will also publish output asset for streaming and download results for verification.|
-| [VideoEncoding/EncodingWithMESCustomPresetAndSprite](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPresetAndSprite)|How to submit a job using a custom preset with a thumbnail sprite and an HTTP URL input, publish output asset for streaming, and download results for verification.|
+| [VideoEncoding/EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_PredefinedPreset)|How to submit a job using a built-in preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
+| [VideoEncoding/EncodingWithMESCustomPreset_H264](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264)|How to submit a job using a custom H.264 encoding preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
+| [VideoEncoding/EncodingWithMESCustomPreset_HEVC](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_HEVC)|How to submit a job using a custom HEVC encoding preset and an HTTP URL input, publish output asset for streaming, and download results for verification.|
+| [VideoEncoding/EncodingWithMESCustomStitchTwoAssets](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_StitchTwoAssets)|How to submit a job using the JobInputSequence to stitch together 2 or more assets that may be clipped by start or end time. The resulting encoded file is a single video with all assets stitched together. The sample will also publish output asset for streaming and download results for verification.|
+| [VideoEncoding/EncodingWithMESCustomPresetAndSprite](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_SpriteThumbnail)|How to submit a job using a custom preset with a thumbnail sprite and an HTTP URL input, publish output asset for streaming, and download results for verification.|
| [Live/LiveEventWithDVR](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live/LiveEventWithDVR)|How to create a LiveEvent with a full archive up to 25 hours and a filter on the asset with a 5-minute DVR window. How to use a filter to create a locator for streaming.| | [VideoAnalytics/VideoAnalyzer](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoAnalytics/VideoAnalyzer)|How to create a video analyzer transform, upload a video file to an input asset, submit a job with the transform and download the results for verification.| | [AudioAnalytics/AudioAnalyzer](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/AudioAnalytics/AudioAnalyzer)|How to create an audio analyzer transform, upload a media file to an input asset, submit a job with the transform and download the results for verification.|
Currently, there is one Python sample, [Basic Encoding with Python](https://gith
|[EncodingWithMESCustomPreset](https://github.com/Azure-Samples/media-services-v3-java/tree/master/VideoEncoding/EncodingWithMESCustomPreset)|How to create a custom encoding Transform using the StandardEncoderPreset settings.| |[EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-java/tree/master/VideoEncoding/EncodingWithMESPredefinedPreset)|How to submit a job using a built-in preset and an HTTP URL input, publish output asset for streaming, and download results for verification.| -+
media-services Stream Live Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-tutorial-with-api.md
Title: Stream live with Media Services v3
-: Azure Media Services
-description: Learn how to stream live with Azure Media Services v3.
+ Title: Stream live with Media Services by using .NET Core
+
+description: Learn how to stream live events by using .NET Core.
documentationcenter: ''
-# Tutorial: Stream live with Media Services
+# Tutorial: Stream live with Media Services by using .NET Core
-> [!NOTE]
-> Even though the tutorial uses [.NET SDK](/dotnet/api/microsoft.azure.management.media.models.liveevent) examples, the general steps are the same for [REST API](/rest/api/medi#sdks).
+In Azure Media Services, [live events](/rest/api/media/liveevents) are responsible for processing live streaming content. A live event provides an input endpoint (ingest URL) that you then provide to a live encoder. The live event receives input streams from the live encoder and makes them available for streaming through one or more [streaming endpoints](/rest/api/media/streamingendpoints). Live events also provide a preview endpoint (preview URL) that you use to preview and validate your stream before further processing and delivery.
-In Azure Media Services, [Live Events](/rest/api/media/liveevents) are responsible for processing live streaming content. A Live Event provides an input endpoint (ingest URL) that you then provide to a live encoder. The Live Event receives live input streams from the live encoder and makes it available for streaming through one or more [Streaming Endpoints](/rest/api/media/streamingendpoints). Live Events also provide a preview endpoint (preview URL) that you use to preview and validate your stream before further processing and delivery. This tutorial shows how to use .NET Core to create a **pass-through** type of a live event.
-
-The tutorial shows you how to:
+This tutorial shows how to use .NET Core to create a *pass-through* type of live event. In this tutorial, you will:
> [!div class="checklist"]
-> * Download the sample app described in the topic.
+> * Download a sample app.
> * Examine the code that performs live streaming.
-> * Watch the event with [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) at [https://ampdemo.azureedge.net](https://ampdemo.azureedge.net).
+> * Watch the event with [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) on the [Media Player demo site](https://ampdemo.azureedge.net).
> * Clean up resources. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+> [!NOTE]
+> Even though the tutorial uses [.NET SDK](/dotnet/api/microsoft.azure.management.media.models.liveevent) examples, the general steps are the same for [REST API](/rest/api/medi#sdks).
+ ## Prerequisites
-The following items are required to complete the tutorial:
+You need the following items to complete the tutorial:
- Install Visual Studio Code or Visual Studio.-- [Create a Media Services account](./account-create-how-to.md).<br/>Make sure to copy the API Access details in JSON format or store the values needed to connect to the Media Services account in the .env file format used in this sample.-- Follow the steps in [Access Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need to use them to access the API in this sample, or enter them into the .env file format.
+- [Create a Media Services account](./account-create-how-to.md). Be sure to copy the **API Access** details in JSON format or store the values needed to connect to the Media Services account in the *.env* file format used in this sample.
+- Follow the steps in [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need to use them to access the API in this sample, or enter them into the *.env* file format.
+
+You need these additional items for live-streaming software:
+ - A camera or a device (like a laptop) that's used to broadcast an event.-- An on-premises software encoder that encodes your camera stream and sends it to the Media Services live streaming service using the RTMP protocol, see [recommended on-premises live encoders](encode-recommended-on-premises-live-encoders.md). The stream has to be in **RTMP** or **Smooth Streaming** format. -- For this sample, it is recommended to start with a software encoder like the free [Open Broadcast Software OBS Studio](https://obsproject.com/download) to make it simple to get started.
+- An on-premises software encoder that encodes your camera stream and sends it to the Media Services live-streaming service through the Real-Time Messaging Protocol (RTMP). For more information, see [Recommended on-premises live encoders](encode-recommended-on-premises-live-encoders.md). The stream has to be in RTMP or Smooth Streaming format.
+
+ This sample assumes that you'll use Open Broadcaster Software (OBS) Studio to broadcast RTMP to the ingest endpoint. [Install OBS Studio](https://obsproject.com/download).
> [!TIP]
-> Make sure to review [Live streaming with Media Services v3](stream-live-streaming-concept.md) before proceeding.
+> Review [Live streaming with Media Services v3](stream-live-streaming-concept.md) before proceeding.
## Download and configure the sample
-Clone the following Git Hub repository that contains the live streaming .NET sample to your machine using the following command:
+Clone the GitHub repository that contains the live-streaming .NET sample to your machine by using the following command:
+
+```bash
+git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
+```
- ```bash
- git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
- ```
+The live-streaming sample is in the [Live](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live) folder.
-The live streaming sample is located in the [Live](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Live) folder.
+Open [appsettings.json](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/appsettings.json) in your downloaded project. Replace the values with the credentials that you got from [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md).
-Open [appsettings.json](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/appsettings.json) in your downloaded project. Replace the values with the credentials you got from [accessing APIs](./access-api-howto.md).
+Note that you can also use the *.env* file format at the root of the project to set your environment variables only once for all projects in the .NET samples repository. Just copy the *sample.env* file, and then fill out the information that you got from the Media Services **API Access** page in the Azure portal or from the Azure CLI. Rename the *sample.env* file to just *.env* to use it across all projects.
-Note that you can also use the .env file format at the root of the project to set your environment variables only once for all projects in the .NET samples repository. Just copy the sample.env file, fill out the information that you obtain from the Azure portal Media Services API Access page, or from the Azure CLI. Rename the sample.env file to just ".env" to use it across all projects.
-The .gitignore file is already configured to avoid publishing the contents of this file to your forked repository.
+The *.gitignore* file is already configured to prevent publishing this file into your forked repository.
> [!IMPORTANT]
-> This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple Live Events in your account. <br/>Make sure to stop the running Live Events. Otherwise, you'll be **billed**!
+> This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple live events in your account.
+>
+> Be sure to stop the running live events. Otherwise, *you'll be billed*!
## Examine the code that performs live streaming
This section examines functions defined in the [Program.cs](https://github.com/A
The sample creates a unique suffix for each resource so that you don't have name collisions if you run the sample multiple times without cleaning up.
-### Start using Media Services APIs with .NET SDK
+### Start using Media Services APIs with the .NET SDK
-To start using Media Services APIs with .NET, you need to create an **AzureMediaServicesClient** object. To create the object, you need to supply credentials needed for the client to connect to Azure using Azure AD. In the code you cloned at the beginning of the article, the **GetCredentialsAsync** function creates the ServiceClientCredentials object based on the credentials supplied in local configuration file (appsettings.json) or through the .env environment variables file located at the root of the repository.
+To start using Media Services APIs with .NET, you need to create an `AzureMediaServicesClient` object. To create the object, you need to supply credentials for the client to connect to Azure by using Azure Active Directory. In the code that you cloned at the beginning of the article, the `GetCredentialsAsync` function creates the `ServiceClientCredentials` object based on the credentials supplied in the local configuration file (*appsettings.json*) or through the *.env* environment variables file in the root of the repository.
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateMediaServicesClient)] ### Create a live event
-This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the other available types of Live Events, see [Live Event types](live-event-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding Live Event for 720P or 1080P adaptive bitrate cloud encoding.
+This section shows how to create a *pass-through* type of live event (`LiveEventEncodingType` set to `None`). For information about the available types, see [Live event types](live-event-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding event for 720p or 1080p adaptive bitrate cloud encoding.
-Some things that you might want to specify when creating the live event are:
+You might want to specify the following things when you're creating the live event (a condensed sketch follows this list):
+
+* **The ingest protocol for the live event**. Currently, the RTMP, RTMPS, and Smooth Streaming protocols are supported. You can't change the protocol option while the live event or its associated live outputs are running. If you need different protocols, create a separate live event for each streaming protocol.
+* **IP restrictions on the ingest and preview**. You can define the IP addresses that are allowed to ingest a video to this live event. Allowed IP addresses can be specified as one of these choices:
+
+ * A single IP address (for example, `10.0.0.1`)
+ * An IP range that uses an IP address and a Classless Inter-Domain Routing (CIDR) subnet mask (for example, `10.0.0.1/22`)
+ * An IP range that uses an IP address and a dotted decimal subnet mask (for example, `10.0.0.1(255.255.252.0)`)
+
+ If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set `0.0.0.0/0`. The IP addresses have to be in one of the following formats: IPv4 address with four numbers or a CIDR address range.
+* **Autostart on an event as you create it**. When autostart is set to `true`, the live event will start after creation. That means the billing starts as soon as the live event starts running. You must explicitly call `Stop` on the live event resource to halt further billing. For more information, see [Live event states and billing](live-event-states-billing-concept.md).
-* The ingest protocol for the Live Event (currently, the RTMP(S) and Smooth Streaming protocols are supported).<br/>You can't change the protocol option while the Live Event or its associated Live Outputs are running. If you require different protocols, create separate Live Event for each streaming protocol.
-* IP restrictions on the ingest and preview. You can define the IP addresses that are allowed to ingest a video to this Live Event. Allowed IP addresses can be specified as either a single IP address (for example '10.0.0.1'), an IP range using an IP address and a CIDR subnet mask (for example, '10.0.0.1/22'), or an IP range using an IP address and a dotted decimal subnet mask (for example, '10.0.0.1(255.255.252.0)').<br/>If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set 0.0.0.0/0.<br/>The IP addresses have to be in one of the following formats: IpV4 address with four numbers or CIDR address range.
-* When creating the event, you can specify to autostart it. <br/>When autostart is set to true, the Live Event will be started after creation. That means the billing starts as soon as the Live Event starts running. You must explicitly call Stop on the Live Event resource to halt further billing. For more information, see [Live Event states and billing](live-event-states-billing-concept.md).
-There are also standby modes available to start the Live Event in a lower cost 'allocated' state that makes it faster to move to a 'Running' state. This is useful for situations like hotpools that need to hand out channels quickly to streamers.
-* For an ingest URL to be predictive and easier to maintain in a hardware based live encoder, set the "useStaticHostname" property to true. For detailed information, see [Live Event ingest URLs](live-event-outputs-concept.md#live-event-ingest-urls).
+ Standby modes are available to start the live event in a lower-cost "allocated" state that makes it faster to move to a running state. This is useful for situations like hot pools that need to hand out channels quickly to streamers.
+* **A static host name and a unique GUID**. For an ingest URL to be predictive and easier to maintain in a hardware-based live encoder, set the `useStaticHostname` property to `true`. For detailed information, see [Live event ingest URLs](live-event-outputs-concept.md#live-event-ingest-urls).
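The following condensed C# sketch shows how these settings might fit together. It's a sketch only, not the sample's exact code: names like `client`, `mediaService`, `resourceGroupName`, `accountName`, and `liveEventName` are assumed from earlier setup, and the `accessToken` value shown is a placeholder.

```csharp
// Sketch only: a pass-through live event with RTMP ingest, an allow-all IP rule,
// a static host name, and autostart. `client`, `mediaService`, `resourceGroupName`,
// `accountName`, and `liveEventName` are assumed from earlier setup code.
LiveEvent liveEvent = new LiveEvent(
    location: mediaService.Location,
    description: "Sample pass-through live event",
    useStaticHostname: true,
    input: new LiveEventInput(
        streamingProtocol: LiveEventInputProtocol.Rtmp,
        accessToken: "<your-custom-guid>",   // placeholder; use your own GUID for a predictive ingest URL
        accessControl: new LiveEventInputAccessControl
        {
            Ip = new IPAccessControl(
                allow: new IPRange[]
                {
                    // 0.0.0.0/0 allows any IP address; narrow this in production
                    new IPRange(name: "AllowAll", address: "0.0.0.0", subnetPrefixLength: 0)
                })
        }),
    encoding: new LiveEventEncoding(encodingType: LiveEventEncodingType.None));

// autoStart: true means billing begins as soon as the event starts running
liveEvent = await client.LiveEvents.CreateAsync(
    resourceGroupName, accountName, liveEventName, liveEvent, autoStart: true);
```

Setting `useStaticHostname` to `true` together with your own `accessToken` GUID is what keeps the ingest URL stable across restarts of the event.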
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateLiveEvent)] ### Get ingest URLs
-Once the Live Event is created, you can get ingest URLs that you'll provide to the live encoder. The encoder uses these URLs to input a live stream.
+After the live event is created, you can get ingest URLs that you'll provide to the live encoder. The encoder uses these URLs to input a live stream.
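As a minimal sketch (assuming the `liveEvent` object returned by the creation step), reading the first ingest endpoint looks like this:

```csharp
// Sketch: read the RTMP ingest URL that the on-premises encoder pushes to.
// Assumes `liveEvent` was returned by the create call above.
string ingestUrl = liveEvent.Input.Endpoints.First().Url;
Console.WriteLine($"RTMP ingest URL: {ingestUrl}");
```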
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#GetIngestURL)] ### Get the preview URL
-Use the previewEndpoint to preview and verify that the input from the encoder is actually being received.
+Use `previewEndpoint` to preview and verify that the input from the encoder is being received.
> [!IMPORTANT]
-> Make sure that the video is flowing to the Preview URL before continuing.
+> Make sure that the video is flowing to the preview URL before you continue.
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#GetPreviewURLs)]
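A condensed sketch of this step, again assuming the `liveEvent` object from earlier:

```csharp
// Sketch: read the preview endpoint and build a test player link.
string previewEndpoint = liveEvent.Preview.Endpoints.First().Url;
Console.WriteLine($"Preview the stream at: https://ampdemo.azureedge.net/?url={previewEndpoint}");
```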
-### Create and manage Live Events and Live Outputs
+### Create and manage live events and live outputs
-Once you have the stream flowing into the Live Event, you can begin the streaming event by creating an Asset, Live Output, and Streaming Locator. This will archive the stream and make it available to viewers through the Streaming Endpoint.
+After you have the stream flowing into the live event, you can begin the streaming event by creating an asset, live output, and streaming locator. This will archive the stream and make it available to viewers through the streaming endpoint.
-When learning these concepts, it is best to think of the "Asset" object as the tape that you would insert into a video tape recorder in the old days. The "Live Output" is the tape recorder machine. The "Live Event" is just the video signal coming into the back of the machine.
+When you're learning these concepts, it's helpful to think of the asset object as the tape that you would insert into a video tape recorder in the old days. The live output is the tape recorder machine. The live event is just the video signal coming into the back of the machine.
-You first create the signal by creating the "Live Event". The signal is not flowing until you start that Live Event and connect your encoder to the input.
+You first create the signal by creating the live event. The signal is not flowing until you start that live event and connect your encoder to the input.
-The tape can be created at any time. It is just an empty "Asset" that you will hand to the Live Output object, the tape recorder in this analogy.
+The "tape" can be created at any time. It's just an empty asset that you'll hand to the live output object, the "tape recorder" in this analogy.
-The tape recorder can be created at any time. Meaning you can create a Live Output before starting the signal flow, or after. If you need to speed things up, it is sometimes helpful to create it before you start the signal flow.
+The "tape recorder" can also be created at any time. You can create a live output before starting the signal flow, or after. If you need to speed up things, it's sometimes helpful to create the output before you start the signal flow.
-To stop the tape recorder, you call delete on the LiveOutput. This does not delete the contents on the tape "Asset". The Asset is always kept with the archived video content until you call delete explicitly on the Asset itself.
+To stop the "tape recorder," you call `delete` on `LiveOutput`. This action doesn't delete the *contents* of the "tape" (asset). The asset is always kept with the archived video content until you call `delete` explicitly on the asset itself.
-The next section will walk through the creation of the Asset ("tape") and the Live Output ("tape recorder").
+The next section will walk through the creation of the asset and the live output.
-#### Create an Asset
+#### Create an asset
-Create an Asset for the Live Output to use. In the analogy above, this will be our tape that we record the live video signal onto. Viewers will be able to see the contents live or on-demand from this virtual tape.
+Create an asset for the live output to use. In our analogy, this will be the "tape" that we record the live video signal onto. Viewers will be able to see the contents live or on demand from this virtual tape.
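In condensed form, creating the empty asset is a single call. This is a sketch; `assetName` is a name that you choose:

```csharp
// Sketch: create the empty "tape" asset that the live output will record into.
Asset asset = await client.Assets.CreateOrUpdateAsync(
    resourceGroupName, accountName, assetName, new Asset());
```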
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateAsset)]
-#### Create a Live Output
+#### Create a live output
-Live Outputs start on creation and stop when deleted. This is going to be the "tape recorder" for our event. When you delete the Live Output, you're not deleting the underlying Asset or content in the asset. Think of it as ejecting the tape. The Asset with the recording will last as long as you like, and when it is ejected (meaning, when the Live Output is deleted) it will be available for on-demand viewing immediately.
+Live outputs start when they're created and stop when they're deleted. When you delete the live output, you're not deleting the underlying asset or content in the asset. Think of it as ejecting the "tape." The asset with the recording will last as long as you like. When it's ejected (meaning, when the live output is deleted), it will be available for on-demand viewing immediately.
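Sketched in C# (names assumed as before; the one-hour archive window is illustrative):

```csharp
// Sketch: a live output that records into the asset with a 1-hour DVR/archive window.
LiveOutput liveOutput = new LiveOutput(
    assetName: asset.Name,
    archiveWindowLength: TimeSpan.FromHours(1),
    manifestName: "output");

liveOutput = await client.LiveOutputs.CreateAsync(
    resourceGroupName, accountName, liveEventName, liveOutputName, liveOutput);
```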
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateLiveOutput)]
-#### Create a Streaming Locator
+#### Create a streaming locator
> [!NOTE]
-> When your Media Services account is created, a **default** streaming endpoint is added to your account in the **Stopped** state. To start streaming your content and take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md) and dynamic encryption, the streaming endpoint from which you want to stream content has to be in the **Running** state.
+> When your Media Services account is created, a default streaming endpoint is added to your account in the stopped state. To start streaming your content and take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md) and dynamic encryption, the streaming endpoint from which you want to stream content has to be in the running state.
-When you publish the Asset using a Streaming Locator, the Live Event (up to the DVR window length) will continue to be viewable until the Streaming Locator's expiry or deletion, whichever comes first. This is how you make the virtual "tape" recording available for your viewing audience to see live and on-demand. The same URL can be used to watch the live event, DVR window, or the on-demand asset when the recording is complete (when the Live Output is deleted.)
+When you publish the asset by using a streaming locator, the live event (up to the DVR window length) will continue to be viewable until the streaming locator's expiration or deletion, whichever comes first. This is how you make the virtual "tape" recording available for your viewing audience to see live and on demand. The same URL can be used to watch the live event, the DVR window, or the on-demand asset when the recording is complete (when the live output is deleted).
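In sketch form (with a hypothetical `locatorName`), publishing the asset might look like the following; `ClearStreamingOnly` is one of the predefined streaming policies:

```csharp
// Sketch: publish the asset through a streaming locator with a predefined policy.
StreamingLocator locator = await client.StreamingLocators.CreateAsync(
    resourceGroupName, accountName, locatorName,
    new StreamingLocator(
        assetName: asset.Name,
        streamingPolicyName: PredefinedStreamingPolicy.ClearStreamingOnly));
```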
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateStreamingLocator)] ```csharp
-// Get the url to stream the output
+// Get the URL to stream the output
ListPathsResponse paths = await client.StreamingLocators.ListPathsAsync(resourceGroupName, accountName, locatorName); foreach (StreamingPath path in paths.StreamingPaths)
foreach (StreamingPath path in paths.StreamingPaths)
} ```
-### Cleaning up resources in your Media Services account
+### Clean up resources in your Media Services account
-If you're done streaming events and want to clean up the resources provisioned earlier, follow the following procedure:
+If you're done streaming events and want to clean up the resources provisioned earlier, use the following procedure (sketched as API calls after these steps):
-* Stop pushing the stream from the encoder.
-* Stop the Live Event. Once the Live Event is stopped, it won't incur any charges. When you need to start it again, it will have the same ingest URL so you won't need to reconfigure your encoder.
-* You can stop your Streaming Endpoint, unless you want to continue to provide the archive of your live event as an on-demand stream. If the Live Event is in a stopped state, it won't incur any charges.
+1. Stop pushing the stream from the encoder.
+1. Stop the live event. After the live event is stopped, it won't incur any charges. When you need to start it again, it will have the same ingest URL so you won't need to reconfigure your encoder.
+1. Stop your streaming endpoint, unless you want to continue to provide the archive of your live event as an on-demand stream. If the live event is in a stopped state, it won't incur any charges.
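Sketched as API calls, with names assumed from the earlier steps:

```csharp
// Sketch: delete the live output ("eject the tape"), then stop and delete the event.
// Deleting the live output doesn't delete the asset or its archived content.
await client.LiveOutputs.DeleteAsync(resourceGroupName, accountName, liveEventName, liveOutputName);
await client.LiveEvents.StopAsync(resourceGroupName, accountName, liveEventName);
await client.LiveEvents.DeleteAsync(resourceGroupName, accountName, liveEventName);

// Optionally stop the streaming endpoint too, if you're not serving the archive on demand.
await client.StreamingEndpoints.StopAsync(resourceGroupName, accountName, "default");
```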
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CleanupLiveEventAndOutput)]
If you're done streaming events and want to clean up the resources provisioned e
## Watch the event
-To watch the event, copy the streaming URL that you got when you ran code described in Create a Streaming Locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) is available to test your stream at https://ampdemo.azureedge.net.
+To watch the event, copy the streaming URL that you got when you ran the code to create a streaming locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) is available to test your stream at the [Media Player demo site](https://ampdemo.azureedge.net).
-Live Event automatically converts events to on-demand content when stopped. Even after you stop and delete the event, users can stream your archived content as a video on demand for as long as you don't delete the asset. An asset can't be deleted if it's used by an event; the event must be deleted first.
+A live event is automatically converted to on-demand content when it's stopped. Even after you stop and delete the event, users can stream your archived content as a video on demand for as long as you don't delete the asset. An asset can't be deleted if an event is using it; the event must be deleted first.
-## Clean up resources
+## Clean up remaining resources
-If you no longer need any of the resources in your resource group, including the Media Services and storage accounts you created for this tutorial, delete the resource group you created earlier.
+If you no longer need any of the resources in your resource group, including the Media Services and storage accounts that you created for this tutorial, delete the resource group that you created earlier.
-Execute the following CLI command:
+Run the following CLI command:
```azurecli-interactive az group delete --name amsResourceGroup ``` > [!IMPORTANT]
-> Leaving the Live Event running incurs billing costs. Be aware, if the project/program crashes or is closed out for any reason, it could leave the Live Event running in a billing state.
+> Leaving the live event running incurs billing costs. Be aware that if the project or program stops responding or is closed out for any reason, it might leave the live event running in a billing state.
## Ask questions, give feedback, get updates
media-services Stream Live Tutorial With Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-tutorial-with-nodejs.md
Title: Stream live with Media Services v3 Node.js
+ Title: Stream live with Media Services by using Node.js and TypeScript
-description: Learn how to stream live using Node.js.
+description: Learn how to stream live events by using Node.js, TypeScript, and OBS Studio.
documentationcenter: ''
-# Tutorial: Stream live with Media Services using Node.js and TypeScript
+# Tutorial: Stream live with Media Services by using Node.js and TypeScript
-> [!NOTE]
-> Even though the tutorial uses Node.js examples, the general steps are the same for [REST API](/rest/api/medi#sdks).
+In Azure Media Services, [live events](/rest/api/media/liveevents) are responsible for processing live streaming content. A live event provides an input endpoint (ingest URL) that you then provide to a live encoder. The live event receives input streams from the live encoder and makes them available for streaming through one or more [streaming endpoints](/rest/api/media/streamingendpoints). Live events also provide a preview endpoint (preview URL) that you use to preview and validate your stream before further processing and delivery.
-In Azure Media Services, [Live Events](/rest/api/media/liveevents) are responsible for processing live streaming content. A Live Event provides an input endpoint (ingest URL) that you then provide to a live encoder. The Live Event receives live input streams from the live encoder and makes it available for streaming through one or more [Streaming Endpoints](/rest/api/media/streamingendpoints). Live Events also provide a preview endpoint (preview URL) that you use to preview and validate your stream before further processing and delivery. This tutorial shows how to use Node.js to create a **pass-through** type of a live event and broadcast a live stream to it using [OBS Studio](https://obsproject.com/download).
+This tutorial shows how to use Node.js and TypeScript to create a *pass-through* type of live event and broadcast a live stream to it by using [OBS Studio](https://obsproject.com/download).
-The tutorial shows you how to:
+In this tutorial, you will:
> [!div class="checklist"]
-> * Download the sample code described in the topic.
+> * Download sample code.
> * Examine the code that configures and performs live streaming.
-> * Watch the event with [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) at [https://ampdemo.azureedge.net](https://ampdemo.azureedge.net).
+> * Watch the event with [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) on the [Media Player demo site](https://ampdemo.azureedge.net).
> * Clean up resources. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+> [!NOTE]
+> Even though the tutorial uses Node.js examples, the general steps are the same for [REST API](/rest/api/medi#sdks).
+ ## Prerequisites
-The following items are required to complete the tutorial:
+You need the following items to complete the tutorial:
-- Install [Node.js](https://nodejs.org/en/download/)-- Install [TypeScript](https://www.typescriptlang.org/)-- [Create a Media Services account](./create-account-howto.md).<br/>Make sure to remember the values that you used for the resource group name and Media Services account name.-- Follow the steps in [Access Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You will need to use them to access the API and configure your environment variables file.-- Walk through the [Configure and Connect with Node.js](./configure-connect-nodejs-howto.md) how-to first to understand how to use the Node.js client SDK
+- Install [Node.js](https://nodejs.org/en/download/).
+- Install [TypeScript](https://www.typescriptlang.org/).
+- [Create a Media Services account](./create-account-howto.md). Remember the values that you use for the resource group name and Media Services account name.
+- Follow the steps in [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need them to access the API and configure your environment variables file.
+- Walk through the [Configure and connect with Node.js](./configure-connect-nodejs-howto.md) article to understand how to use the Node.js client SDK.
- Install Visual Studio Code or Visual Studio.-- [Setup your Visual Studio Code environment](https://code.visualstudio.com/Docs/languages/typescript) to support the TypeScript language.
+- [Set up your Visual Studio Code environment](https://code.visualstudio.com/Docs/languages/typescript) to support the TypeScript language.
-## Additional settings for live streaming software
+You need these additional items for live-streaming software:
- A camera or a device (like a laptop) that's used to broadcast an event.-- An on-premises software encoder that encodes your camera stream and sends it to the Media Services live streaming service using the RTMP protocol, see [recommended on-premises live encoders](encode-recommended-on-premises-live-encoders.md). The stream has to be in **RTMP** or **Smooth Streaming** format. -- For this sample, it is recommended to start with a software encoder like the free [Open Broadcast Software OBS Studio](https://obsproject.com/download) to make it simple to get started.
+- An on-premises software encoder that encodes your camera stream and sends it to the Media Services live-streaming service through the Real-Time Messaging Protocol (RTMP). For more information, see [Recommended on-premises live encoders](encode-recommended-on-premises-live-encoders.md). The stream has to be in RTMP or Smooth Streaming format.
+
+ This sample assumes that you'll use Open Broadcaster Software (OBS) Studio to broadcast RTMP to the ingest endpoint. [Install OBS Studio](https://obsproject.com/download).
-This sample assumes that you will use OBS Studio to broadcast RTMP to the ingest endpoint. Install OBS Studio first.
-Use the following encoding settings in OBS Studio:
+ Use the following encoding settings in OBS Studio:
-- Encoder: NVIDIA NVENC (if available) or x264-- Rate Control: CBR-- Bitrate: 2500 Kbps (or something reasonable for your laptop)-- Keyframe Interval: 2 s, or 1 s for low latency -- Preset: Low-latency Quality or Performance (NVENC) or "veryfast" using x264-- Profile: high-- GPU: 0 (Auto)-- Max B-frames: 2
+ - Encoder: NVIDIA NVENC (if available) or x264
+ - Rate control: CBR
+ - Bit rate: 2,500 Kbps (or something reasonable for your computer)
+ - Keyframe interval: 2 s, or 1 s for low latency
+ - Preset: Low-latency Quality or Performance (NVENC) or "veryfast" using x264
+ - Profile: high
+ - GPU: 0 (Auto)
+ - Max B-frames: 2
> [!TIP]
-> Make sure to review [Live streaming with Media Services v3](stream-live-streaming-concept.md) before proceeding.
+> Review [Live streaming with Media Services v3](stream-live-streaming-concept.md) before proceeding.
## Download and configure the sample
-Clone the following Git Hub repository that contains the live streaming Node.js sample to your machine using the following command:
+Clone the GitHub repository that contains the live-streaming Node.js sample to your machine by using the following command:
- ```bash
- git clone https://github.com/Azure-Samples/media-services-v3-node-tutorials.git
- ```
+```bash
+git clone https://github.com/Azure-Samples/media-services-v3-node-tutorials.git
+```
-The live streaming sample is located in the [Live](https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/AMSv3Samples/Live) folder.
+The live-streaming sample is in the [Live](https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/AMSv3Samples/Live) folder.
-In the [AMSv3Samples](https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/AMSv3Samples) folder copy the file named "sample.env" to a new file called ".env" to store your environment variable settings that you gathered in the article [Access Azure Media Services API with the Azure CLI](./access-api-howto.md).
-Make sure that the file includes the "dot" (.) in front of .env" for this to work with the code sample correctly.
+In the [AMSv3Samples](https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/AMSv3Samples) folder, copy the file named *sample.env* to a new file called *.env* to store your environment variable settings that you gathered in the article [Access the Azure Media Services API with the Azure CLI](./access-api-howto.md).
+Make sure that the file name includes the dot (.) in front of "env" so it can work with the code sample correctly.
-The [.env file](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/AMSv3Samples/sample.env) contains your AAD Application key and secret along with account name and subscription information required to authenticate SDK access to your Media Services account. The .gitignore file is already configured to prevent publishing this file into your forked repository. Do not allow these credentials to be leaked as they are important secrets for your account.
+The [.env file](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/AMSv3Samples/sample.env) contains your Azure Active Directory (Azure AD) application key and secret. It also contains the account name and subscription information required to authenticate SDK access to your Media Services account. The *.gitignore* file is already configured to prevent publishing this file into your forked repository. Don't allow these credentials to be leaked, because they're important secrets for your account.
> [!IMPORTANT]
-> This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple Live Events in your account. <br/>Make sure to stop the running Live Events. Otherwise, you'll be **billed**! Run the program all the way through to completion to clean-up resources automatically. If the program crashes, or you inadvertently stop the debugger and break out of the program execution, you should double check the portal to confirm that you have not left any live events in the Running or Stand-by states that would result in unwanted billing charges.
+> This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple live events in your account.
+>
+> Be sure to stop the running live events. Otherwise, *you'll be billed*! Run the program all the way to completion to clean up resources automatically. If the program stops, or you inadvertently stop the debugger and break out of the program execution, you should double-check the portal to confirm that you haven't left any live events in the running or standby state that would result in unwanted billing charges.
## Examine the TypeScript code for live streaming
This section examines functions defined in the [index.ts](https://github.com/Azu
The sample creates a unique suffix for each resource so that you don't have name collisions if you run the sample multiple times without cleaning up.
-### Start using Media Services SDK for Node.js with TypeScript
+### Start using the Media Services SDK for Node.js with TypeScript
-To start using Media Services APIs with Node.js, you need to first add the [@azure/arm-mediaservices](https://www.npmjs.com/package/@azure/arm-mediaservices) SDK module using the npm package manager
+To start using Media Services APIs with Node.js, you need to first add the [@azure/arm-mediaservices](https://www.npmjs.com/package/@azure/arm-mediaservices) SDK module by using the npm package manager:
```bash npm install @azure/arm-mediaservices ```
-In the package.json, this is already configured for you, so you just need to run *npm install* to load the modules and dependencies.
+In the *package.json* file, this is already configured for you. You just need to run `npm install` to load the modules and dependencies:
-1. Open a **command prompt**, browse to the sample's directory.
-1. Change directory into the AMSv3Samples folder.
+1. Open a command prompt and browse to the sample's directory.
+1. Change directory into the *AMSv3Samples* folder:
```bash cd AMSv3Samples ```
-1. Install the packages used in the *packages.json* file.
+1. Install the packages used in the *packages.json* file:
```bash npm install ```
-1. Launch Visual Studio Code from the *AMSv3Samples* Folder. (This is required to launch from the folder where the *.vscode* folder and *tsconfig.json* files are located.)
+1. Open Visual Studio Code from the *AMSv3Samples* folder. (This is required to start from the folder where the *.vscode* folder and *tsconfig.json* files are located.)
```bash cd ..
In the package.json, this is already configured for you, so you just need to run
``` Open the folder for *Live*, and open the *index.ts* file in the Visual Studio Code editor.
-While in the *index.ts* file, press F5 to launch the debugger.
+
+While you're in the *index.ts* file, select the F5 key to start the debugger.
### Create the Media Services client The following code snippet shows how to create the Media Services client in Node.js.
-Notice that in this code we are first setting the **longRunningOperationRetryTimeout** property of the AzureMediaServicesOptions to 2 seconds to reduce the time it takes to poll for the status of a long running operation on the Azure Resource Management endpoint. Since most of the operations on Live Events are going to be asynchronous, and could take some time to complete, you should reduce this polling interval on the SDK from the default value of 30 seconds to speed up the time it takes to complete major operations like creating Live Events, Starting and Stopping which are all asynchronous calls. Two seconds is the recommended value for most use case scenarios.
+
+In this code, you're changing the `longRunningOperationRetryTimeout` property of `AzureMediaServicesOptions` from the default value of 30 seconds to 2 seconds. This change reduces the time it takes to poll for the status of a long-running operation on the Azure Resource Manager endpoint. It will shorten the time it takes to complete major operations like creating, starting, and stopping live events, which are all asynchronous calls. We recommend a value of 2 seconds for most scenarios.
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#CreateMediaServicesClient)] ### Create a live event
-This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the other available types of Live Events, see [Live Event types](live-event-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding Live Event for 720P or 1080P adaptive bitrate cloud encoding.
+This section shows how to create a *pass-through* type of live event (`LiveEventEncodingType` set to `None`). For information about the available types, see [Live event types](live-event-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live encoding event for 720p or 1080p adaptive bitrate cloud encoding.
-Some things that you might want to specify when creating the live event are:
+You might want to specify the following things when you're creating the live event:
+
+* **The ingest protocol for the live event**. Currently, the RTMP, RTMPS, and Smooth Streaming protocols are supported. You can't change the protocol option while the live event or its associated live outputs are running. If you need different protocols, create a separate live event for each streaming protocol.
+* **IP restrictions on the ingest and preview**. You can define the IP addresses that are allowed to ingest a video to this live event. Allowed IP addresses can be specified as one of these choices:
+
+ * A single IP address (for example, `10.0.0.1`)
+ * An IP range that uses an IP address and a Classless Inter-Domain Routing (CIDR) subnet mask (for example, `10.0.0.1/22`)
+ * An IP range that uses an IP address and a dotted decimal subnet mask (for example, `10.0.0.1(255.255.252.0)`)
-* The ingest protocol for the Live Event (currently, the RTMP(S) and Smooth Streaming protocols are supported).<br/>You can't change the protocol option while the Live Event or its associated Live Outputs are running. If you require different protocols, create separate Live Event for each streaming protocol.
-* IP restrictions on the ingest and preview. You can define the IP addresses that are allowed to ingest a video to this Live Event. Allowed IP addresses can be specified as either a single IP address (for example '10.0.0.1'), an IP range using an IP address and a CIDR subnet mask (for example, '10.0.0.1/22'), or an IP range using an IP address and a dotted decimal subnet mask (for example, '10.0.0.1(255.255.252.0)').<br/>If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set 0.0.0.0/0.<br/>The IP addresses have to be in one of the following formats: IpV4 address with four numbers or CIDR address range.
-* When creating the event, you can specify to autostart it. <br/>When autostart is set to true, the Live Event will be started after creation. That means the billing starts as soon as the Live Event starts running. You must explicitly call Stop on the Live Event resource to halt further billing. For more information, see [Live Event states and billing](live-event-states-billing-concept.md).
-There are also standby modes available to start the Live Event in a lower cost 'allocated' state that makes it faster to move to a 'Running' state. This is useful for situations like hot pools that need to hand out channels quickly to streamers.
-* For an ingest URL to be predictive and easier to maintain in a hardware based live encoder, set the "useStaticHostname" property to true, as well as use a custom unique GUID in the "accessToken". For detailed information, see [Live Event ingest URLs](live-event-outputs-concept.md#live-event-ingest-urls).
+ If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set `0.0.0.0/0`. The IP addresses have to be in one of the following formats: IPv4 address with four numbers or a CIDR address range.
+* **Autostart on an event as you create it**. When autostart is set to `true`, the live event will start after creation. That means the billing starts as soon as the live event starts running. You must explicitly call `Stop` on the live event resource to halt further billing. For more information, see [Live event states and billing](live-event-states-billing-concept.md).
+
+ Standby modes are available to start the live event in a lower-cost "allocated" state that makes it faster to move to a running state. This is useful for situations like hot pools that need to hand out channels quickly to streamers.
+* **A static host name and a unique GUID**. For an ingest URL to be predictive and easier to maintain in a hardware-based live encoder, set the `useStaticHostname` property to `true`. For `accessToken`, use a custom, unique GUID. For detailed information, see [Live event ingest URLs](live-event-outputs-concept.md#live-event-ingest-urls).
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#CreateLiveEvent)]
-### Create an Asset to record and archive the live event
+### Create an asset to record and archive the live event
+
+In the following block of code, you create an empty asset to use as the "tape" that your live event archive is recorded to.
-In this block of code, you will create an empty Asset to use as the "tape" to record your live event archive to.
-When learning these concepts, it is best to think of the "Asset" object as the tape that you would insert into a video tape recorder in the old days. The "Live Output" is the tape recorder machine. The "Live Event" is just the video signal coming into the back of the machine.
+When you're learning these concepts, it's helpful to think of the asset object as the tape that you would insert into a video tape recorder in the old days. The live output is the tape recorder machine. The live event is just the video signal coming into the back of the machine.
-Keep in mind tha the Asset, or "tape", can be created at any time. It is just an empty "Asset" that you will hand to the Live Output object, the tape recorder in this analogy.
+Keep in mind that the asset, or "tape," can be created at any time. You'll hand the empty asset to the live output object, the "tape recorder" in this analogy.
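As a sketch, creating the empty "tape" is a single call. This reuses the hypothetical `client`, `resourceGroup`, and `accountName` from the earlier sketch:

```typescript
// Create the empty "tape" asset that the live output will record to.
// Media Services creates the backing storage container for you.
const asset = await client.assets.createOrUpdate(
  resourceGroup,
  accountName,
  "myLiveArchiveAsset",
  {}
);
console.log(`Created asset: ${asset.name}`);
```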
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#CreateAsset)]
-### Create the Live Output
+### Create the live output
-In this section, we create a Live Output that uses the Asset name as input to tell where to record the live event to. In addition, we set up the time-shifting (DVR) window to be used in the recording.
-The sample code shows how to set up a 1 hour time-shifting window. This will allow clients to play back anywhere in the last hour of the event. In addition, only the last 1 hour of the live event will remain in the archive. You can extend this to be up to 25 hours long if needed. Also note that you are able to control the output manifest naming used the HLS and DASH manifests in your URL paths when published.
+In this section, you create a live output that uses the asset name as input to tell where to record the live event to. In addition, you set up the time-shifting (DVR) window to be used in the recording.
-The Live Output, or "tape recorder" in our analogy, can be created at any time as well. Meaning you can create a Live Output before starting the signal flow, or after. If you need to speed up things, it is often helpful to create it before you start the signal flow.
+The sample code shows how to set up a 1-hour time-shifting window. This window will allow clients to play back anything in the last hour of the event. In addition, only the last 1 hour of the live event will remain in the archive. You can extend this window to be up to 25 hours if needed. Also note that you can control the output manifest naming that the HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) manifests use in your URL paths when published.
-Live Outputs start on creation and stop when deleted. When you delete the Live Output, you're not deleting the underlying Asset or content in the asset. Think of it as ejecting the tape. The Asset with the recording will last as long as you like, and when it is ejected (meaning, when the Live Output is deleted) it will be available for on-demand viewing immediately.
+The live output, or "tape recorder" in our analogy, can be created at any time as well. You can create a live output before starting the signal flow, or after. If you need to speed up things, it's often helpful to create the output before you start the signal flow.
+
+Live outputs start when they're created and stop when they're deleted. When you delete the live output, you're not deleting the underlying asset or content in the asset. Think of it as ejecting the "tape." The asset with the recording will last as long as you like. When it's ejected (meaning, when the live output is deleted), it will be available for on-demand viewing immediately.
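A minimal sketch of this step, again assuming the hypothetical client and names from the earlier sketches:

```typescript
// The "tape recorder": records the live event into the asset with a
// 1-hour DVR (time-shift) window. archiveWindowLength is ISO 8601 (up to PT25H).
const liveOutput = await client.liveOutputs.beginCreateAndWait(
  resourceGroup,
  accountName,
  "myLiveEvent",
  "myLiveOutput",
  {
    assetName: "myLiveArchiveAsset",
    archiveWindowLength: "PT1H",
    manifestName: "output", // controls the manifest name in the published HLS/DASH URLs
  }
);
```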
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#CreateLiveOutput)]

### Get ingest URLs
-Once the Live Event is created, you can get ingest URLs that you'll provide to the live encoder. The encoder uses these URLs to input a live stream using the RTMP protocol
+After the live event is created, you can get ingest URLs that you'll provide to the live encoder. The encoder uses these URLs to input a live stream by using the RTMP protocol.
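As a sketch, the ingest URLs can be read back from the live event resource. Property shapes assume the same SDK as the earlier sketches:

```typescript
// Read the RTMP ingest URLs from the live event's input endpoints.
const event = await client.liveEvents.get(resourceGroup, accountName, "myLiveEvent");
for (const endpoint of event.input?.endpoints ?? []) {
  console.log(`${endpoint.protocol} ingest URL: ${endpoint.url}`);
}
// The preview URL (used in the next step) lives under the preview endpoints.
console.log(`Preview URL: ${event.preview?.endpoints?.[0]?.url}`);
```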
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#GetIngestURL)]

### Get the preview URL
-Use the previewEndpoint to preview and verify that the input from the encoder is actually being received.
+Use `previewEndpoint` to preview and verify that the input from the encoder is being received.
> [!IMPORTANT]
-> Make sure that the video is flowing to the Preview URL before continuing.
+> Make sure that the video is flowing to the preview URL before you continue.
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#GetPreviewURL)]
-### Create and manage Live Events and Live Outputs
+### Create and manage live events and live outputs
+
+After you have the stream flowing into the live event, you can begin the streaming event by publishing a streaming locator for your client players to use. This will make it available to viewers through the streaming endpoint.
-Once you have the stream flowing into the Live Event, you can begin the streaming event by publishing a Streaming Locator for your client players to use. This will make it available to viewers through the Streaming Endpoint.
+You first create the signal by creating the live event. The signal is not flowing until you start that live event and connect your encoder to the input.
-You first create the signal by creating the "Live Event". The signal is not flowing until you start that Live Event and connect your encoder to the input.
+To stop the "tape recorder," you call `delete` on `LiveOutput`. This action doesn't delete the *contents* of your archive on the "tape" (asset). It only deletes the "tape recorder" and stops the archiving. The asset is always kept with the archived video content until you call `delete` explicitly on the asset itself. As soon as you delete `LiveOutput`, the recorded content of the asset is still available to play back through any published streaming locator URLs.
-To stop the "tape recorder", you call delete on the LiveOutput. This does not actually delete the **contents** of your archive on the tape "Asset", it only deletes the "tape recorder" and stops the archiving. The Asset is always kept with the archived video content until you call delete explicitly on the Asset itself. As soon as you delete the liveOutput, the recorded content of the "Asset" is still available to play back through any already published Streaming Locator URLs. If you wish to remove the ability for a customer to play back the archived content you would first need to remove all locators from the asset and also flush the CDN cache on the URL path if you are using a CDN for delivery. Otherwise the content will live in the CDN's cache for the standard time-to-live setting on the CDN (which could be up to 72 hours.)
+If you want to remove the ability of a client to play back the archived content, you first need to remove all locators from the asset. You also need to flush the content delivery network (CDN) cache on the URL path, if you're using a CDN for delivery. Otherwise, the content will live in the CDN's cache for the standard time-to-live setting on the CDN (which might be up to 72 hours).
-#### Create a Streaming Locator to publish HLS and DASH manifests
+#### Create a streaming locator to publish HLS and DASH manifests
> [!NOTE]
-> When your Media Services account is created, a **default** streaming endpoint is added to your account in the **Stopped** state. To start streaming your content and take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md) and dynamic encryption, the streaming endpoint from which you want to stream content has to be in the **Running** state.
+> When your Media Services account is created, a default streaming endpoint is added to your account in the stopped state. To start streaming your content and take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md) and dynamic encryption, the streaming endpoint from which you want to stream content has to be in the running state.
-When you publish the Asset using a Streaming Locator, the Live Event (up to the DVR window length) will continue to be viewable until the Streaming Locator's expiry or deletion, whichever comes first. This is how you make the virtual "tape" recording available for your viewing audience to see live and on-demand. The same URL can be used to watch the live event, DVR window, or the on-demand asset when the recording is complete (when the Live Output is deleted.)
+When you publish the asset by using a streaming locator, the live event (up to the DVR window length) will continue to be viewable until the streaming locator's expiration or deletion, whichever comes first. This is how you make the virtual "tape" recording available for your viewing audience to see live and on demand. The same URL can be used to watch the live event, the DVR window, or the on-demand asset when the recording is complete (when the live output is deleted).
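A sketch of publishing with a predefined clear streaming policy (no DRM), under the same assumptions as the earlier sketches:

```typescript
// Publish the asset. The locator stays valid until its expiration or deletion.
const locator = await client.streamingLocators.create(
  resourceGroup,
  accountName,
  "myStreamingLocator",
  {
    assetName: "myLiveArchiveAsset",
    streamingPolicyName: "Predefined_ClearStreamingOnly", // unencrypted HLS/DASH/Smooth
  }
);
```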
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#CreateStreamingLocator)]

#### Build the paths to the HLS and DASH manifests
-The method BuildManifestPaths in the sample shows how to deterministically create the streaming paths to use for DASH or HLS delivery to various clients and player frameworks.
+The method `BuildManifestPaths` in the sample shows how to deterministically create the streaming paths to use for HLS or DASH delivery to various clients and player frameworks.
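Rather than hand-assembling the format strings, a hedged alternative sketch uses the service's `listPaths` call and prefixes the returned relative paths with the streaming endpoint's host name (same assumed client and names as above):

```typescript
// Build playable URLs for each protocol (HLS, DASH, SmoothStreaming).
const streamingEndpoint = await client.streamingEndpoints.get(resourceGroup, accountName, "default");
const pathsResponse = await client.streamingLocators.listPaths(resourceGroup, accountName, "myStreamingLocator");
for (const streamingPath of pathsResponse.streamingPaths ?? []) {
  for (const relativePath of streamingPath.paths ?? []) {
    console.log(`${streamingPath.streamingProtocol}: https://${streamingEndpoint.hostName}${relativePath}`);
  }
}
```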
[!code-typescript[Main](../../../media-services-v3-node-tutorials/AMSv3Samples/Live/index.ts#BuildManifestPaths)]

## Watch the event
-To watch the event, copy the streaming URL that you got when you ran code described in Create a Streaming Locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/index.html) is available to test your stream at https://ampdemo.azureedge.net.
+To watch the event, copy the streaming URL that you got when you ran the code to create a streaming locator. You can use a media player of your choice. [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/index.html) is available to test your stream at the [Media Player demo site](https://ampdemo.azureedge.net).
-Live Event automatically converts events to on-demand content when stopped. Even after you stop and delete the event, users can stream your archived content as a video on demand for as long as you don't delete the asset. An asset can't be deleted if it's used by an event; the event must be deleted first.
+A live event automatically converts events to on-demand content when it's stopped. Even after you stop and delete the event, users can stream your archived content as a video on demand for as long as you don't delete the asset. An asset can't be deleted if an event is using it; the event must be deleted first.
-### Cleaning up resources in your Media Services account
+## Clean up resources in your Media Services account
-If you run the application all the way through, it will automatically clean up all of the resources used in the function called "cleanUpResources". Make sure that the application or debugger runs all the way to completion or you may leak resources and end up with running live events in your account. Double check in the Azure portal to confirm that all resources are cleaned up in your Media Services account.
+If you run the application all the way through, it will automatically clean up all of the resources used in the `cleanUpResources` function. Make sure that the application or debugger runs all the way to completion, or you might leak resources and end up with running live events in your account. Double-check in the Azure portal to confirm that all resources are cleaned up in your Media Services account.
-In the sample code, refer to the **cleanUpResources** method for details.
+In the sample code, refer to the `cleanUpResources` method for details.
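A sketch of the shape such a cleanup routine typically takes (order matters: delete the output, stop the event, then delete it), under the same SDK assumptions as the earlier sketches:

```typescript
async function cleanUp() {
  // Eject the "tape recorder" first; the asset (the recording) is kept.
  await client.liveOutputs.beginDeleteAndWait(resourceGroup, accountName, "myLiveEvent", "myLiveOutput");
  // Stop the live event to halt billing, then delete it.
  await client.liveEvents.beginStopAndWait(resourceGroup, accountName, "myLiveEvent", {
    removeOutputsOnStop: false,
  });
  await client.liveEvents.beginDeleteAndWait(resourceGroup, accountName, "myLiveEvent");
}
```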
> [!IMPORTANT]
-> Leaving the Live Event running incurs billing costs. Be aware, if the project/program crashes or is closed out for any reason, it could leave the Live Event running in a billing state.
+> Leaving the live event running incurs billing costs. Be aware that if the project or program stops responding or is closed out for any reason, it might leave the live event running in a billing state.
## Ask questions, give feedback, get updates
Check out the [Azure Media Services community](media-services-community.md) arti
## More developer documentation for Node.js on Azure

-- [Azure for JavaScript & Node.js developers](/azure/developer/javascript/)
-- [Media Services source code in the @azure/azure-sdk-for-js Git Hub repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
-- [Azure Package Documentation for Node.js developers](/javascript/api/overview/azure/)
+- [Azure for JavaScript and Node.js developers](/azure/developer/javascript/)
+- [Media Services source code in the @azure/azure-sdk-for-js GitHub repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
+- [Azure package documentation for Node.js developers](/javascript/api/overview/azure/)
## Next steps
media-services Transform Create Thumbnail Sprites How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-create-thumbnail-sprites-how-to.md
Add the code snippets for your preferred development language.
[!INCLUDE [code snippet for thumbnail sprites using REST](./includes/task-create-thumb-sprites-dotnet.md)]
-See also thumbnail sprite creation in a [complete encoding sample](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/master/VideoEncoding/EncodingWithMESCustomPresetAndSprite/Program.cs#L261-L287) at Azure Samples.
+See also thumbnail sprite creation in a [complete encoding sample](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_SpriteThumbnail/Program.cs#L261-L287) at Azure Samples.
media-services Transform Custom Presets How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-custom-presets-how-to.md
Clone a GitHub repository that contains the full .NET Core sample to your machin
git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git ```
-The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/EncodingWithMESCustomPreset_H264) folder.
+The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264) folder.
## Create a transform with a custom preset
media-services Transform Stitch How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-stitch-how-to.md
The following example illustrates how you can generate a preset to stitch two or
## Prerequisites
-Clone or download the [Media Services .NET samples](https://github.com/Azure-Samples/media-services-v3-dotnet/). The code that is referenced below is located in the [EncodingWithMESCustomStitchTwoAssets folder](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/EncodingWithMESCustomStitchTwoAssets/Program.cs).
+Clone or download the [Media Services .NET samples](https://github.com/Azure-Samples/media-services-v3-dotnet/). The code that is referenced below is located in the [Encoding_StitchTwoAssets folder](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_StitchTwoAssets/Program.cs).
media-services Media Services Sspk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-sspk.md
Interim and Final SSPK licensees can submit technical questions to [smoothpk@mic
* ZTE Corporation

## Microsoft Smooth Streaming Client Final Product Agreement Licensees
-* Advanced Digital Broadcast SA
* AirTies Kablosuz Iletism Sanayive Dis Ticaret A.S.
* AmTRAN Technology Co., Ltd
* Arcadyan Technology Corporation
media-services Observed People Tracing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/observed-people-tracing.md
- Title: Trace observed people in a video-
-description: This topic gives an overview of a Trace observed people in a video concept.
- Previously updated : 04/30/2021
-# Trace observed people in a video
-
-Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
-
-Some scenarios where this feature could be useful:
-
-* Post-event analysis: detect and track a person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-* Improve efficiency when creating raw data for content creators, like video advertising, news, or sports games (for example, find people wearing a red shirt in a video archive).
-* Create a summary out of a long video, like court evidence of a specific person's appearance in a video, using the same detected person's ID.
-* Learn and analyze trends over time, for example, how customers move across aisles in a shopping mall or how much time they spend in checkout lines.
-
-For example, if a video contains a person, the detect operation will list the person's appearances together with their coordinates in the video frames. You can use this functionality to determine the person's path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-
-The newly added **Observed people tracing** feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under **Video + audio indexing**). Standard indexing will not include this new advanced model.
--
-
-When you choose to see **Insights** of your video on the [Video Indexer](https://www.videoindexer.ai/account/login) website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
-
-The following JSON response illustrates what Video Indexer returns when tracing observed people:
-
-```json
- {
- ...
- "videos": [
- {
- ...
- "insights": {
- ...
- "observedPeople": [{
- "id": 1,
- "thumbnailId": "560f2cfb-90d0-4d6d-93cb-72bd1388e19d",
- "instances": [
- {
- "adjustedStart": "0:00:01.5682333",
- "adjustedEnd": "0:00:02.7027",
- "start": "0:00:01.5682333",
- "end": "0:00:02.7027"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "9c97ae13-558c-446b-9989-21ac27439da0",
- "instances": [
- {
- "adjustedStart": "0:00:16.7167",
- "adjustedEnd": "0:00:18.018",
- "start": "0:00:16.7167",
- "end": "0:00:18.018"
- }
- ]
- },]
- }
- ...
- }
- ]
-}
-```
-
-## Limitations and assumptions
-
-It's important to note the limitations of Observed People Tracing, to avoid or mitigate the effects of false negatives (missed detections) and limited detail.
-
-* To optimize the detector results, use video footage from static cameras (although a moving camera or mixed scenes will also give results).
-* People are generally not detected if they appear small (minimum person height is 200 pixels).
-* Maximum frame size is HD
-* People are generally not detected if they're not standing or walking.
-* Low quality video (for example, dark lighting conditions) may impact the detection results.
-* The recommended frame rate is at least 30 FPS.
-* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the highest detection confidence.
-* People with similar clothes (for example, people wearing uniforms, players in sports games) could be detected as the same person with the same ID number.
-* Occlusions: there may be errors where there are occlusions (scene/self or occlusions by other people).
-* Pose: the tracks may be split due to different poses (back/front).
-
-## Next steps
-
-Review [overview](video-indexer-overview.md)
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
To stay up-to-date with the most recent developments, this article provides you
* Known issues * Bug fixes * Deprecated functionality-
-## April 2021
-
-### Observed people tracing (public preview)
-
-Video Indexer now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
-
-For example, if a video contains a person, the detect operation will list the person appearances together with their coordinates in the video frames. You can use this functionality to determine the person path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-
-The newly added observed people tracing feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard and basic indexing presets will not include this new advanced model.
-
-When you choose to see Insights of your video on the Video Indexer website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
-
-The feature is also available in the JSON file generated by Video Indexer. For more information, see [Trace observed people in a video](observed-people-tracing.md).
-
-### Acoustic event detection (AED) available in closed captions
-
-The Video Indexer closed captions file can now include the detected acoustic events. It can be downloaded from the Video Indexer portal and is available as an artifact in the GetArtifact API.
-
-### Improved upload experience in the portal
-
-Video Indexer has a new upload experience in the portal:
-
-* New developer portal is available in Fairfax
-
-The new Video Indexer [Developer Portal](https://api-portal.videoindexer.ai) is now also available in Gov-cloud.
-
## March 2021

### Audio analysis
media-services Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/video-indexer-overview.md
The following list shows the insights you can retrieve from your videos using Vi
* **Rolling credits**: Identifies the beginning and end of the rolling credits at the end of TV shows and movies.
* **Animated characters detection** (preview): Detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md).
* **Editorial shot type detection**: Tagging shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* **Observed People Tracing**: detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
### Audio insights
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Previously updated : 04/07/2020 Last updated : 05/10/2020

# Using Azure Migrate with private endpoints
The private endpoint connectivity method is recommended when there is an organiz
#### Other integrated tools
-Some migration tools may not be able to upload usage data to the Azure Migrate project if public network access is disabled. The Azure Migrate project should be configured to allow traffic from all networks to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings.
+Other migration tools may not be able to upload usage data to the Azure Migrate project if public network access is disabled. The Azure Migrate project should be configured to allow traffic from all networks to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings.
To enable public network access for the Azure Migrate project, go to the Azure Migrate **properties page** on the Azure portal, select **No**, and select **Save**.
This creates an Azure Migrate project and attaches a private endpoint to it.
#### Download the appliance installer file
-Azure Migrate: Discovery and assessment use a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
+Azure Migrate: Discovery and assessment uses a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
+
+> [!Note]
+> The option to deploy an appliance using a template (OVA for servers in a VMware environment and VHD for a Hyper-V environment) isn't supported for Azure Migrate projects with private endpoint connectivity.
To set up the appliance, download the zipped file containing the installer script from the portal. Copy the zipped file to the server that will host the appliance. After downloading the zipped file, verify the file security and run the installer script to deploy the appliance.
Make sure the private endpoint is in an approved state.
![View Private Endpoint connection](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png) +
+### Validate the data flow through the private endpoints
+Review the data flow metrics to verify the traffic flow through the private endpoints. Select the private endpoint in the Azure Migrate: Server Assessment and Server Migration **Properties** page. This will redirect you to the private endpoint overview section in the Azure Private Link Center. In the left menu, select **Metrics** and review the _Data Bytes In_ and _Data Bytes Out_ values to see the traffic flow.
### Verify DNS resolution

The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [Use this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
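For a quick spot check from the source environment, here's a hypothetical Node.js sketch. The FQDN below is a placeholder, not a real Azure Migrate endpoint; replace it with the private link FQDNs listed for your resources:

```typescript
import { promises as dns } from "dns";

// Verify that a private link FQDN resolves to a private IP from your VNet
// range (for example, 10.x.x.x) rather than a public address.
async function checkPrivateLinkResolution(fqdn: string): Promise<void> {
  const { address } = await dns.lookup(fqdn);
  console.log(`${fqdn} resolves to ${address}`);
}

await checkPrivateLinkResolution("<your-resource>.privatelink.example.azure.com"); // placeholder FQDN
```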
postgresql Howto Hyperscale Create Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-create-users.md
Last updated 1/8/2019
# Create users in Azure Database for PostgreSQL - Hyperscale (Citus)
-> [!NOTE]
-> The term "users" refers to users within a Hyperscale (Citus)
-> server group. To learn instead about Azure subscription users and their
-> privileges, visit the [Azure role-based access control (Azure RBAC)
-> article](../role-based-access-control/built-in-roles.md) or review [how to
-> customize roles](../role-based-access-control/custom-roles.md).
-
## The server admin account

The PostgreSQL engine uses
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-howto-access-trusted-service-exception.md
Previously updated : 10/14/2020 Last updated : 05/11/2021

# Indexer access to Azure Storage using the trusted service exception (Azure Cognitive Search)
Indexers in an Azure Cognitive Search service that access data in Azure Storage
Follow the instructions in [Set up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md). When you are finished, you will have registered your search service with Azure Active Directory as a trusted service, and you will have granted permissions in Azure Storage that give the search identity specific rights to access data or information.
+> [!NOTE]
+> The instructions guide you through a portal approach for configuring Cognitive Search as a trusted service. To accomplish this in code, you can use the [REST API](/rest/api/searchmanagement/services/createorupdate), [Azure PowerShell](search-manage-powershell.md#create-a-service-with-a-system-assigned-managed-identity), or [Azure CLI](search-manage-azure-cli.md#create-a-service-with-a-system-assigned-managed-identity).
+
## Step 2: Allow trusted Microsoft services to access the storage account

In the Azure portal, navigate to the **Firewalls and Virtual Networks** tab of the storage account. Ensure that the option **Allow trusted Microsoft services to access this storage account** is checked. This option will only permit the specific search service instance with appropriate role-based access to the storage account (strong authentication) to access data in the storage account, even if it's secured by IP firewall rules.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Previously updated : 04/01/2021 Last updated : 05/06/2021

# Semantic search in Azure Cognitive Search
Semantic search is a collection of query-related capabilities that add semantic relevance and language understanding to search results. This article is a high-level introduction to semantic search all-up, with descriptions of each feature and how they work collectively. The embedded video describes the technology, and the section at the end covers availability and pricing.
-We recommend reviewing this article for background, but if you'd rather get started right away, follow these steps:
+Semantic search is a premium feature. We recommend this article for background, but if you'd rather get started, follow these steps:
-1. [Sign up for the preview](https://aka.ms/SemanticSearchPreviewSignup), assuming a service that meets [regional and tier requirements](#availability-and-pricing).
-1. Create new or modify existing queries to return [semantic captions and highlights](semantic-how-to-query-request.md).
-1. Add a few more properties to also return [semantic answers](semantic-answers.md).
+1. [Sign up for the preview](https://aka.ms/SemanticSearchPreviewSignup) on a search service that meets [regional and tier requirements](#availability-and-pricing).
+1. Upon acceptance into the preview program, create or modify query requests to return [semantic captions and highlights](semantic-how-to-query-request.md).
+1. Add a few more query properties to also return [semantic answers](semantic-answers.md).
1. Optionally, include a [spell check](speller-how-to-add.md) query property to maximize precision and recall.

## What is semantic search?
-Semantic search is an optional layer of search-related AI that extends the traditional query execution pipeline with a semantic ranking model, and returns additional properties that improve the user experience.
+Semantic search is an optional layer of query-related AI that extends the traditional query execution pipeline in two ways. It adds a semantic ranking model, and it returns additional properties in the response that improve the user experience.
-*Semantic ranking* looks for context and relatedness among terms, elevating matches that make more sense given the query. Language understanding finds *captions* and *answers* within your content that summarize the matching document or answer a question, which can then be rendered on a search results page for a more productive search experience.
+*Semantic ranking* looks for context and relatedness among terms, elevating matches that make more sense given the query. Language understanding finds summarizations or *captions* and *answers* within your content and includes them in the response, which can then be rendered on a search results page for a more productive search experience.
State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match.
To use semantic capabilities in queries, you'll need to make small modifications
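As a hedged sketch of such a modified query request: the preview `api-version` and the query properties shown here are assumptions based on the preview at the time; the endpoint, index, and key are placeholders. It assumes a runtime with built-in `fetch` (Node 18+ or a browser):

```typescript
// POST a semantic query with spell check and extractive answers enabled.
const response = await fetch(
  "https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2020-06-30-Preview",
  {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": "<query-key>" },
    body: JSON.stringify({
      search: "where was Marie Curie born",
      queryType: "semantic",       // enables semantic ranking, captions, highlights
      queryLanguage: "en-us",
      speller: "lexicon",          // optional spell correction
      answers: "extractive|count-3" // optional semantic answers
    }),
  }
);
console.log(await response.json());
```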
## Availability and pricing
-Semantic capabilities are available through [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup), on search services created at a Standard tier (S1, S2, S3), located in one of these regions: North Central US, West US, West US 2, East US 2, North Europe, West Europe.
+Semantic search is available through [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup). From preview launch on March 2 through early June, semantic features are offered free of charge.
-Spell correction is available in the same regions, but has no tier restrictions. If you have an existing service that meets tier and region criteria, only sign up is required.
+| Feature | Tier | Region | Sign up | Projected pricing |
+|---|---|---|---|---|
+| Semantic search (captions, highlights, answers) | Standard tier (S1, S2, S3) | North Central US, West US, West US 2, East US 2, North Europe, West Europe | Required | Starting June 1, expected pricing is USD $500/month for the first 250,000 queries, and $2 for each additional 1,000 queries. |
+| Spell check | Any | North Central US, West US, West US 2, East US 2, North Europe, West Europe | Required | None (free) |
-Between preview launch on March 2 through late April, spell correction and semantic ranking are offered free of charge. Later in April the computational costs of running this functionality will become a billable event. The expected cost is about USD $500/month for 250,000 queries. You can find detailed cost information documented in the [Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/) and in [Estimate and manage costs](search-sku-manage-costs.md).
+There is one [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup) for both semantic features and spell check.
+
+You can use spell check without semantic search, free of charge. Charges will accrue when query requests include `queryType=semantic`, for non-empty search strings (queries with `search=*` are not charged).
+
+Final pricing information will be documented in the [Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/) and in [Estimate and manage costs](search-sku-manage-costs.md).
## Next steps
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 04/27/2021 Last updated : 05/09/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|--|--|
| [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated) | April 2021 |
+| [Prefix for Kubernetes alerts changing from "AKS_" to "K8s_"](#prefix-for-kubernetes-alerts-changing-from-aks_-to-k8s_) | June 2021 |
| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | June 2021 |
| [Recommendations from AWS will be released for general availability (GA)](#recommendations-from-aws-will-be-released-for-general-availability-ga) | **August** 2021 |
| [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation) | Q2 2021 |
The following two recommendations are being deprecated:
- **Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version** - This recommendation's evaluations aren't as wide-ranging as we'd like them to be. The current version of this recommendation will eventually be replaced with an enhanced version that's better aligned with our customer's security needs.
+### Prefix for Kubernetes alerts changing from "AKS_" to "K8s_"
+
+**Estimated date for change:** June 2021
+
+Azure Defender for Kubernetes recently expanded to protect Kubernetes clusters hosted on-premises and in multi-cloud environments. Learn more in [Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)](release-notes.md#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multi-cloud-kubernetes-deployments-in-preview).
+
+To reflect the fact that the security alerts provided by Azure Defender for Kubernetes are no longer restricted to clusters on Azure Kubernetes Service, the prefix for the alert types is changing from "AKS_" to "K8s_". Where necessary, the names and descriptions will be updated too. For example, this alert:
+
+|Alert (alert type)|Description|
+|-|-|
+|Kubernetes penetration testing tool detected<br>(**AKS**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the **AKS** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|
+|||
+
+will become:
+
+|Alert (alert type)|Description|
+|-|-|
+|Kubernetes penetration testing tool detected<br>(**K8s**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the **Kubernetes** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|
+|||
+
+Any suppression rules that refer to alert types beginning with "AKS_" will be automatically converted. If you've set up SIEM exports or custom automation scripts that refer to Kubernetes alerts by alert type, you'll need to update them with the new alert types.
+
+For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-akscluster).
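For custom automation that stores alert types as strings, the update can be as simple as this hypothetical helper (the function name is illustrative, not part of any Security Center API):

```typescript
// Convert a legacy "AKS_"-prefixed Kubernetes alert type to the new "K8s_" prefix.
function migrateAlertType(alertType: string): string {
  return alertType.replace(/^AKS_/, "K8s_");
}

// "AKS_PenTestToolsKubeHunter" -> "K8s_PenTestToolsKubeHunter"
console.log(migrateAlertType("AKS_PenTestToolsKubeHunter"));
```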
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013

The legacy implementation of ISO 27001 will be removed from Security Center's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Security Center, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions, and the current legacy ISO 27001 will soon be removed from the dashboard.
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Previously updated : 02/10/2021 Last updated : 04/19/2021
## What is User and Entity Behavior Analytics (UEBA)?
-### The concept
Identifying threats inside your organization and their potential impact - whether a compromised entity or a malicious insider - has always been a time-consuming and labor-intensive process. Sifting through alerts, connecting the dots, and active hunting all add up to massive amounts of time and effort expended with minimal returns, and the possibility of sophisticated threats simply evading discovery. Particularly elusive threats like zero-day, targeted, and advanced persistent threats can be the most dangerous to your organization, making their detection all the more critical.

The UEBA capability in Azure Sentinel eliminates the drudgery from your analysts' workloads and the uncertainty from their efforts, and delivers high-fidelity, actionable intelligence, so they can focus on investigation and remediation.

As Azure Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as users, hosts, IP addresses, and applications) across time and peer group horizon. Using a variety of techniques and machine learning capabilities, Azure Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling.
-### Architecture overview
+### UEBA analytics architecture
:::image type="content" source="media/identify-threats-with-entity-behavior-analytics/entity-behavior-analytics-architecture.png" alt-text="Entity behavior analytics architecture":::
Entity pages are designed to be part of multiple usage scenarios, and can be acc
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/entity-pages-use-cases.png" alt-text="Entity page use cases":::
-## Data schema
-
-### Behavior analytics table
-
-| Field | Description |
-|||
-| TenantId | unique ID number of the tenant |
-| SourceRecordId | unique ID number of the EBA event |
-| TimeGenerated | timestamp of the activity's occurrence |
-| TimeProcessed | timestamp of the activity's processing by the EBA engine |
-| ActivityType | high-level category of the activity |
-| ActionType | normalized name of the activity |
-| UserName | username of the user that initiated the activity |
-| UserPrincipalName | full username of the user that initiated the activity |
-| EventSource | data source that provided the original event |
-| SourceIPAddress | IP address from which activity was initiated |
-| SourceIPLocation | country from which activity was initiated, enriched from IP address |
-| SourceDevice | hostname of the device that initiated the activity |
-| DestinationIPAddress | IP address of the target of the activity |
-| DestinationIPLocation | country of the target of the activity, enriched from IP address |
-| DestinationDevice | name of the target device |
-| **UsersInsights** | contextual enrichments of involved users |
-| **DevicesInsights** | contextual enrichments of involved devices |
-| **ActivityInsights** | contextual analysis of activity based on our profiling |
-| **InvestigationPriority** | anomaly score, between 0-10 (0=benign, 10=highly anomalous) |
-|
-
-You can see the full set of contextual enrichments referenced in **UsersInsights**, **DevicesInsights**, and **ActivityInsights** in the [UEBA enrichments reference document](ueba-enrichments.md).
-
-### Querying behavior analytics data
+For more information about the data displayed in the **Entity behavior analytics** table, see [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md).
+
+## Querying behavior analytics data
Using [KQL](/azure/data-explorer/kusto/query/), we can query the Behavioral Analytics Table.
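The same KQL can also be run programmatically. Here's a sketch assuming the `@azure/monitor-query` package and a placeholder Log Analytics workspace ID; the KQL itself targets the **BehaviorAnalytics** table and its fields described above:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

const workspaceId = "<log-analytics-workspace-id>"; // placeholder

// Surface the most anomalous recent activities from the BehaviorAnalytics table.
const kql = `BehaviorAnalytics
| where InvestigationPriority > 5
| project TimeGenerated, UserName, ActionType, SourceIPAddress, InvestigationPriority
| order by InvestigationPriority desc`;

const client = new LogsQueryClient(new DefaultAzureCredential());
const result = await client.queryWorkspace(workspaceId, kql, {
  startTime: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000), // last 7 days
  endTime: new Date(),
});
if (result.status === LogsQueryResultStatus.Success) {
  console.table(result.tables[0].rows);
}
```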
You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Noteb
Permission analytics helps determine the potential impact of the compromising of an organizational asset by an attacker. This impact is also known as the asset's "blast radius." Security analysts can use this information to prioritize investigations and incident handling.
-Azure Sentinel determines the direct and transitive access rights held by a given user to Azure resources, by evaluating the Azure subscriptions the user can access directly or via groups or service principals. This information, as well as the full list of the user's Azure AD security group membership, is then stored in the **UserAccessAnalytics** table. The screenshot below shows a sample row in the UserAccessAnalytics table, for the user Alex Johnson. **Source entity** is the user or service principal account, and **target entity** is the resource that the source entity has access to. The values of **access level** and **access type** depend on the access-control model of the target entity. You can see that Alex has Contributor access to the Azure subscription *Contoso Hotels Tenant*. The access control model of the subscription is Azure RBAC.
+Azure Sentinel determines the direct and transitive access rights held by a given user to Azure resources, by evaluating the Azure subscriptions the user can access directly or via groups or service principals. This information, as well as the full list of the user's Azure AD security group membership, is then stored in the **UserAccessAnalytics** table. The screenshot below shows a sample row in the UserAccessAnalytics table, for the user Alex Johnson. **Source entity** is the user or service principal account, and **target entity** is the resource that the source entity has access to. The values of **access level** and **access type** depend on the access-control model of the target entity. You can see that Alex has Contributor access to the Azure subscription *Contoso Hotels Tenant*. The access control model of the subscription is Azure RBAC.
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/user-access-analytics.png" alt-text="Screen shot of user access analytics table":::
You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Noteb
### Hunting queries and exploration queries
-Azure Sentinel provides out-of-the-box a set of hunting queries, exploration queries, and a workbook, based on the BehaviorAnalytics table. These tools present enriched data, focused on specific use cases, that indicate anomalous behavior.
+Azure Sentinel provides an out-of-the-box set of hunting queries, exploration queries, and the **User and Entity Behavior Analytics** workbook, all based on the **BehaviorAnalytics** table. These tools present enriched data, focused on specific use cases, that indicate anomalous behavior.
+
+For more information, see:
+
+- [Hunt for threats with Azure Sentinel](hunting.md)
+- [Visualize and monitor your data](tutorial-monitor-your-data.md)
+
+As legacy defense tools become obsolete, organizations may have such a vast and porous digital estate that it becomes unmanageable to obtain a comprehensive picture of the risk and posture their environment may be facing. Relying heavily on reactive efforts, such as analytics and rules, enables bad actors to learn how to evade those efforts. This is where UEBA comes into play, by providing risk scoring methodologies and algorithms to figure out what is really happening.
-Learn more about [hunting and the investigation graph](./hunting.md) in Azure Sentinel.
## Next steps

In this document, you learned about Azure Sentinel's entity behavior analytics capabilities. For practical guidance on implementation, and to use the insights you've gained, see the following articles:

- [Enable entity behavior analytics](./enable-entity-behavior-analytics.md) in Azure Sentinel.
+- [Investigate incidents with UEBA data](investigate-with-ueba.md).
- [Hunt for security threats](./hunting.md).+
+For more information, also see the [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md).
sentinel Investigate With Ueba https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/investigate-with-ueba.md
+
+ Title: Investigate incidents with UEBA data | Microsoft Docs
+description: Learn how to use UEBA data while investigating to gain greater context to potentially malicious activity occurring in your organization.
+
+documentationcenter: na
++
+editor: ''
+++
+ms.devlang: na
+
+ na
+ Last updated : 05/09/2021+++
+# Investigate incidents with UEBA data
+
+This article describes common methods and sample procedures for using [user entity behavior analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md) in your regular investigation workflows.
+
+## Prerequisites
+
+Before you can use UEBA data in your investigations, you must [enable User and Entity Behavior Analytics (UEBA) in Azure Sentinel](enable-entity-behavior-analytics.md).
+
+Start looking for machine-powered insights about one week after enabling UEBA.
+
+## Run proactive, routine searches in entity data
+
+We recommend running regular, proactive searches through user activity to create leads for further investigation.
+
+You can use the Azure Sentinel [User and Entity Behavior Analytics workbook](identify-threats-with-entity-behavior-analytics.md#hunting-queries-and-exploration-queries) to query your data, such as for:
+
+- **Top risky users**, with anomalies or attached incidents
+- **Data on specific users**, to determine whether the subject has indeed been compromised, or whether there is an insider threat due to actions deviating from the user's profile.
+
+Additionally, capture non-routine actions in the UEBA workbook, and use them to find anomalous activities and potentially non-compliant practices.
+
+### Investigate an anomalous sign-in
+
+For example, the following steps walk through the investigation of a user who connected to a VPN that they'd never used before, which is an anomalous activity.
+
+1. In the Sentinel **Workbooks** area, search for and open the **User and Entity Behavior Analytics** workbook.
+1. Search for a specific user name to investigate and select their name in the **Top users to investigate** table.
+1. Scroll down through the **Incidents Breakdown** and **Anomalies Breakdown** tables to view the incidents and anomalies associated with the selected user.
+1. In the anomaly, such as one named **Anomalous Successful Logon**, review the details shown in the table to investigate. For example:
+
+ |Step |Description |
+ |||
+ |**Note the description on the right** | Each anomaly has a description, with a link to learn more in the [MITRE ATT&CK knowledge base](https://attack.mitre.org/). <br>For example: <br><br> ***Initial Access*** <br>*The adversary is trying to get into your network.* <br>*Initial Access consists of techniques that use various entry vectors to gain their initial foothold within a network. Techniques used to gain a foothold include targeted spear phishing and exploiting weaknesses on public-facing web servers. Footholds gained through initial access may allow for continued access, like valid accounts and use of external remote services, or may be limited-use due to changing passwords.* |
+ |**Note the text in the Description column** | In the anomaly row, scroll to the right to view an additional description. Select the link to view the full text. For example: <br><br> *Adversaries may steal the credentials of a specific user or service account using Credential Access techniques or capture credentials earlier in their reconnaissance process through social engineering for means of gaining Initial Access. APT33, for example, has used valid accounts for initial access. The query below generates an output of successful Sign-in performed by a user from a new geo location he has never connected from before, and none of his peers as well.* |
+ |**Note the UsersInsights data** | Scroll further to the right in the anomaly row to view the user insight data, such as the account display name and the account object ID. Select the text to view the full data on the right. |
 |**Note the Evidence data** | Scroll further to the right in the anomaly row to view the evidence data for the anomaly. Select the text to view the full data on the right, such as the following fields: <br><br>- **ActionUncommonlyPerformedByUser** <br>- **UncommonHighVolumeOfActions** <br>- **FirstTimeUserConnectedFromCountry** <br>- **CountryUncommonlyConnectedFromAmongPeers** <br>- **FirstTimeUserConnectedViaISP** <br>- **ISPUncommonlyUsedAmongPeers** <br>- **CountryUncommonlyConnectedFromInTenant** <br>- **ISPUncommonlyUsedInTenant** |
+ | | |
+
+Use the data found in the **User and Entity Behavior Analytics** workbook to determine whether the user activity is suspicious and requires further action.
+
+## Use UEBA data to analyze false positives
+
+Sometimes, an incident captured in an investigation is a false positive.
+
+A common example of a false positive is when impossible travel activity is detected, such as a user who signed into an application or portal from both New York and London within the same hour. While Azure Sentinel notes the impossible travel as an anomaly, an investigation with the user might clarify that a VPN was used with an alternative location to where the user actually was.
+
+### Analyze a false positive
+
+For example, for an **Impossible travel** incident, after confirming with the user that a VPN was used, navigate from the incident to the user entity page. Use the data displayed there to determine whether the captured locations are included in the user's commonly known locations.
+
+For example:
+
+[ ![Open an incident's user entity page.](media/ueba/open-entity-pages.png) ](media/ueba/open-entity-pages.png#lightbox)
+
+The user entity page is also linked from the [incident page](tutorial-investigate-cases.md#how-to-investigate-incidents) itself and the [investigation graph](tutorial-investigate-cases.md#use-the-investigation-graph-to-deep-dive).
+
+> [!TIP]
+> After confirming the data on the user entity page for the specific user associated with the incident, go to the Azure Sentinel **Hunting** area to understand whether the user's peers usually connect from the same locations as well. If so, this knowledge would make an even stronger case for a false positive.
+>
+> In the **Hunting** area, run the **Anomalous Geo Location Logon** query. For more information, see [Hunt for threats with Azure Sentinel](hunting.md).
+>
+
+## Identify password spray and spear phishing attempts
+
+Without multi-factor authentication (MFA) enabled, user credentials are vulnerable to attackers looking to compromise accounts with [password spraying](https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/) or [spear phishing](https://www.microsoft.com/security/blog/2019/12/02/spear-phishing-campaigns-sharper-than-you-think/) attempts.
+
+### Investigate a password spray incident with UEBA insights
+
+For example, to investigate a password spray incident with UEBA insights, you might do the following to learn more:
+
+1. In the incident, on the bottom left, select **Investigate** to view the accounts, machines, and other data points that were potentially targeted in an attack.
+
+ Browsing through the data, you might see an administrator account with a relatively large number of logon failures. While this is suspicious, you might not want to restrict the account without further confirmation.
+
+1. Select the administrative user entity in the map, and then select **Insights** on the right to find more details, such as the graph of sign-ins over time.
+
+1. Select **Info** on the right, and then select **View full details** to jump to the [user entity page](identify-threats-with-entity-behavior-analytics.md#entity-pages) to drill down further.
+
+ For example, note whether this is the user's first potential password spray incident, or review the user's sign-in history to understand whether the failures were anomalous.
+
+> [!TIP]
+> You can also run the **Anomalous Failed Logon** [hunting query](hunting.md) to monitor all of an organization's anomalous failed logins. Use the results from the query to start investigations into possible password spray attacks.
+>
+
+## Next steps
+
+Learn more about UEBA, investigations, and hunting:
+
+- [Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Azure Sentinel](identify-threats-with-entity-behavior-analytics.md)
+- [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md)
+- [Tutorial: Investigate incidents with Azure Sentinel](tutorial-investigate-cases.md)
+- [Hunt for threats with Azure Sentinel](hunting.md)
sentinel Ueba Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ueba-enrichments.md
# Azure Sentinel UEBA enrichments reference
-These tables list and describe entity enrichments that can be used to focus and sharpen your investigation of security incidents.
+This article describes the **Behavior analytics** table found on the [entity details pages](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages), as well as other entity enrichments you can use to focus and sharpen your security incident investigations.
-The first two tables, **User insights** and **Device insights**, contain entity information from Active Directory / Azure AD and Microsoft Threat Intelligence sources.
+The [User insights table](#user-insights-table) and the [Device insights table](#device-insights-table) contain entity information from Active Directory / Azure AD and Microsoft Threat Intelligence sources.
-<a name="baseline-explained"></a>The rest of the tables, under **Activity insights tables**, contain entity information based on the behavioral profiles built by Azure Sentinel's entity behavior analytics. The activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which this dynamic baseline is derived. This period is specified in the [**Baseline**](#activity-insights-tables) column in this table.
+Other tables, described under [Activity insights tables](#activity-insights-tables), contain entity information based on the behavioral profiles built by Azure Sentinel's entity behavior analytics.
+
+<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its own defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activity-insights-tables) column in each of the activity insights tables.
> [!NOTE]
-> The **Enrichment name** field in all three tables displays two rows of information. The first, in **bold**, is the "friendly name" of the enrichment. The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](identify-threats-with-entity-behavior-analytics.md#data-schema).
+> The **Enrichment name** field in the [User insights table](#user-insights-table), [Device insights table](#device-insights-table), and the [Activity insights tables](#activity-insights-tables) displays two rows of information.
+>
+> The first, in **bold**, is the "friendly name" of the enrichment. The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](#behavior-analytics-table).
+
+## Behavior analytics table
+
+The following table describes the behavior analytics data displayed on each [entity details page](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages) in Azure Sentinel.
+
+| Field | Description |
+| --- | --- |
+| **TenantId** | The unique ID number of the tenant. |
+| **SourceRecordId** | The unique ID number of the EBA event. |
+| **TimeGenerated** | The timestamp of the activity's occurrence. |
+| **TimeProcessed** | The timestamp of the activity's processing by the EBA engine. |
+| **ActivityType** | The high-level category of the activity. |
+| **ActionType** | The normalized name of the activity. |
+| **UserName** | The username of the user who initiated the activity. |
+| **UserPrincipalName** | The full username of the user who initiated the activity. |
+| **EventSource** | The data source that provided the original event. |
+| **SourceIPAddress** | The IP address from which the activity was initiated. |
+| **SourceIPLocation** | The country from which the activity was initiated, enriched from the IP address. |
+| **SourceDevice** | The hostname of the device that initiated the activity. |
+| **DestinationIPAddress** | The IP address of the target of the activity. |
+| **DestinationIPLocation** | The country of the target of the activity, enriched from the IP address. |
+| **DestinationDevice** | The name of the target device. |
+| **UsersInsights** | Contextual enrichments of the involved users. |
+| **DevicesInsights** | Contextual enrichments of the involved devices. |
+| **ActivityInsights** | Contextual analysis of the activity, based on behavioral profiling. |
+| **InvestigationPriority** | The anomaly score, between 0 and 10 (0 = benign, 10 = highly anomalous). |
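+
+For orientation, the sketch below shows how a single record with these fields might look. Every value here is invented for illustration only, and the `ActivityInsights` flag shown is just one of the enrichments described later in this article:
+
+```json
+{
+  "TenantId": "11111111-2222-3333-4444-555555555555",
+  "SourceRecordId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
+  "TimeGenerated": "2021-05-10T08:15:27Z",
+  "TimeProcessed": "2021-05-10T08:17:02Z",
+  "ActivityType": "LogOn",
+  "ActionType": "InteractiveLogon",
+  "UserName": "HaydenC",
+  "UserPrincipalName": "haydenc@contoso.com",
+  "EventSource": "Azure AD",
+  "SourceIPAddress": "203.0.113.12",
+  "SourceIPLocation": "Norway",
+  "SourceDevice": "DESKTOP-HC01",
+  "ActivityInsights": { "CountryUncommonlyConnectedFromInTenant": "True" },
+  "InvestigationPriority": 8
+}
+```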
## User insights table
+The following table describes the enrichments listed in the **User insights** table on the Azure Sentinel entity details pages.
+
| Enrichment name | Description | Sample value |
| --- | --- | --- |
| **Account display name**<br>*(AccountDisplayName)* | The account display name of the user. | Admin, Hayden Cook |
The first two tables, **User insights** and **Device insights**, contain entity
## Activity insights tables
-#### Action performed
+### Action performed
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Action uncommonly performed in tenant**<br>*(ActionUncommonlyPerformedInTenant)* | 180 | The action is not commonly performed in the organization. | True, False |
-#### App used
+### App used
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **App uncommonly used in tenant**<br>*(AppUncommonlyUsedInTenant)* | 180 | The app is not commonly used in the organization. | True, False |
-#### Browser used
+### Browser used
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Browser uncommonly used in tenant**<br>*(BrowserUncommonlyUsedInTenant)* | 30 | The browser is not commonly used in the organization. | True, False |
-#### Country connected from
+### Country connected from
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Country uncommonly connected from in tenant**<br>*(CountryUncommonlyConnectedFromInTenant)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from in the organization. | True, False |
-#### Device used to connect
+### Device used to connect
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Device uncommonly used in tenant**<br>*(DeviceUncommonlyUsedInTenant)* | 180 | The device is not commonly used in the organization. | True, False |
-#### Other device-related
+### Other device-related
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Device family uncommonly used in tenant**<br>*(DeviceFamilyUncommonlyUsedInTenant)* | 30 | The device family is not commonly used in the organization. | True, False |
-#### Internet Service Provider used to connect
+### Internet Service Provider used to connect
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **ISP uncommonly used in tenant**<br>*(ISPUncommonlyUsedInTenant)* | 30 | The ISP is not commonly used in the organization. | True, False |
-#### Resource accessed
+### Resource accessed
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
The first two tables, **User insights** and **Device insights**, contain entity
| **Resource uncommonly accessed in tenant**<br>*(ResourceUncommonlyAccessedInTenant)* | 180 | The resource is not commonly accessed in the organization. | True, False |
-#### Miscellaneous
+### Miscellaneous
| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
| --- | --- | --- | --- |
service-fabric Faq Managed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/faq-managed-cluster.md
description: Frequently asked questions about Service Fabric managed clusters, i
Previously updated : 02/15/2021 Last updated : 5/10/2021

# Service Fabric managed clusters frequently asked questions
-Here are some frequently asked questions (FAQs) and answers for Service Fabric managed clusters (preview).
+Here are some frequently asked questions (FAQs) and answers for Service Fabric managed clusters.
## General
Here are some frequently asked questions (FAQs) and answers for Service Fabric m
Service Fabric managed clusters are an evolution of the Service Fabric cluster resource model designed to make it easier to deploy and manage clusters. A Service Fabric managed cluster uses the Azure Resource Manager encapsulation model so that a user only needs to define and deploy a single cluster resource compared to the many independent resources that they must deploy today (Virtual Machine Scale Set, Load Balancer, IP, and more).
-### What regions are supported in the preview?
+### What regions are supported?
-Supported regions for the Service Fabric managed clusters preview include `centraluseuap`, `eastus2euap`, `eastasia`, `northeurope`, `westcentralus`, and `eastus2`.
+Service Fabric managed clusters are supported in all public cloud regions.
### Can I do an in-place migration of my existing Service Fabric cluster to a managed cluster resource?
-No. At this time you would need to create a new Service Fabric cluster resource to use the new Service Fabric managed cluster resource type.
+No. You will need to create a new Service Fabric cluster resource to use the new Service Fabric managed cluster resource type.
### Is there an additional cost for Service Fabric managed clusters?
No. It isn't currently possible to have an internal-only load balancer. We recom
### Can I autoscale my cluster?
-Autoscaling is not currently available in the preview.
+Autoscaling is not currently supported.
### Can I deploy my cluster across availability zones?
-Cross availability zone clusters are not currently available in the preview.
+Yes, Service Fabric managed clusters that span availability zones are supported in Azure regions that support availability zones. For more information, see [Service Fabric managed clusters across availability zones](./service-fabric-cross-availability-zones.md).
+
+### Can I deploy stateless node types on a Service Fabric managed cluster?
+
+Yes, Service Fabric managed clusters support stateless node types for any secondary node types. For more information, see [Service Fabric managed cluster stateless node types](./how-to-managed-cluster-stateless-node-type.md).
### Can I select between automatic and manual upgrades for my cluster runtime?
-In the preview, all runtime upgrades will be completed automatically.
+Yes, you can select between automatic and manual upgrades. For more information, see [cluster upgrades](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-upgrade).
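+
+As an illustrative sketch only, not taken from the linked article: this assumes the managed cluster resource exposes `clusterUpgradeMode` (`Automatic` or `Manual`) and a pinnable `clusterCodeVersion`, analogous to the classic cluster resource. Confirm both property names and the supporting API version in the ARM template reference before relying on them.
+
+```json
+{
+  "apiVersion": "2021-05-01",
+  "type": "Microsoft.ServiceFabric/managedclusters",
+  ...
+  "properties": {
+    "clusterUpgradeMode": "Manual",
+    "clusterCodeVersion": "[parameters('clusterCodeVersion')]"
+  }
+}
+```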
## Applications
The local development experience remains unchanged from existing Service Fabric
### Can I deploy my applications as an Azure Resource Manager resource? Yes. Support has been added to deploy applications as an Azure Resource Manager resource (in addition to deployment using PowerShell and CLI). To get started, see [Deploy a Service Fabric managed cluster application using ARM template](how-to-managed-cluster-app-deployment-template.md).+
+### Can I deploy applications with managed identities?
+
+ Yes, applications with managed identities can be deployed to a Service Fabric managed cluster. For more information, see [Application managed identities](./concepts-managed-identity.md).
service-fabric How To Enable Managed Cluster Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-enable-managed-cluster-disk-encryption.md
Title: Enable disk encryption for Service Fabric managed cluster (preview) nodes
+ Title: Enable disk encryption for Service Fabric managed cluster nodes
description: Learn how to enable disk encryption for Azure Service Fabric managed cluster nodes in Windows using an ARM template. Previously updated : 02/15/2021 Last updated : 5/10/2021
-# Enable disk encryption for Service Fabric managed cluster (preview) nodes
+# Enable disk encryption for Service Fabric managed cluster nodes
In this guide, you'll learn how to enable disk encryption on Service Fabric managed cluster nodes in Windows using the [Azure Disk Encryption](../virtual-machines/windows/disk-encryption-overview.md) capability for [virtual machine scale sets](../virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md) through Azure Resource Manager (ARM) templates.
Azure Disk Encryption requires an Azure Key Vault to control and manage disk enc
### Create Key Vault with disk encryption enabled
-Run the following commands to create a new Key Vault for disk encryption. Make sure the region for your Key Vault is [supported for Service Fabric managed clusters](faq-managed-cluster.md#what-regions-are-supported-in-the-preview) and is in the same region as your cluster.
+Run the following commands to create a new Key Vault for disk encryption. Make sure the region for your Key Vault is [supported for Service Fabric managed clusters](faq-managed-cluster.md#what-regions-are-supported) and is in the same region as your cluster.
# [PowerShell](#tab/azure-powershell)
EncryptionExtensionInstalled : True
[Azure Disk Encryption for Windows VMs](../virtual-machines/windows/disk-encryption-overview.md)
-[Encrypt virtual machine scale sets with Azure Resource Manager](../virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md)
+[Encrypt virtual machine scale sets with Azure Resource Manager](../virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md)
service-fabric How To Managed Cluster App Deployment Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-app-deployment-template.md
Title: Deploy a Service Fabric managed cluster (preview) application using ARM template
-description: Deploy an application to a Azure Service Fabric managed cluster (preview) using an Azure Resource Manager template.
+ Title: Deploy a Service Fabric managed cluster application using ARM template
+description: Deploy an application to an Azure Service Fabric managed cluster using an Azure Resource Manager template.
Previously updated : 02/15/2021 Last updated : 5/10/2021
-# Deploy a Service Fabric managed cluster (preview) application using ARM template
+# Deploy a Service Fabric managed cluster application using ARM template
You have multiple options for deploying Azure Service Fabric applications on your Service Fabric managed cluster. We recommend using Azure Resource Manager. If you use Resource Manager, you can describe applications and services in JSON, and then deploy them in the same Resource Manager template as your cluster. Unlike using PowerShell or Azure CLI to deploy and manage applications, if you use Resource Manager, you don't have to wait for the cluster to be ready; application registration, provisioning, and deployment can all happen in one step. Using Resource Manager is the best way to manage the application life cycle in your cluster. For more information, see [Best practices: Infrastructure as code](service-fabric-best-practices-infrastructure-as-code.md#azure-service-fabric-resources).
The sample application contains [Azure Resource Manager templates](https://githu
```json {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters/applications", "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'))]", "location": "[variables('clusterLocation')]", }, {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters/applicationTypes", "name": "[concat(parameters('clusterName'), '/', parameters('applicationTypeName'))]", "location": "[variables('clusterLocation')]", }, {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters/applicationTypes/versions", "name": "[concat(parameters('clusterName'), '/', parameters('applicationTypeName'), '/', parameters('applicationTypeVersion'))]", "location": "[variables('clusterLocation')]", }, {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters/applications/services", "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'), '/', parameters('serviceName'))]", "location": "[variables('clusterLocation')]"
To delete an application that was deployed by using the application resource mod
## Next steps
-Get information about the application resource model:
+Learn more about managed cluster application deployment:
-* [Model an application in Service Fabric](service-fabric-application-model.md)
-* [Service Fabric application and service manifests](service-fabric-application-and-service-manifests.md)
-* [Best practices: Infrastructure as code](service-fabric-best-practices-infrastructure-as-code.md#azure-service-fabric-resources)
-* [Manage applications and services as Azure resources](service-fabric-best-practices-infrastructure-as-code.md)
+* [Deploy managed cluster application secrets](how-to-managed-cluster-application-secrets.md)
+* [Deploy managed cluster applications with managed identity](how-to-managed-cluster-application-managed-identity.md)
<!--Image references-->
service-fabric How To Managed Cluster Application Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-application-managed-identity.md
+
+ Title: Configure and use application managed identity on Service Fabric managed cluster nodes
+description: Learn how to configure and use an application managed identity on an ARM template-deployed Azure Service Fabric managed cluster.
+ Last updated : 5/10/2021++
+# Deploy a Service Fabric application with Managed Identity
+
+To deploy a Service Fabric application with managed identity, the application needs to be deployed through Azure Resource Manager, typically with an Azure Resource Manager template. For more information on how to deploy a Service Fabric application through Azure Resource Manager, see [Manage applications and services as Azure Resource Manager resources](service-fabric-application-arm-resource.md).
+
+> [!NOTE]
+>
+> Applications which are not deployed as an Azure resource **cannot** have Managed Identities.
+>
+> Service Fabric application deployment with Managed Identity is supported with API version `"2021-05-01"` on managed clusters.
+
+Sample managed cluster templates are available here: [Service Fabric managed cluster templates](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+
+## Managed identity support in Service Fabric managed cluster
+
+When a Service Fabric application is configured with [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and deployed to the cluster, it triggers automatic configuration of the *Managed Identity Token Service* on the Service Fabric managed cluster. This service is responsible for authenticating Service Fabric applications using their managed identities, and for obtaining access tokens on their behalf. Once the service is enabled, you can see it in Service Fabric Explorer under the **System** section in the left pane, running under the name **fabric:/System/ManagedIdentityTokenService**.
+
+>[!NOTE]
+>The first time an application is deployed with managed identities, you should expect a one-time longer deployment due to the automatic cluster configuration change. Expect this to take from 15 minutes for a zonal cluster to 45 minutes for a zone-spanning cluster. If there are any other deployments in flight, the Managed Identity Token Service configuration will wait for those to complete first.
+
+The application resource supports assignment of either a system-assigned or a user-assigned identity; assignment is done as shown in the snippet below.
+
+```json
+{
+ "type": "Microsoft.ServiceFabric/managedclusters/applications",
+ "apiVersion": "2021-05-01",
+ "identity": {
+ "type": "SystemAssigned",
+ "userAssignedIdentities": {}
+ },
+}
+
+```
+[Complete JSON reference](https://docs.microsoft.com/azure/templates/microsoft.servicefabric/2021-05-01/managedclusters/applications?tabs=json)
+
+## User-Assigned Identity
+
+To enable an application with a user-assigned identity, first add the **identity** property to the application resource with type **userAssigned** and the referenced user-assigned identities. Then add a **managedIdentities** section inside the **properties** section of the **application** resource, containing a list of friendly name to principalId mappings for each of the user-assigned identities. For more information about user-assigned identities, see [Create, list or delete a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md).
+
+### Application template
+
+The following template snippet shows this configuration:
+
+```json
+{
+ "apiVersion": "2021-05-01",
+ "type": "Microsoft.ServiceFabric/managedclusters/applications",
+ "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'))]",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[parameters('applicationVersion')]",
+ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('userAssignedIdentityName'))]"
+ ],
+ "identity": {
+ "type" : "userAssigned",
+ "userAssignedIdentities": {
+ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('userAssignedIdentityName'))]": {}
+ }
+ },
+ "properties": {
+ "version": "[parameters('applicationVersion')]",
+ "parameters": {
+ },
+ "managedIdentities": [
+ {
+ "name" : "[parameters('userAssignedIdentityName')]",
+ "principalId" : "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('userAssignedIdentityName')), '2018-11-30').principalId]"
+ }
+ ]
+ }
+}
+```
+
+In the example above, the resource name of the user-assigned identity is used as the friendly name of the managed identity for the application. The following examples assume the actual friendly name is "AdminUser".
+
+### Application package
+
+1. For each identity defined in the `managedIdentities` section in the Azure Resource Manager template, add a `<ManagedIdentity>` tag in the application manifest under the **Principals** section. The `Name` attribute needs to match the `name` property defined in the `managedIdentities` section.
+
+ **ApplicationManifest.xml**
+
+ ```xml
+ <Principals>
+ <ManagedIdentities>
+ <ManagedIdentity Name="AdminUser" />
+ </ManagedIdentities>
+ </Principals>
+ ```
+
+2. In the **ServiceManifestImport** section, add an **IdentityBindingPolicy** for the service that uses the managed identity. This policy maps the `AdminUser` identity to a service-specific identity name that needs to be added into the service manifest later on.
+
+ **ApplicationManifest.xml**
+
+ ```xml
+ <ServiceManifestImport>
+ <Policies>
+ <IdentityBindingPolicy ServiceIdentityRef="WebAdmin" ApplicationIdentityRef="AdminUser" />
+ </Policies>
+ </ServiceManifestImport>
+ ```
+
+3. Update the service manifest to add a **ManagedIdentity** inside the **Resources** section with the name matching the `ServiceIdentityRef` in the `IdentityBindingPolicy` of the application manifest:
+
+ **ServiceManifest.xml**
+
+ ```xml
+ <Resources>
+ ...
+ <ManagedIdentities DefaultIdentity="WebAdmin">
+ <ManagedIdentity Name="WebAdmin" />
+ </ManagedIdentities>
+ </Resources>
+ ```
+
+## System-assigned managed identity
+
+### Application template
+
+To enable an application with a system-assigned managed identity, add the **identity** property to the application resource, with type **systemAssigned** as shown in the example below:
+
+```json
+ {
+ "apiVersion": "2021-05-01",
+ "type": "Microsoft.ServiceFabric/managedclusters/applications",
+ "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'))]",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/clusters/', parameters('clusterName'), '/applicationTypes/', parameters('applicationTypeName'), '/versions/', parameters('applicationTypeVersion'))]"
+ ],
+ "identity": {
+ "type" : "systemAssigned"
+ },
+ "properties": {
+ "typeName": "[parameters('applicationTypeName')]",
+ "typeVersion": "[parameters('applicationTypeVersion')]",
+ "parameters": {
+ }
+ }
+ }
+```
+This property declares, to Azure Resource Manager and to the Managed Identity and Service Fabric resource providers respectively, that this resource shall have an implicit (`system assigned`) managed identity.
+
+### Application and service package
+
+1. Update the application manifest to add a **ManagedIdentity** element in the **Principals** section, containing a single entry as shown below:
+
+ **ApplicationManifest.xml**
+
+ ```xml
+ <Principals>
+ <ManagedIdentities>
+ <ManagedIdentity Name="SystemAssigned" />
+ </ManagedIdentities>
+ </Principals>
+ ```
+ This maps the identity assigned to the application as a resource to a friendly name, for further assignment to the services comprising the application.
+
+2. In the **ServiceManifestImport** section corresponding to the service that is being assigned the managed identity, add an **IdentityBindingPolicy** element, as indicated below:
+
+ **ApplicationManifest.xml**
+
+ ```xml
+ <ServiceManifestImport>
+ <Policies>
+ <IdentityBindingPolicy ServiceIdentityRef="WebAdmin" ApplicationIdentityRef="SystemAssigned" />
+ </Policies>
+ </ServiceManifestImport>
+ ```
+
+ This element assigns the identity of the application to the service; without this assignment, the service will not be able to access the identity of the application. In the snippet above, the `SystemAssigned` identity (which is a reserved keyword) is mapped to the service's definition under the friendly name `WebAdmin`.
+
+3. Update the service manifest to add a **ManagedIdentity** element inside the **Resources** section with the name matching the value of the `ServiceIdentityRef` setting from the `IdentityBindingPolicy` definition in the application manifest:
+
+ **ServiceManifest.xml**
+
+ ```xml
+ <Resources>
+ ...
+ <ManagedIdentities DefaultIdentity="WebAdmin">
+ <ManagedIdentity Name="WebAdmin" />
+ </ManagedIdentities>
+ </Resources>
+ ```
+ This is the equivalent mapping of an identity to a service as described above, but from the perspective of the service definition. The identity is referenced here by its friendly name (`WebAdmin`), as declared in the application manifest.
+
+## Next steps
+
+* [Leverage the managed identity of a Service Fabric application from service code](./how-to-managed-identity-service-fabric-app-code.md)
+* [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)
service-fabric How To Managed Cluster Application Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-application-secrets.md
+
+ Title: Use application secrets in Service Fabric managed clusters
+description: Learn about Azure Service Fabric application secrets and how to gather info required for use in managed clusters
+ Last updated : 5/10/2021++
+# Use application secrets in Service Fabric managed clusters
+
+Secrets can be any sensitive information, such as storage connection strings, passwords, or other values that should not be handled in plain text. This article uses Azure Key Vault to manage keys and secrets for Service Fabric managed clusters. However, *using* secrets in an application is cloud platform-agnostic, allowing applications to be deployed to a cluster hosted anywhere.
+
+The recommended way to manage service configuration settings is through [service configuration packages][config-package]. Configuration packages are versioned and updatable through managed rolling upgrades with health-validation and auto rollback. This is preferred to global configuration as it reduces the chances of a global service outage. Encrypted secrets are no exception. Service Fabric has built-in features for encrypting and decrypting values in a configuration package Settings.xml file using certificate encryption.
+
+The following diagram illustrates the basic flow for secret management in a Service Fabric application:
+
+![secret management overview][overview]
+
+There are four main steps in this flow:
+
+1. Obtain a data encipherment certificate.
+2. Install the certificate in your cluster.
+3. Encrypt secret values when deploying an application with the certificate and inject them into a service's Settings.xml configuration file.
+4. Read encrypted values out of Settings.xml by decrypting with the same encipherment certificate.
+
+[Azure Key Vault][key-vault-get-started] is used here as a safe storage location for certificates and as a way to get certificates installed on the Service Fabric managed cluster nodes in Azure.
+
+For an example of how to implement application secrets, see [Manage application secrets](service-fabric-application-secret-management.md).
+
+Alternatively, we also support [KeyVaultReference](service-fabric-keyvault-references.md). Service Fabric KeyVaultReference support makes it easy to deploy secrets to your applications simply by referencing the URL of the secret that is stored in Key Vault.
+
+## Create a data encipherment certificate
+To create your own key vault and set up certificates, follow the instructions from Azure Key Vault by using the [Azure CLI, PowerShell, Portal, and more][key-vault-certs].
+
+>[!NOTE]
+> The key vault must be [enabled for template deployment](https://docs.microsoft.com/azure/key-vault/general/manage-with-cli2#bkmk_KVperCLI) to allow the compute resource provider to get certificates from it and install it on cluster nodes.
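+
+As a minimal sketch (the parameter name is a placeholder, not taken from the starter templates), a key vault created through an ARM template with the deployment flags enabled might look like this:
+
+```json
+{
+  "type": "Microsoft.KeyVault/vaults",
+  "apiVersion": "2019-09-01",
+  "name": "[parameters('keyVaultName')]",
+  "location": "[resourceGroup().location]",
+  "properties": {
+    "tenantId": "[subscription().tenantId]",
+    "sku": { "family": "A", "name": "standard" },
+    "accessPolicies": [],
+    "enabledForDeployment": true,
+    "enabledForTemplateDeployment": true
+  }
+}
+```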
+
+## Install the certificate in your cluster
+This certificate must be installed on each node in the cluster, and Service Fabric managed clusters help make this easy. The managed cluster service can push version-specific secrets to the nodes, which helps install secrets that won't change often, such as a private root CA certificate. For most production workloads we suggest using the [Key Vault VM extension][key-vault-windows], which provides automatic refresh of certificates stored in an Azure key vault rather than installing a static version.
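+
+As a hedged sketch of that recommendation: the exact shape of `vmExtensions` on managed cluster node types and the extension setting names should be verified against the [Key Vault VM extension][key-vault-windows] documentation before use.
+
+```json
+{
+  "apiVersion": "2021-05-01",
+  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+  ...
+  "properties": {
+    "vmExtensions": [
+      {
+        "name": "KVVMExtensionForWindows",
+        "properties": {
+          "publisher": "Microsoft.Azure.KeyVault",
+          "type": "KeyVaultForWindows",
+          "typeHandlerVersion": "1.0",
+          "autoUpgradeMinorVersion": true,
+          "settings": {
+            "secretsManagementSettings": {
+              "pollingIntervalInS": "3600",
+              "certificateStoreName": "MY",
+              "certificateStoreLocation": "LocalMachine",
+              "observedCertificates": [ "https://mykeyvault1.vault.azure.net/secrets/mycert" ]
+            }
+          }
+        }
+      }
+    ]
+  }
+}
+```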
+
+For managed clusters you'll need three values: two from Azure Key Vault, and one you decide on for the local certificate store name on the nodes.
+
+Parameters:
+* Source Vault: The resource ID of the key vault, for example:
+ * /subscriptions/{subscriptionid}/resourceGroups/myrg1/providers/Microsoft.KeyVault/vaults/mykeyvault1
+* Certificate URL: The full object identifier of the certificate, which is case-insensitive and immutable, for example:
+ * https://mykeyvault1.vault.azure.net/secrets/{secretname}/{secret-version}
+* Certificate Store: The local certificate store on the nodes where the certificate will be placed, for example:
+ * "MY"
+
+Service Fabric managed clusters support two methods for adding version-specific secrets to your nodes.
+
+1. Portal, during the initial cluster creation only.
+Insert the values from above into this area:
+
+![portal secrets input][sfmc-secrets]
+
+2. Azure Resource Manager, at creation time or any time afterward.
+
+```json
+{
+ "apiVersion": "2021-05-01",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "properties": {
+ "vmSecrets": [
+ {
+ "sourceVault": {
+ "id": "/subscriptions/{subscriptionid}/resourceGroups/myrg1/providers/Microsoft.KeyVault/vaults/mykeyvault1"
+ },
+ "vaultCertificates": [
+ {
+ "certificateStore": "MY",
+ "certificateUrl": "https://mykeyvault1.vault.azure.net/certificates/{certificatename}/{secret-version}"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+<!-- Links -->
+[key-vault-get-started]:../key-vault/general/overview.md
+[key-vault-certs]: ../key-vault/certificates/quick-create-cli.md
+[config-package]: service-fabric-application-and-service-manifests.md
+[key-vault-windows]: ../virtual-machines/extensions/key-vault-windows.md
+
+<!-- Images -->
+[overview]:./media/service-fabric-application-and-service-security/overview.png
+[sfmc-secrets]:./media/how-to-managed-cluster-application-secrets/sfmc-secrets.png
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-availability-zones.md
+
+ Title: Deploy a Service Fabric managed cluster across Availability Zones
+description: Learn how to deploy Service Fabric managed cluster across Availability Zones and how to configure in an ARM template.
+ Last updated : 5/10/2021+
+# Deploy a Service Fabric managed cluster across availability zones
+
+Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
+
+Service Fabric managed clusters support deployments that span multiple Availability Zones to provide zone resiliency. This configuration ensures high availability of the critical system services and your applications, protecting against single points of failure. Azure Availability Zones are only available in select regions. For more information, see [Azure Availability Zones Overview](../availability-zones/az-overview.md).
+
+>[!NOTE]
+>Availability Zone spanning is only available on Standard SKU clusters.
+
+Sample templates are available: [Service Fabric cross availability zone template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+
+## Recommendations for zone resilient Azure Service Fabric managed clusters
+A Service Fabric cluster distributed across Availability Zones ensures high availability of the cluster state.
+
+The recommended topology for a managed cluster requires the resources outlined below:
+
+* The cluster SKU must be Standard.
+* The primary node type should have at least nine nodes for best resiliency; a minimum of six is supported.
+* Secondary node type(s) should have at least six nodes for best resiliency; a minimum of three is supported.
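+
+As a rough sketch, the node counts above correspond to the `vmInstanceCount` property on each node type resource (other required node type properties are omitted, and the node type name is a placeholder):
+
+```json
+{
+  "apiVersion": "2021-05-01",
+  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+  "name": "[concat(parameters('clusterName'), '/NT1')]",
+  "properties": {
+    "isPrimary": true,
+    "vmInstanceCount": 9,
+    ...
+  }
+}
+```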
+
+>[!NOTE]
+>Only 3 Availability Zone deployments are supported.
+
+>[!NOTE]
+> It is not possible to do an in-place change of a managed cluster from non-spanning to a spanned cluster.
+
+The following diagram shows the Azure Service Fabric Availability Zone architecture:
+ ![Azure Service Fabric Availability Zone Architecture][sf-multi-az-arch]
+
+Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones
+
+ ![Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones.][sfmc-multi-az-nodes]
+
+**Distribution of Service replicas across zones**:
+When a service is deployed on node types that span zones, the replicas are placed to ensure they land in separate zones. This separation is ensured because the fault domains on the nodes in each of these node types are configured with the zone information (for example, FD = fd:/zone1/1). For example, for five replicas or instances of a service, the distribution will be 2-2-1, and the runtime will try to ensure equal distribution across availability zones.
+
+**User Service Replica Configuration**:
+Stateful user services deployed on node types that span availability zones should be configured with a target replica count of 9 and a minimum of 5. This configuration helps the service keep working even when one zone goes down, since 6 replicas will still be up in the other two zones. An application upgrade in such a scenario will also go through. A sketch of this configuration follows.
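+
+A minimal sketch of that replica configuration on the ARM service resource (service and type names are placeholders; verify property names against the managed cluster services reference):
+
+```json
+{
+  "apiVersion": "2021-05-01",
+  "type": "Microsoft.ServiceFabric/managedclusters/applications/services",
+  "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'), '/', parameters('serviceName'))]",
+  "properties": {
+    "serviceKind": "Stateful",
+    "serviceTypeName": "[parameters('serviceTypeName')]",
+    "partitionDescription": { "partitionScheme": "Singleton" },
+    "hasPersistedState": true,
+    "targetReplicaSetSize": 9,
+    "minReplicaSetSize": 5
+  }
+}
+```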
+
+**Zone down scenario**:
+When a zone goes down, all the nodes in that zone will appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive with primary replicas failing over to the zones which are functioning. The services will appear in warning state as the target replica count is not met and the VM count is still more than the defined min target replica size. As a result, Service Fabric load balancer will bring up replicas in the working zones to match the configured target replica count. At this point, the services will appear healthy. When the zone which was down comes back up, the load balancer will again spread all the service replicas evenly across all the zones.
+
+## Networking Configuration
+For more information, see [Configure network settings for Service Fabric managed clusters](https://docs.microsoft.com/azure/service-fabric/how-to-managed-cluster-networking)
+
+## Enabling a zone resilient Azure Service Fabric managed cluster
+To enable a zone resilient Azure Service Fabric managed cluster, you must include the following in the managed cluster resource definition.
+
+* The **ZonalResiliency** property, which specifies if the cluster is zone resilient or not.
+
+```json
+{
+ "apiVersion": "2021-05-01",
+ "type": "Microsoft.ServiceFabric/managedclusters",
+ "ZonalResiliency": "true"
+
+}
+```
+[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png
+[sf-multi-az-arch]: ./media/service-fabric-cross-availability-zones/sf-multi-az-topology.png
+[sfmc-multi-az-nodes]: ./media/how-to-managed-cluster-availability-zones/sfmc-multi-az-nodes.png
service-fabric How To Managed Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-configuration.md
Title: Configure your Service Fabric managed cluster (preview)
+ Title: Configure your Service Fabric managed cluster
description: Learn how to configure your Service Fabric managed cluster for automatic OS upgrades, NSG rules, and more. Previously updated : 02/15/2021 Last updated : 5/10/2021
-# Service Fabric managed cluster (preview) configuration options
+# Service Fabric managed cluster configuration options
-In addition to selecting the [Service Fabric managed cluster SKU](overview-managed-cluster.md#service-fabric-managed-cluster-skus) when creating your cluster, there are a number of other ways to configure it. In the current preview, you can:
+In addition to selecting the [Service Fabric managed cluster SKU](overview-managed-cluster.md#service-fabric-managed-cluster-skus) when creating your cluster, there are a number of other ways to configure it, including:
-* Configure [networking options](how-to-managed-cluster-networking.md) for your cluster
-* Add a [virtual machine scale set extension](how-to-managed-cluster-vmss-extension.md) to a node type
-* Configure [managed identity](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md) on your node types
-* Enable [automatic OS upgrades](how-to-managed-cluster-configuration.md#enable-automatic-os-image-upgrades) for your nodes
-* Enable [OS and data disk encryption](how-to-enable-managed-cluster-disk-encryption.md) on your nodes
+* Adding a [virtual machine scale set extension](how-to-managed-cluster-vmss-extension.md) to a node type
+* Configuring cluster [availability zone spanning](how-to-managed-cluster-availability-zones.md)
+* Configuring cluster [NSG rules and other networking options](how-to-managed-cluster-networking.md)
+* Configuring [managed identity](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md) on cluster node types
+* Enabling [automatic OS upgrades](how-to-managed-cluster-configuration.md#enable-automatic-os-image-upgrades) for cluster nodes
+* Enabling [OS and data disk encryption](how-to-enable-managed-cluster-disk-encryption.md) on cluster nodes
+* Selecting the cluster [managed disk type](how-to-managed-cluster-managed-disk.md) SKU
## Enable automatic OS image upgrades
You can choose to enable automatic OS image upgrades to the virtual machines run
To enable automatic OS upgrades:
-* Use the `2021-01-01-preview` (or later) version of *Microsoft.ServiceFabric/managedclusters* and *Microsoft.ServiceFabric/managedclusters/nodetypes* resources
+* Use the `2021-05-01` (or later) version of *Microsoft.ServiceFabric/managedclusters* and *Microsoft.ServiceFabric/managedclusters/nodetypes* resources
* Set the cluster's property `enableAutoOSUpgrade` to *true* * Set the cluster nodeTypes' resource property `vmImageVersion` to *latest*
For example:
```json {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters", ... "properties": {
For example:
}, }, {
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters/nodetypes", ... "properties": {
service-fabric How To Managed Cluster Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-grant-access-other-resources.md
+
+ Title: Grant an application access to other Azure resources on a Service Fabric managed cluster
+description: This article explains how to grant your managed-identity-enabled Service Fabric application access to other Azure resources supporting Azure Active Directory-based authentication on a Service Fabric managed cluster.
++ Last updated : 5/10/2021++
+# Granting a Service Fabric application's managed identity access to Azure resources on a Service Fabric managed cluster
+
+Before the application can use its managed identity to access other resources, permissions must be granted to that identity on the protected Azure resource being accessed. Granting permissions is typically a management action on the 'control plane' of the Azure service owning the protected resource, routed via Azure Resource Manager, which enforces any applicable role-based access checks.
+
+The exact sequence of steps will then depend on the type of Azure resource being accessed, as well as the language/client used to grant permissions. The remainder of the article assumes a user-assigned identity assigned to the application and includes several typical examples for your convenience, but it is in no way an exhaustive reference for this topic; consult the documentation of the respective Azure services for up-to-date instructions on granting permissions.
+
+## Granting access to Azure Storage
+You can use the Service Fabric application's managed identity (user-assigned in this case) to retrieve the data from an Azure storage blob. Grant the identity the required permissions in the Azure portal with the following steps:
+
+1. Navigate to the storage account.
+2. Click the Access control (IAM) link in the left panel.
+3. (Optional) Check existing access: select System- or User-assigned managed identity in the 'Find' control, then select the appropriate identity from the resulting list.
+4. Click + Add role assignment at the top of the page to add a new role assignment for the application's identity. Under Role, from the dropdown, select Storage Blob Data Reader.
+5. In the next dropdown, under Assign access to, choose `User assigned managed identity`.
+6. Next, ensure the proper subscription is listed in Subscription dropdown and then set Resource Group to All resource groups.
+7. Under Select, choose the UAI corresponding to the Service Fabric application and then click Save.
+
+Support for system-assigned Service Fabric managed identities does not include integration in the Azure portal; if your application uses a system-assigned identity, you will first have to find the client ID of the application's identity, and then repeat the steps above, selecting the `Azure AD user, group, or service principal` option in the Find control.
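+
+For template-based deployments, the same grant can be expressed as an ARM role assignment. The following is a minimal sketch only: it reuses the `userAssignedIdentityResourceId` variable from the Key Vault example below, the GUID is the well-known role definition ID for Storage Blob Data Reader, and the storage account parameter is a placeholder:
+
+```json
+{
+  "type": "Microsoft.Authorization/roleAssignments",
+  "apiVersion": "2020-04-01-preview",
+  "scope": "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+  "name": "[guid(resourceGroup().id, parameters('storageAccountName'), 'sf-app-blob-reader')]",
+  "properties": {
+    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')]",
+    "principalId": "[reference(variables('userAssignedIdentityResourceId'), '2018-11-30').principalId]",
+    "principalType": "ServicePrincipal"
+  }
+}
+```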
+
+## Granting access to Azure Key Vault
+As with accessing storage, you can leverage the managed identity of a Service Fabric application to access an Azure key vault. The steps for granting access in the Azure portal are similar to those listed above, and won't be repeated here. Refer to the image below for differences.
+
+![Key Vault access policy](../key-vault/media/vs-secure-secret-appsettings/add-keyvault-access-policy.png)
+
+The following example illustrates granting access to a vault via a template deployment; add the snippet(s) below as another entry under the `resources` element of the template. The sample demonstrates access granting for both user-assigned and system-assigned identity types, respectively - choose the applicable one.
+
+```json
+ # under 'variables':
+ "variables": {
+ "userAssignedIdentityResourceId" : "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('userAssignedIdentityName'))]",
+ }
+ # under 'resources':
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "name": "[concat(parameters('keyVaultName'), '/add')]",
+ "apiVersion": "2018-02-14",
+ "properties": {
+ "accessPolicies": [
+ {
+ "tenantId": "[reference(variables('userAssignedIdentityResourceId'), '2018-11-30').tenantId]",
+ "objectId": "[reference(variables('userAssignedIdentityResourceId'), '2018-11-30').principalId]",
+ "dependsOn": [
+ "[variables('userAssignedIdentityResourceId')]"
+ ],
+ "permissions": {
+ "keys": ["get", "list"],
+ "secrets": ["get", "list"],
+ "certificates": ["get", "list"]
+ }
+ }
+ ]
+ }
+ },
+```
+And for system-assigned managed identities:
+```json
+ # under 'variables':
+ "variables": {
+ "sfAppSystemAssignedIdentityResourceId": "[concat(resourceId('Microsoft.ServiceFabric/managedClusters/applications/', parameters('clusterName'), parameters('applicationName')), '/providers/Microsoft.ManagedIdentity/Identities/default')]"
+ }
+ # under 'resources':
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "name": "[concat(parameters('keyVaultName'), '/add')]",
+ "apiVersion": "2018-02-14",
+ "properties": {
+ "accessPolicies": [
+ {
+ "name": "[concat(parameters('clusterName'), '/', parameters('applicationName'))]",
+ "tenantId": "[reference(variables('sfAppSystemAssignedIdentityResourceId'), '2018-11-30').tenantId]",
+ "objectId": "[reference(variables('sfAppSystemAssignedIdentityResourceId'), '2018-11-30').principalId]",
+ "dependsOn": [
+ "[variables('sfAppSystemAssignedIdentityResourceId')]"
+ ],
+ "permissions": {
+ "secrets": [
+ "get",
+ "list"
+ ],
+ "certificates":
+ [
+ "get",
+ "list"
+ ]
+ }
+ },
+ ]
+ }
+ }
+```
+
+For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).
+
+## Next steps
+* [Deploy an Azure Service Fabric application with user-assigned or system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
service-fabric How To Managed Cluster Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-managed-disk.md
+
+ Title: Select managed disk types for Service Fabric managed cluster nodes
+description: Learn how to select managed disk types for Service Fabric managed cluster nodes and configure in an ARM template.
+ Last updated : 5/10/2021++
+# Select managed disk types for Service Fabric managed cluster nodes
+
+Azure Service Fabric managed clusters use managed disks for all storage needs, including application data, for scenarios such as reliable collections and actors. Azure managed disks are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server, but virtualized. With managed disks, all you have to do is specify the disk size and type and provision the disk; Azure handles the rest. For more information about managed disks, see [Introduction to Azure managed disks](../virtual-machines/managed-disks-overview.md).
+
+## Managed disk types
+
+Azure Service Fabric managed clusters support the following managed disk types:
+* Standard HDD
+ * Standard HDD locally redundant storage. Best for backup, non-critical, and infrequent access.
+* Standard SSD *Default*
+ * Standard SSD locally redundant storage. Best for web servers, lightly used enterprise applications and dev/test.
+* Premium SSD *Compatible with specific VM sizes*. For more information, see [Premium SSD](https://docs.microsoft.com/azure/virtual-machines/disks-types#premium-ssd).
+ * Premium SSD locally redundant storage. Best for production and performance sensitive workloads.
+
+>[!NOTE]
+> Any temp disk associated with the VM size will *not* be used for storing any Service Fabric or application-related data.
+
+## Specifying a Service Fabric managed cluster disk type
+
+To specify a Service Fabric managed cluster disk type, you must include the following value in the managed cluster resource definition.
+
+* The **dataDiskType** property, which specifies which managed disk type to use for your nodes.
+
+Possible values are:
+* "Standard_LRS"
+* "StandardSSD_LRS"
+* "Premium_LRS"
+>[!NOTE]
+> Not all managed disk types are available for all VM sizes. For more information, see [What disk types are available in Azure?](../virtual-machines/disks-types.md)
+
+```json
+{
+ "apiVersion": "2021-05-01",
+ "type": "Microsoft.ServiceFabric/managedclusters",
+ "dataDiskType": "StandardSSD_LRS"
+
+}
+```
+
+Sample templates are available that include this specification: [Service Fabric managed cluster templates](https://github.com/Azure-Samples/service-fabric-cluster-templates)
service-fabric How To Managed Cluster Managed Identity Service Fabric App Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-managed-identity-service-fabric-app-code.md
+
+ Title: Use managed identity with an application on a Service Fabric managed cluster
+description: How to use managed identities in Azure Service Fabric application code to access Azure Services on a Service Fabric managed cluster.
++ Last updated : 5/10/2021++
+# How to leverage a Service Fabric application's managed identity to access Azure services on a Service Fabric managed cluster
+
+Service Fabric applications can leverage managed identities to access other Azure resources which support Azure Active Directory-based authentication. An application can obtain an [access token](../active-directory/develop/developer-glossary.md#access-token) representing its identity, which may be system-assigned or user-assigned, and use it as a 'bearer' token to authenticate itself to another service - also known as a [protected resource server](../active-directory/develop/developer-glossary.md#resource-server). The token represents the identity assigned to the Service Fabric application, and will only be issued to Azure resources (including SF applications) which share that identity. Refer to the [managed identity overview](../active-directory/managed-identities-azure-resources/overview.md) documentation for a detailed description of managed identities, as well as the distinction between system-assigned and user-assigned identities. We will refer to a managed-identity-enabled Service Fabric application as the [client application](../active-directory/develop/developer-glossary.md#client-application) throughout this article.
+
+See a companion sample application that demonstrates using system-assigned and user-assigned [Service Fabric application managed identities](https://github.com/Azure-Samples/service-fabric-managed-identity) with Reliable Services and containers.
+
+> [!IMPORTANT]
+> A managed identity represents the association between an Azure resource and a service principal in the corresponding Azure AD tenant associated with the subscription containing the resource. As such, in the context of Service Fabric, managed identities are only supported for applications deployed as Azure resources.
+
+> [!IMPORTANT]
+> Prior to using the managed identity of a Service Fabric application, the client application must be granted access to the protected resource. Please refer to the list of [Azure services which support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources)
+ to check for support, and then to the respective service's documentation for specific steps to grant an identity access to resources of interest.
+
+
+## Leverage a managed identity using Azure.Identity
+
+The Azure Identity SDK now supports Service Fabric. Using Azure.Identity makes writing code to use Service Fabric application managed identities easier because it handles fetching tokens, caching tokens, and server authentication. When accessing most Azure resources, the concept of a token is hidden.
+
+Service Fabric support is available in the following versions for these languages:
+- [C# in version 1.3.0](https://www.nuget.org/packages/Azure.Identity). See a [C# sample](https://github.com/Azure-Samples/service-fabric-managed-identity).
+- [Python in version 1.5.0](https://pypi.org/project/azure-identity/). See a [Python sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/identity/azure-identity/tests/managed-identity-live/service-fabric/service_fabric.md).
+- [Java in version 1.2.0](/java/api/overview/azure/identity-readme).
+
+C# sample of initializing credentials and using the credentials to fetch a secret from Azure Key Vault:
+
+```csharp
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+namespace MyMIService
+{
+ internal sealed class MyMIService : StatelessService
+ {
+ protected override async Task RunAsync(CancellationToken cancellationToken)
+ {
+ try
+ {
+ // Load the service fabric application managed identity assigned to the service
+ ManagedIdentityCredential creds = new ManagedIdentityCredential();
+
+ // Create a client to keyvault using that identity
+ SecretClient client = new SecretClient(new Uri("https://mykv.vault.azure.net/"), creds);
+
+ // Fetch a secret
+ KeyVaultSecret secret = (await client.GetSecretAsync("mysecret", cancellationToken: cancellationToken)).Value;
+ }
+ catch (CredentialUnavailableException e)
+ {
+ // Handle errors with loading the Managed Identity
+ }
+ catch (RequestFailedException)
+ {
+ // Handle errors with fetching the secret
+ }
+ catch (Exception e)
+ {
+ // Handle generic errors
+ }
+ }
+ }
+}
+
+```
+
+## Acquiring an access token using REST API
+In clusters enabled for managed identity, the Service Fabric runtime exposes a localhost endpoint which applications can use to obtain access tokens. The endpoint is available on every node of the cluster, and is accessible to all entities on that node. Authorized callers may obtain access tokens by calling this endpoint and presenting an authentication code; the code is generated by the Service Fabric runtime for each distinct service code package activation, and is bound to the lifetime of the process hosting that service code package.
+
+Specifically, the environment of a managed-identity-enabled Service Fabric service will be seeded with the following variables:
+- 'IDENTITY_ENDPOINT': the localhost endpoint corresponding to the service's managed identity
+- 'IDENTITY_HEADER': a unique authentication code representing the service on the current node
+- 'IDENTITY_SERVER_THUMBPRINT': the thumbprint of the Service Fabric managed identity server
+
+> [!IMPORTANT]
+> The application code should consider the value of the 'IDENTITY_HEADER' environment variable as sensitive data - it should not be logged or otherwise disseminated. The authentication code has no value outside of the local node, or after the process hosting the service has terminated, but it does represent the identity of the Service Fabric service, and so should be treated with the same precautions as the access token itself.
+
+To obtain a token, the client performs the following steps:
+- forms a URI by concatenating the managed identity endpoint (IDENTITY_ENDPOINT value) with the API version and the resource (audience) required for the token
+- creates a GET http(s) request for the specified URI
+- adds appropriate server certificate validation logic
+- adds the authentication code (IDENTITY_HEADER value) as a header to the request
+- submits the request
+
+A successful response will contain a JSON payload representing the resulting access token, as well as metadata describing it. A failed response will also include an explanation of the failure. See below for additional details on error handling.
+
+Access tokens will be cached by Service Fabric at various levels (node, cluster, resource provider service), so a successful response does not necessarily imply that the token was issued directly in response to the user application's request. Tokens will be cached for less than their lifetime, so an application is guaranteed to receive a valid token. It is recommended that the application code itself cache any access tokens it acquires; the caching key should include (a derivation of) the audience.
+
+Sample request:
+```http
+GET 'https://localhost:2377/metadata/identity/oauth2/token?api-version=2019-07-01-preview&resource=https://vault.azure.net/' HTTP/1.1
+Secret: 912e4af7-77ba-4fa5-a737-56c8e3ace132
+```
+where:
+
+| Element | Description |
+| - | -- |
+| `GET` | The HTTP verb, indicating you want to retrieve data from the endpoint. In this case, an OAuth access token. |
+| `https://localhost:2377/metadata/identity/oauth2/token` | The managed identity endpoint for Service Fabric applications, provided via the IDENTITY_ENDPOINT environment variable. |
+| `api-version` | A query string parameter, specifying the API version of the Managed Identity Token Service. Currently the only accepted value is `2019-07-01-preview`, and it is subject to change. |
+| `resource` | A query string parameter, indicating the App ID URI of the target resource. This will be reflected as the `aud` (audience) claim of the issued token. This example requests a token to access Azure Key Vault, whose App ID URI is https:\//vault.azure.net/. |
+| `Secret` | An HTTP request header field, required by the Service Fabric Managed Identity Token Service for Service Fabric services to authenticate the caller. This value is provided by the Service Fabric runtime via the IDENTITY_HEADER environment variable. |
++
+Sample response:
+```json
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "token_type": "Bearer",
+ "access_token": "eyJ0eXAiO...",
+ "expires_on": 1565244611,
+ "resource": "https://vault.azure.net/"
+}
+```
+where:
+
+| Element | Description |
+| - | -- |
+| `token_type` | The type of token; in this case, a "Bearer" access token, which means the presenter ('bearer') of this token is the intended subject of the token. |
+| `access_token` | The requested access token. When calling a secured REST API, the token is embedded in the `Authorization` request header field as a "bearer" token, allowing the API to authenticate the caller. |
+| `expires_on` | The timestamp of the expiration of the access token; represented as the number of seconds from "1970-01-01T0:0:0Z UTC" and corresponds to the token's `exp` claim. In this case, the token expires on 2019-08-08T06:10:11+00:00 (in RFC 3339)|
+| `resource` | The resource for which the access token was issued, specified via the `resource` query string parameter of the request; corresponds to the token's 'aud' claim. |
++
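+For reference, the `expires_on` value in the sample response converts to an absolute time as follows (a minimal C# sketch):
+
+```C#
+using System;
+
+// 'expires_on' from the sample response above: seconds since the Unix epoch.
+long expiresOn = 1565244611;
+
+// expiry is 2019-08-08T06:10:11 UTC, matching the token's 'exp' claim.
+DateTimeOffset expiry = DateTimeOffset.FromUnixTimeSeconds(expiresOn);
+Console.WriteLine(expiry.ToString("o"));
+```
+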
+## Acquiring an access token using C#
+The following C# sample implements the token acquisition flow described above:
+
+```C#
+namespace Azure.ServiceFabric.ManagedIdentity.Samples
+{
+ using System;
+ using System.Net.Http;
+ using System.Net.Security; // for SslPolicyErrors
+ using System.Text;
+ using System.Threading;
+ using System.Threading.Tasks;
+ using System.Web;
+ using Newtonsoft.Json;
+
+ /// <summary>
+ /// Type representing the response of the SF Managed Identity endpoint for token acquisition requests.
+ /// </summary>
+ [JsonObject]
+ public sealed class ManagedIdentityTokenResponse
+ {
+ [JsonProperty(Required = Required.Always, PropertyName = "token_type")]
+ public string TokenType { get; set; }
+
+ [JsonProperty(Required = Required.Always, PropertyName = "access_token")]
+ public string AccessToken { get; set; }
+
+ [JsonProperty(PropertyName = "expires_on")]
+ public string ExpiresOn { get; set; }
+
+ [JsonProperty(PropertyName = "resource")]
+ public string Resource { get; set; }
+ }
+
+ /// <summary>
+ /// Sample class demonstrating access token acquisition using Managed Identity.
+ /// </summary>
+ public sealed class AccessTokenAcquirer
+ {
+ /// <summary>
+ /// Acquire an access token.
+ /// </summary>
+ /// <returns>Access token</returns>
+ public static async Task<string> AcquireAccessTokenAsync()
+ {
+ var managedIdentityEndpoint = Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT");
+ var managedIdentityAuthenticationCode = Environment.GetEnvironmentVariable("IDENTITY_HEADER");
+ var managedIdentityServerThumbprint = Environment.GetEnvironmentVariable("IDENTITY_SERVER_THUMBPRINT");
+ // Read the API version to use from the environment; 2019-07-01-preview is still supported.
+ var managedIdentityApiVersion = Environment.GetEnvironmentVariable("IDENTITY_API_VERSION");
+ var managedIdentityAuthenticationHeader = "secret";
+ var resource = "https://management.azure.com/";
+
+ var requestUri = $"{managedIdentityEndpoint}?api-version={managedIdentityApiVersion}&resource={HttpUtility.UrlEncode(resource)}";
+
+ var requestMessage = new HttpRequestMessage(HttpMethod.Get, requestUri);
+ requestMessage.Headers.Add(managedIdentityAuthenticationHeader, managedIdentityAuthenticationCode);
+
+ var handler = new HttpClientHandler();
+ handler.ServerCertificateCustomValidationCallback = (httpRequestMessage, cert, certChain, policyErrors) =>
+ {
+ // Do any additional validation here
+ if (policyErrors == SslPolicyErrors.None)
+ {
+ return true;
+ }
+ return 0 == string.Compare(cert.GetCertHashString(), managedIdentityServerThumbprint, StringComparison.OrdinalIgnoreCase);
+ };
+
+ try
+ {
+ var response = await new HttpClient(handler).SendAsync(requestMessage)
+ .ConfigureAwait(false);
+
+ response.EnsureSuccessStatusCode();
+
+ var tokenResponseString = await response.Content.ReadAsStringAsync()
+ .ConfigureAwait(false);
+
+ var tokenResponseObject = JsonConvert.DeserializeObject<ManagedIdentityTokenResponse>(tokenResponseString);
+
+ return tokenResponseObject.AccessToken;
+ }
+ catch (Exception ex)
+ {
+ string errorText = String.Format("{0} \n\n{1}", ex.Message, ex.InnerException != null ? ex.InnerException.Message : "Acquire token failed");
+
+ Console.WriteLine(errorText);
+ }
+
+ return String.Empty;
+ }
+ } // class AccessTokenAcquirer
+} // namespace Azure.ServiceFabric.ManagedIdentity.Samples
+```
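+
+To use the returned token, attach it to outbound requests as a bearer token in the `Authorization` header. The following sketch calls Azure Resource Manager (matching the `resource` requested above); the target URI and API version are illustrative:
+
+```C#
+using System.Net.Http;
+using System.Net.Http.Headers;
+using Azure.ServiceFabric.ManagedIdentity.Samples;
+
+var accessToken = await AccessTokenAcquirer.AcquireAccessTokenAsync();
+
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+
+// Illustrative call; the identity must have been granted access to the target resource.
+var response = await client.GetAsync("https://management.azure.com/subscriptions?api-version=2020-01-01");
+```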
+## Accessing Key Vault from a Service Fabric application using Managed Identity
+This sample builds on the above to demonstrate accessing a secret stored in a Key Vault using managed identity.
+
+```C#
+ /// <summary>
+ /// Probe the specified secret, displaying metadata on success.
+ /// </summary>
+ /// <param name="vault">vault name</param>
+ /// <param name="secret">secret name</param>
+ /// <param name="version">secret version id</param>
+ /// <returns></returns>
+ public async Task<string> ProbeSecretAsync(string vault, string secret, string version)
+ {
+ // initialize a KeyVault client with a managed identity-based authentication callback
+ var kvClient = new Microsoft.Azure.KeyVault.KeyVaultClient(new Microsoft.Azure.KeyVault.KeyVaultClient.AuthenticationCallback((a, r, s) => { return AuthenticationCallbackAsync(a, r, s); }));
+
+ Log(LogLevel.Info, $"\nRunning with configuration: \n\tobserved vault: {config.VaultName}\n\tobserved secret: {config.SecretName}\n\tMI endpoint: {config.ManagedIdentityEndpoint}\n\tMI auth code: {config.ManagedIdentityAuthenticationCode}\n\tMI auth header: {config.ManagedIdentityAuthenticationHeader}");
+ string response = String.Empty;
+
+ Log(LogLevel.Info, $"\n== {DateTime.UtcNow.ToString()}: Probing secret...");
+ try
+ {
+ var secretResponse = await kvClient.GetSecretWithHttpMessagesAsync(vault, secret, version)
+ .ConfigureAwait(false);
+
+ if (secretResponse.Response.IsSuccessStatusCode)
+ {
+ // use the secret: secretResponse.Body.Value;
+ response = $"Successfully probed secret '{secret}' in vault '{vault}': {PrintSecretBundleMetadata(secretResponse.Body)}";
+ }
+ else
+ {
+ response = $"Non-critical error encountered retrieving secret '{secret}' in vault '{vault}': {secretResponse.Response.ReasonPhrase} ({secretResponse.Response.StatusCode})";
+ }
+ }
+ catch (Microsoft.Rest.ValidationException ve)
+ {
+ response = $"encountered REST validation exception 0x{ve.HResult.ToString("X")} trying to access '{secret}' in vault '{vault}' from {ve.Source}: {ve.Message}";
+ }
+ catch (KeyVaultErrorException kvee)
+ {
+ response = $"encountered KeyVault exception 0x{kvee.HResult.ToString("X")} trying to access '{secret}' in vault '{vault}': {kvee.Response.ReasonPhrase} ({kvee.Response.StatusCode})";
+ }
+ catch (Exception ex)
+ {
+ // handle generic errors here
+ response = $"encountered exception 0x{ex.HResult.ToString("X")} trying to access '{secret}' in vault '{vault}': {ex.Message}";
+ }
+
+ Log(LogLevel.Info, response);
+
+ return response;
+ }
+
+ /// <summary>
+ /// KV authentication callback, using the application's managed identity.
+ /// </summary>
+ /// <param name="authority">The expected issuer of the access token, from the KV authorization challenge.</param>
+ /// <param name="resource">The expected audience of the access token, from the KV authorization challenge.</param>
+ /// <param name="scope">The expected scope of the access token; not currently used.</param>
+ /// <returns>Access token</returns>
+ public async Task<string> AuthenticationCallbackAsync(string authority, string resource, string scope)
+ {
+ Log(LogLevel.Verbose, $"authentication callback invoked with: auth: {authority}, resource: {resource}, scope: {scope}");
+ var encodedResource = HttpUtility.UrlEncode(resource);
+
+ // This sample does not illustrate the caching of the access token, which the user application is expected to do.
+ // For a given service, the caching key should be the (encoded) resource uri. The token should be cached for a period
+ // of time at most equal to its remaining validity. The 'expires_on' field of the token response object represents
+ // the number of seconds from Unix time when the token will expire. You may cache the token if it will be valid for at
+ // least another short interval (1-10s). If its expiration will occur shortly, don't cache but still return it to the
+ // caller. The MI endpoint will not return an expired token.
+ // Sample caching code:
+ //
+ // ManagedIdentityTokenResponse tokenResponse;
+ // if (responseCache.TryGetCachedItem(encodedResource, out tokenResponse))
+ // {
+ // Log(LogLevel.Verbose, $"cache hit for key '{encodedResource}'");
+ //
+ // return tokenResponse.AccessToken;
+ // }
+ //
+ // Log(LogLevel.Verbose, $"cache miss for key '{encodedResource}'");
+ //
+ // where the response cache is left as an exercise for the reader. MemoryCache is a good option.
+
+ var requestUri = $"{config.ManagedIdentityEndpoint}?api-version={config.ManagedIdentityApiVersion}&resource={encodedResource}";
+ Log(LogLevel.Verbose, $"request uri: {requestUri}");
+
+ var requestMessage = new HttpRequestMessage(HttpMethod.Get, requestUri);
+ requestMessage.Headers.Add(config.ManagedIdentityAuthenticationHeader, config.ManagedIdentityAuthenticationCode);
+ Log(LogLevel.Verbose, $"added header '{config.ManagedIdentityAuthenticationHeader}': '{config.ManagedIdentityAuthenticationCode}'");
+
+ var response = await httpClient.SendAsync(requestMessage)
+ .ConfigureAwait(false);
+ Log(LogLevel.Verbose, $"response status: success: {response.IsSuccessStatusCode}, status: {response.StatusCode}");
+
+ response.EnsureSuccessStatusCode();
+
+ var tokenResponseString = await response.Content.ReadAsStringAsync()
+ .ConfigureAwait(false);
+
+ var tokenResponse = JsonConvert.DeserializeObject<ManagedIdentityTokenResponse>(tokenResponseString);
+ Log(LogLevel.Verbose, "deserialized token response; returning access code..");
+
+ // Sample caching code (continuation):
+ // var expiration = DateTimeOffset.FromUnixTimeSeconds(Int32.Parse(tokenResponse.ExpiresOn));
+ // if (expiration > DateTimeOffset.UtcNow.AddSeconds(5.0))
+ // responseCache.AddOrUpdate(encodedResource, tokenResponse, expiration);
+
+ return tokenResponse.AccessToken;
+ }
+
+ private string PrintSecretBundleMetadata(SecretBundle bundle)
+ {
+ StringBuilder strBuilder = new StringBuilder();
+
+ strBuilder.AppendFormat($"\n\tid: {bundle.Id}\n");
+ strBuilder.AppendFormat($"\tcontent type: {bundle.ContentType}\n");
+ strBuilder.AppendFormat($"\tmanaged: {bundle.Managed}\n");
+ strBuilder.AppendFormat($"\tattributes:\n");
+ strBuilder.AppendFormat($"\t\tenabled: {bundle.Attributes.Enabled}\n");
+ strBuilder.AppendFormat($"\t\tnbf: {bundle.Attributes.NotBefore}\n");
+ strBuilder.AppendFormat($"\t\texp: {bundle.Attributes.Expires}\n");
+ strBuilder.AppendFormat($"\t\tcreated: {bundle.Attributes.Created}\n");
+ strBuilder.AppendFormat($"\t\tupdated: {bundle.Attributes.Updated}\n");
+ strBuilder.AppendFormat($"\t\trecoveryLevel: {bundle.Attributes.RecoveryLevel}\n");
+
+ return strBuilder.ToString();
+ }
+
+ private enum LogLevel
+ {
+ Info,
+ Verbose
+ };
+
+ private void Log(LogLevel level, string message)
+ {
+ if (level != LogLevel.Verbose
+ || config.DoVerboseLogging)
+ {
+ Console.WriteLine(message);
+ }
+ }
+
+```
+
+## Error handling
+The status code of the HTTP response indicates the success of the request: a '200 OK' status indicates success, and the response will include the access token as described above. The following is a short enumeration of possible error responses.
+
+| Status Code | Error Reason | How To Handle |
+| -- | | - |
+| 404 Not found. | Unknown authentication code, or the application was not assigned a managed identity. | Rectify the application setup or token acquisition code. |
+| 429 Too many requests. | Throttle limit reached, imposed by AAD or SF. | Retry with Exponential Backoff. See guidance below. |
+| 4xx Error in request. | One or more of the request parameters was incorrect. | Do not retry. Examine the error details for more information. 4xx errors are design-time errors.|
+| 5xx Error from service. | The managed identity subsystem or Azure Active Directory returned a transient error. | It is safe to retry after a short while. You may hit a throttling condition (429) upon retrying.|
+
+If an error occurs, the corresponding HTTP response body contains a JSON object with the error details:
+
+| Element | Description |
+| - | -- |
+| code | Error code. |
+| correlationId | A correlation ID that can be used for debugging. |
+| message | Verbose description of error. **Error descriptions can change at any time. Do not depend on the error message itself.**|
+
+Sample error:
+```json
+{"error":{"correlationId":"7f30f4d3-0f3a-41e0-a417-527f21b3848f","code":"SecretHeaderNotFound","message":"Secret is not found in the request headers."}}
+```
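+
+A minimal sketch of deserializing this error payload, mirroring the Newtonsoft.Json response types used in the samples above (the type names are illustrative):
+
+```C#
+using Newtonsoft.Json;
+
+public sealed class ManagedIdentityErrorResponse
+{
+    [JsonProperty("error")]
+    public ManagedIdentityError Error { get; set; }
+}
+
+public sealed class ManagedIdentityError
+{
+    [JsonProperty("correlationId")]
+    public string CorrelationId { get; set; }
+
+    [JsonProperty("code")]
+    public string Code { get; set; }
+
+    // Do not branch on the message text; it can change at any time.
+    [JsonProperty("message")]
+    public string Message { get; set; }
+}
+
+// Usage: var error = JsonConvert.DeserializeObject<ManagedIdentityErrorResponse>(responseBody);
+```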
+
+The following is a list of typical Service Fabric errors specific to managed identities:
+
+| Code | Message | Description |
+| -- | -- | -- |
+| SecretHeaderNotFound | Secret is not found in the request headers. | The authentication code was not provided with the request. |
+| ManagedIdentityNotFound | Managed identity not found for the specified application host. | The application has no identity, or the authentication code is unknown. |
+| ArgumentNullOrEmpty | The parameter 'resource' should not be null or empty string. | The resource (audience) was not provided in the request. |
+| InvalidApiVersion | The api-version '' is not supported. Supported version is '2019-07-01-preview'. | Missing or unsupported API version specified in the request URI. |
+| InternalServerError | An error occurred. | An error was encountered in the managed identity subsystem, possibly outside of the Service Fabric stack. The most likely cause is an incorrect value specified for the resource (check the trailing '/'). |
+
+## Retry guidance
+
+Typically, the only retryable error code is 429 (Too Many Requests). Internal server errors (5xx) may be retryable, though the underlying cause may be permanent.
+
+Throttling limits apply to the number of calls made to the managed identity subsystem - specifically the 'upstream' dependencies (the Managed Identity Azure service, or the secure token service). Service Fabric caches tokens at various levels in the pipeline, but given the distributed nature of the involved components, the caller may experience inconsistent throttling responses (that is, being throttled on one node or instance of an application, but not on a different node while requesting a token for the same identity). When the throttling condition is set, subsequent requests from the same application may fail with the HTTP status code 429 (Too Many Requests) until the condition is cleared.
+
+It is recommended that requests that fail due to throttling be retried with an exponential backoff, as follows:
+
+| Call index | Action on receiving 429 |
+| | |
+| 1 | Wait 1 second and retry |
+| 2 | Wait 2 seconds and retry |
+| 3 | Wait 4 seconds and retry |
+| 4 | Wait 8 seconds and retry |
+| 5 | Wait 16 seconds and retry |
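+
+The schedule above can be implemented with a simple retry loop. The following is a minimal sketch, assuming a caller-supplied `acquireAsync` delegate that returns the HTTP status code and, on success, the token (the names are illustrative):
+
+```C#
+using System;
+using System.Net;
+using System.Threading.Tasks;
+
+public static class ThrottledRetry
+{
+    public static async Task<string> AcquireWithBackoffAsync(
+        Func<Task<(HttpStatusCode Status, string Token)>> acquireAsync,
+        int maxRetries = 5)
+    {
+        for (int attempt = 0; ; attempt++)
+        {
+            var (status, token) = await acquireAsync().ConfigureAwait(false);
+
+            if (status != (HttpStatusCode)429)
+            {
+                // Success, or a non-retryable error to be handled by the caller.
+                return token;
+            }
+
+            if (attempt >= maxRetries)
+            {
+                throw new InvalidOperationException("Throttled: retry budget exhausted.");
+            }
+
+            // 1s, 2s, 4s, 8s, 16s - the schedule from the table above.
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt))).ConfigureAwait(false);
+        }
+    }
+}
+```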
+
+## Resource IDs for Azure services
+See [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) for a list of resources that support Azure AD, and their respective resource IDs.
+
+## Next steps
+* [Deploy an Azure Service Fabric application with user-assigned or system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
+* [Grant an Azure Service Fabric application access to other Azure resources](./how-to-managed-cluster-grant-access-other-resources.md)
+* [Explore a sample application using Service Fabric Managed Identity](https://github.com/Azure-Samples/service-fabric-managed-identity)
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-networking.md
Title: Configure network settings for Service Fabric managed clusters (preview)
+ Title: Configure network settings for Service Fabric managed clusters
description: Learn how to configure your Service Fabric managed cluster for NSG rules, RDP port access, load balancing rules, and more. Previously updated : 03/02/2021 Last updated : 5/10/2021
-# Configure network settings for Service Fabric managed clusters (preview)
+# Configure network settings for Service Fabric managed clusters
-Service Fabric managed clusters are created with a default networking configuration. This configuration consists of mandatory rules for essential cluster functionality, and a few optional rules which are intended to make customer configuration easier.
+Service Fabric managed clusters are created with a default networking configuration. This configuration consists of mandatory rules for essential cluster functionality, and a few optional rules such as allowing all outbound traffic by default, which are intended to make customer configuration easier.
Beyond the default networking configuration, you can modify the networking rules to meet the needs of your scenario.
Be aware of these considerations when creating new NSG rules for your managed cl
With classic (non-managed) Service Fabric clusters, you must declare and manage a separate *Microsoft.Network/networkSecurityGroups* resource in order to [apply Network Security Group (NSG) rules to your cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/service-fabric-secure-nsg-cluster-65-node-3-nodetype). Service Fabric managed clusters enable you to assign NSG rules directly within the cluster resource of your deployment template.
-Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedclusters#managedclusterproperties-object) property of your *Microsoft.ServiceFabric/managedclusters* resource (version `2021-01-01-preview` or later) to assign NSG rules. For example:
+Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedclusters#managedclusterproperties-object) property of your *Microsoft.ServiceFabric/managedclusters* resource (version `2021-05-01` or later) to assign NSG rules. For example:
```json
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
"type": "Microsoft.ServiceFabric/managedclusters", ... "properties": {
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
+
+ Title: Deploy a Service Fabric managed cluster with stateless node types
+description: Learn how to create and deploy stateless node types in Service Fabric managed clusters
+ Last updated : 5/10/2021+
+# Deploy a Service Fabric managed cluster with stateless node types
+
+Service Fabric node types come with an inherent assumption that, at some point in time, stateful services might be placed on the nodes. Stateless node types relax this assumption for a node type, which enables stateless node types to benefit from faster scale-out operations by removing some of the restrictions on repair and maintenance operations.
+
+* Primary node types cannot be configured to be stateless
+* Stateless node types require an API version of **2021-05-01** or later
++
+Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+
+## Enable stateless node types in a Service Fabric managed cluster
+To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, the cluster must have at least one primary node type, which is not stateless.
+
+* The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later.
+
+```json
+ {
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ "isStateless": true,
+ "isPrimary": false,
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ "vmSize": "[parameters('nodeTypeSize')]",
+ "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+ "dataDiskSizeGB": "[parameters('nodeTypeDataDiskSizeGB')]"
+ }
+ }
+```
+
+## Configure stateless node types with multiple Availability Zones
+To configure a stateless node type that spans multiple availability zones, follow [Service Fabric clusters across availability zones](./service-fabric-cross-availability-zones.md).
+
+>[!NOTE]
+> The zonal resiliency property must be set at the cluster level, and this property cannot be changed in place.
+
+## Migrate to using stateless node types in a cluster
+For all migration scenarios, a new stateless node type needs to be added. An existing node type cannot be migrated to become stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
+
+## Next steps
+
+To learn more about Service Fabric managed clusters, see:
+
+> [!div class="nextstepaction"]
+> [Service Fabric managed clusters frequently asked questions](./faq-managed-cluster.md)
service-fabric How To Managed Cluster Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-upgrades.md
+
+ Title: Upgrading Azure Service Fabric managed clusters
+description: Learn about options for upgrading your Azure Service Fabric managed cluster
+ Last updated : 05/10/2021+
+# Manage Service Fabric managed cluster upgrades
+
+An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric managed cluster.
+
+## Set upgrade mode
+
+Azure Service Fabric managed clusters are set by default to receive automatic Service Fabric upgrades as they are released by Microsoft, using a [wave deployment](#wave-deployment-for-automatic-upgrades) strategy. As an alternative, you can set up manual mode upgrades, in which you choose from a list of currently supported versions. You can configure these settings either through the *Fabric upgrades* control in the Azure portal or the `ClusterUpgradeMode` setting in your cluster deployment template.
+
+## Wave deployment for automatic upgrades
+
+With wave deployment, you can create a pipeline for upgrading your test, stage, and production clusters in sequence, separated by built-in 'bake time' to validate upcoming Service Fabric versions before your production clusters are updated.
+
+> [!NOTE]
+> By default, clusters are set to Wave 0.
+
+To select a wave deployment for automatic upgrade, first determine which wave to assign your cluster:
+
+* **Wave 0** (`Wave0`): Clusters are updated as soon as a new Service Fabric build is released.
+* **Wave 1** (`Wave1`): Clusters are updated after Wave 0 to allow for bake time. This occurs a minimum of 7 days after Wave 0.
+* **Wave 2** (`Wave2`): Clusters are updated last to allow for further bake time. This occurs a minimum of 14 days after Wave 0.
+
+## Set the Wave for your cluster
+
+You can set your cluster to one of the available waves either through the *Fabric upgrades* control in the Azure portal or the `ClusterUpgradeMode` setting in your cluster deployment template.
+
+### Azure portal
+
+Using the Azure portal, you'll choose between the available automatic waves when creating a new Service Fabric cluster.
++
+You can also toggle between available automatic waves from the **Fabric upgrades** section of an existing cluster resource.
++
+### Resource Manager template
+
+To change your cluster upgrade mode using a Resource Manager template, specify either *Automatic* or *Manual* for the `ClusterUpgradeMode` property of the *Microsoft.ServiceFabric/managedClusters* resource definition. If you choose manual upgrades, also set the `clusterCodeVersion` to a currently [supported fabric version](#query-for-supported-cluster-versions).
+
+#### Manual upgrade
+
+```json
+{
+"apiVersion": "2021-05-01",
+"type": "Microsoft.ServiceFabric/managedClusters",
+"properties": {
+ "ClusterUpgradeMode": "Manual",
+ "ClusterCodeVersion": "7.2.457.9590"
+ }
+}
+```
+
+Upon successful deployment of the template, changes to the cluster upgrade mode will be applied. If your cluster is in manual mode, the upgrade to the specified code version will then kick off automatically.
+
+The [cluster health policies](service-fabric-health-introduction.md#health-policies) (a combination of node health and the health of all the applications running in the cluster) are adhered to during the upgrade. If cluster health policies are not met, the upgrade is rolled back.
+
+Once you have fixed the issues that resulted in the rollback, you'll need to initiate the upgrade again, by following the same steps as before.
+
+#### Automatic upgrade with wave deployment
+
+To configure automatic upgrades with wave deployment, add or validate that `ClusterUpgradeMode` is set to `Automatic` and that the `upgradeWave` property is defined with one of the wave values listed above in your Resource Manager template.
+
+```json
+{
+"apiVersion": "2021-05-01",
+"type": "Microsoft.ServiceFabric/managedClusters",
+"properties": {
+ "ClusterUpgradeMode": "Automatic",
+ "upgradeWave": "Wave1",
+ }
+}
+```
+
+Once you deploy the updated template, your cluster will be enrolled in the specified wave for the next and all subsequent upgrade periods.
+
+## Custom policies for manual upgrades
+
+You can specify custom health policies for manual cluster upgrades. These policies get applied each time you select a new runtime version, which triggers the system to kick off the upgrade of your cluster. If you do not override the policies, the defaults are used.
+
+You can specify the custom health policies or review the current settings under the **Fabric upgrades** section of your cluster resource in Azure portal by selecting *Custom* option for **Upgrade policy**.
++
+## Query for supported cluster versions
+
+You can use [Azure REST API](/rest/api/azure/) to list all available Service Fabric runtime versions ([clusterVersions](/rest/api/servicefabric/sfrp-api-clusterversions_list)) available for the specified location and your subscription.
+
+You can also reference [Service Fabric versions](service-fabric-versions.md) for further details on supported versions and operating systems.
+
+```REST
+GET https://<endpoint>/subscriptions/{{subscriptionId}}/providers/Microsoft.ServiceFabric/locations/{{location}}/clusterVersions?api-version=2018-02-01
+
+"value": [
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/5.0.1427.9490",
+ "name": "5.0.1427.9490",
+ "type": "Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "5.0.1427.9490",
+ "supportExpiryUtc": "2016-11-26T23:59:59.9999999",
+ "environment": "Windows"
+ }
+ },
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.0.1427.9490",
+ "name": "5.1.1427.9490",
+ "type": " Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "5.1.1427.9490",
+ "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
+ "environment": "Windows"
+ }
+ },
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.4.1427.9490",
+ "name": "4.4.1427.9490",
+ "type": " Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "4.4.1427.9490",
+ "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
+ "environment": "Linux"
+ }
+ }
+]
+}
+```
+
+The `supportExpiryUtc` in the output reports when a given release expires or has expired. The latest releases do not have a valid date; they report a value of *9999-12-31T23:59:59.9999999*, which means that the expiry date is not yet set.
+
+## Next steps
+
+* [Customize your Service Fabric managed cluster configuration](how-to-managed-cluster-configuration.md)
+* Learn about [application upgrades](service-fabric-application-upgrade.md)
+
+<!--Image references-->
+[CertificateUpgrade]: ./media/service-fabric-cluster-upgrade/CertificateUpgrade2.png
+[AddingProbes]: ./media/service-fabric-cluster-upgrade/addingProbes2.PNG
+[AddingLBRules]: ./media/service-fabric-cluster-upgrade/addingLBRules.png
+[Upgrade-Wave-Settings]: ./media/service-fabric-cluster-upgrade/manage-upgrade-wave-settings.png
+[ARMUpgradeMode]: ./media/service-fabric-cluster-upgrade/ARMUpgradeMode.PNG
+[Create_Manualmode]: ./media/service-fabric-cluster-upgrade/Create_Manualmode.PNG
+[Manage_Automaticmode]: ./media/service-fabric-cluster-upgrade/Manage_Automaticmode.PNG
service-fabric How To Managed Cluster Vmss Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-vmss-extension.md
Title: Add a virtual machine scale set extension to a Service Fabric managed cluster node type (preview)
+ Title: Add a virtual machine scale set extension to a Service Fabric managed cluster node type
description: Here's how to add a virtual machine scale set extension a Service Fabric managed cluster node type Previously updated : 09/28/2020 Last updated : 5/10/2021
-# Add a virtual machine scale set extension to a Service Fabric managed cluster node type (preview)
+# Add a virtual machine scale set extension to a Service Fabric managed cluster node type
Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. This enables you to add [virtual machine scale set extensions](../virtual-machines/extensions/overview.md) to your Service Fabric managed cluster node types.
service-fabric How To Managed Identity Managed Cluster Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md
Title: Add a managed identity to a Service Fabric managed cluster node type (preview)
+ Title: Add a managed identity to a Service Fabric managed cluster node type
description: This article shows how to add a managed identity to a Service Fabric managed cluster node type Previously updated : 11/24/2020- Last updated : 5/10/2021
-# Add a managed identity to a Service Fabric managed cluster node type (preview)
+# Add a managed identity to a Service Fabric managed cluster node type
Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. To allow managed identities to be used with a managed cluster node type, a property `vmManagedIdentity` has been added to node type definitions containing a list of identities that may be used, `userAssignedIdentities`. Functionality mirrors how managed identities can be used in non-managed clusters, such as using a managed identity with the [Azure Key Vault virtual machine scale set extension](../virtual-machines/extensions/key-vault-windows.md). -
-For an example of a Service Fabric managed cluster deployment that makes use of managed identity on a node type, see [this template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-MI). For a list of supported regions, see the [managed cluster FAQ](./faq-managed-cluster.md#what-regions-are-supported-in-the-preview).
+For an example of a Service Fabric managed cluster deployment that makes use of managed identity on a node type, see [this template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-MI).
> [!NOTE] > Only user-assigned identities are currently supported for this feature.
Before you begin:
* If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. * If you plan to use PowerShell, [install](/cli/azure/install-azure-cli) the Azure CLI to run CLI reference commands.
-## Create a user-assigned managed identity
+## Create a user-assigned managed identity
A user-assigned managed identity can be defined in the resources section of an Azure Resource Manager (ARM) template for creation upon deployment:
A user-assigned managed identity can be defined in the resources section of an A
or created via PowerShell: ```powershell
-az group create --name <resourceGroupName> --location <location>
-az identity create --name <userAssignedIdentityName> --resource-group <resourceGroupName>
+az group create --name <resourceGroupName> --location <location>
+az identity create --name <userAssignedIdentityName> --resource-group <resourceGroupName>
``` ## Add a role assignment with Service Fabric Resource Provider
New-AzRoleAssignment -PrincipalId 00000000-0000-0000-0000-000000000000 -RoleD
## Add managed identity properties to node type definition
-Finally, add the `vmManagedIdentity` and `userAssignedIdentities` properties to the managed cluster's node type definition. Be sure to use **2021-01-01-preview** or later for the `apiVersion`.
+Finally, add the `vmManagedIdentity` and `userAssignedIdentities` properties to the managed cluster's node type definition. Be sure to use **2021-05-01** or later for the `apiVersion`.
```json { "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "apiVersion": "2021-01-01-preview",
+ "apiVersion": "2021-05-01",
... "properties": { "isPrimary" : true,
service-fabric Overview Managed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/overview-managed-cluster.md
Title: Service Fabric managed clusters (preview)
+ Title: Service Fabric managed clusters
description: Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines deployment and cluster management. Previously updated : 02/15/2021 Last updated : 5/10/2021
-# Service Fabric managed clusters (preview)
+# Service Fabric managed clusters
Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines your deployment and cluster management experience.
Service Fabric managed clusters provide a number of advantages over traditional
**Best practices by default** - Simplified reliability and durability settings
-There is no additional cost for Service Fabric managed clusters beyond the cost of underlying resources required for the cluster.
+There is no additional cost for Service Fabric managed clusters beyond the cost of underlying resources required for the cluster, and the same Service Fabric SLA applies for managed clusters.
+
+> [!NOTE]
+> There is no migration path from existing Service Fabric clusters to managed clusters. You will need to create a new Service Fabric managed cluster to use this new resource type.
## Service Fabric managed cluster SKUs
Service Fabric managed clusters are available in both Basic and Standard SKUs.
| Add/remove node types | No | Yes | | Zone redundancy | No | Yes |
-## What's new for Service Fabric managed clusters
-
-The latest features for Service Fabric managed clusters preview include support for:
+## Feature support
-* [Deploying applications using ARM templates](how-to-managed-cluster-app-deployment-template.md)
-* [Automatic OS upgrades](how-to-managed-cluster-configuration.md#enable-automatic-os-image-upgrades)
-* [Disk encryption](how-to-enable-managed-cluster-disk-encryption.md)
-* [Applying NSG rules](how-to-managed-cluster-networking.md)
+The capabilities of managed clusters will continue to expand. Currently there is support for:
-Features to be added in upcoming releases include:
-
-* Deploying applications using Visual Studio
-* Managed Identities support
-* Availability Zones
-* Reverse Proxy
-* Autoscaling
+* [Application deployment using ARM templates](how-to-managed-cluster-app-deployment-template.md)
+* [Application secrets](how-to-managed-cluster-application-secrets.md)
+* [Automatic OS image upgrades](how-to-managed-cluster-configuration.md#enable-automatic-os-image-upgrades)
+* [Availability zone spanning](how-to-managed-cluster-availability-zones.md)
+* [Disk encryption](how-to-enable-managed-cluster-disk-encryption.md) and [managed disk type](how-to-managed-cluster-managed-disk.md) selection
+* Managed identity support for managed cluster [node types](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md) and [application authentication](how-to-managed-cluster-application-managed-identity.md)
+* [NSG rules and other networking options](how-to-managed-cluster-networking.md)
+* [Stateless-only node types](how-to-managed-cluster-stateless-node-type.md)
+* [Virtual machine scale set extensions](how-to-managed-cluster-vmss-extension.md) for node types
## Next steps To get started with Service Fabric managed clusters, try the quickstart: > [!div class="nextstepaction"]
-> [Create a Service Fabric managed cluster (preview)](quickstart-managed-cluster-template.md)
-
+> [Create a Service Fabric managed cluster](quickstart-managed-cluster-template.md)
[sf-composition]: ./media/overview-managed-cluster/sfrp-composition-resource.png [sf-encapsulation]: ./media/overview-managed-cluster/sfrp-encapsulated-resource.png
service-fabric Quickstart Managed Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/quickstart-managed-cluster-template.md
Title: Deploy a Service Fabric managed cluster (preview) using Azure Resource Manager
+ Title: Deploy a Service Fabric managed cluster using Azure Resource Manager
description: Learn how to create a Service Fabric managed cluster with an Azure Resource Manager template Previously updated : 09/28/2020- Last updated : 5/10/2021
-# Quickstart: Deploy a Service Fabric managed cluster (preview) with an Azure Resource Manager template
+# Quickstart: Deploy a Service Fabric managed cluster with an Azure Resource Manager template
Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines your deployment and cluster management experience. Service Fabric managed clusters are a fully encapsulated resource that enables you to deploy a single Service Fabric cluster resource rather than having to deploy all of the underlying resources that make up a Service Fabric cluster. This article describes how to deploy a Service Fabric managed cluster for testing in Azure using an Azure Resource Manager template (ARM template).
Take note of the certificate thumbprint as this will be required to deploy the t
* **Subscription**: Select an Azure subscription. * **Resource Group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then choose **OK**.
- * **Location**: Select a location, such as **eastus2**. Supported regions for Service Fabric managed clusters preview include `centraluseuap`, `eastus2euap`, `eastasia`, `northeurope`, `westcentralus`, and `eastus2`.
+ * **Location**: Select a location.
* **Cluster Name**: Enter a unique name for your cluster, such as *mysfcluster*. * **Admin Username**: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster. * **Admin Password**: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
service-fabric Tutorial Managed Cluster Add Remove Node Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-add-remove-node-type.md
Title: Add and remove node types of a Service Fabric managed cluster (preview)
+ Title: Add and remove node types of a Service Fabric managed cluster
description: In this tutorial, learn how to add and remove node types of a Service Fabric managed cluster. Previously updated : 09/28/2020 Last updated : 05/10/2021
-# Tutorial: Add and remove node types from a Service Fabric managed cluster (preview)
+# Tutorial: Add and remove node types from a Service Fabric managed cluster
In this tutorial series we will discuss:
service-fabric Tutorial Managed Cluster Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-deploy-app.md
Title: Deploy an application to a Service Fabric managed cluster via PowerShell (preview)
+ Title: Deploy an application to a Service Fabric managed cluster via PowerShell
description: In this tutorial, you will connect to a Service Fabric managed cluster and deploy an application via PowerShell. Previously updated : 09/28/2020 Last updated : 5/10/2021
-# Tutorial: Deploy an app to a Service Fabric managed cluster (preview)
+# Tutorial: Deploy an app to a Service Fabric managed cluster
In this tutorial series we will discuss:
To connect to your cluster, you'll need the cluster certificate thumbprint. You
The following command can be used to query your cluster resource for the cluster certificate thumbprint. ```powershell
-$serverThumbprint = (Get-AzResource -ResourceId /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ServiceFabric/managedclusters/mysfcluster).Properties.clusterCertificateThumbprint
+$serverThumbprint = (Get-AzResource -ResourceId /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ServiceFabric/managedclusters/mysfcluster).Properties.clusterCertificateThumbprints
``` With the cluster certificate thumbprint, you're ready to connect to your cluster.
Remove-ServiceFabricApplication fabric:/Voting
## Next steps
-In this step, we deployed an app to a Service Fabric managed cluster. To learn more about Service Fabric managed clusters, see:
+In this step, we deployed an application to a Service Fabric managed cluster. To learn more about application deployment options, see:
-> [!div class="nextstepaction"]
-> [Service Fabric managed clusters frequently asked questions](faq-managed-cluster.md)
+* [Deploy managed cluster application secrets](how-to-managed-cluster-application-secrets.md)
+* [Deploy managed cluster applications using ARM templates](how-to-managed-cluster-app-deployment-template.md)
+* [Deploy managed cluster applications with managed identity](how-to-managed-cluster-application-managed-identity.md)
+
+To learn more about managed cluster configuration options, see:
+
+* [Configure your managed cluster](how-to-managed-cluster-configuration.md)
service-fabric Tutorial Managed Cluster Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-deploy.md
Title: Deploy a Service Fabric managed cluster (preview)
+ Title: Deploy a Service Fabric managed cluster
description: In this tutorial, you will deploy a Service Fabric managed cluster for testing. Previously updated : 08/27/2020 Last updated : 5/10/2021
-# Tutorial: Deploy a Service Fabric managed cluster (preview)
+# Tutorial: Deploy a Service Fabric managed cluster
In this tutorial series we will discuss:
service-fabric Tutorial Managed Cluster Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-scale.md
Title: Scale out a Service Fabric managed cluster (preview)
+ Title: Scale out a Service Fabric managed cluster
description: In this tutorial, learn how to scale out a node type of a Service Fabric managed cluster. Previously updated : 09/28/2020 Last updated : 5/10/2021
-# Tutorial: Scale out a Service Fabric managed cluster (preview)
+# Tutorial: Scale out a Service Fabric managed cluster
In this tutorial series we will discuss:
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas
Germany | Germany Central, Germany Northeast China | China East, China North, China North2, China East2 Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, UAE Central restricted for UAE North customers, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South
+Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, UAE Central restricted for UAE North customers, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.
Replication and recovery of VMs between two regions in different continents is limited to the following region pairs:
spring-cloud Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart-deploy-apps.md
Compiling the project takes 5 -10 minutes. Once completed, you should have indiv
* **Public endpoint:** In the list of provided projects, enter the number that corresponds with `api-gateway`. This gives it public access. 1. Verify the `appName` elements in the POM files are correct:
- ```
+ ```xml
<build> <plugins> <plugin>
Compiling the project takes 5 -10 minutes. Once completed, you should have indiv
<appName>customers-service</appName> ```
- You may have to correct `appName` texts to the following:
+ Make sure the `appName` values match the following; remove any prefix if needed and save the file:
* api-gateway * customers-service
Compiling the project takes 5 -10 minutes. Once completed, you should have indiv
```azurecli mvn azure-spring-cloud:deploy ```
+
## Verify the services A successful deployment command will return the URL of the form: "https://<service name>-spring-petclinic-api-gateway.azuremicroservices.io". Use it to navigate to the running service.
You can also navigate the Azure portal to find the URL.
## Deploy extra apps
-To get the PetClinic app functioning with all features like Admin Server, Visits and Veterinarians, you can deploy the other microservices. Rerun the configuration command and select the following microservices.
+To get the PetClinic app functioning with all features like Admin Server, Visits and Veterinarians, you can deploy the other microservices. Rerun the configuration command and select the following microservices.
* admin-server * vets-service * visits-service
-Then run the `deploy` command again.
+Correct the app names in each `pom.xml` for the above modules, and then run the `deploy` command again.
#### [IntelliJ](#tab/IntelliJ)
In order to deploy to Azure you must sign in with your Azure account with Azure
1. Set **Public Endpoint** to *Enable*. 1. In the **App:** textbox, select **Create app...**. 1. Enter *api-gateway*, then click **OK**.
-1. Specify the memory and JVM options.
+1. Set the memory to 2 GB and the JVM options to `-Xms2048m -Xmx2048m`.
![Memory JVM options](media/spring-cloud-intellij-howto/memory-jvm-options.png)
Other microservices included in this sample can be deployed similarly.
In this quickstart, you created Azure resources that will continue to accrue charges if they remain in your subscription. If you don't intend to continue on to the next quickstart, see [Clean up resources](./quickstart-logs-metrics-tracing.md#clean-up-resources). Otherwise, advance to the next quickstart: > [!div class="nextstepaction"]
-> [Logs, Metrics and Tracing](./quickstart-logs-metrics-tracing.md)
+> [Logs, Metrics and Tracing](./quickstart-logs-metrics-tracing.md)
static-web-apps Add Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/add-api.md
Next, add the following build details.
1. Click the **Go to Resource** button to take you to the web app's _Overview_ page.
- As the app is being built in the background, you can click on the banner which contains a link to view the build status.
-
- :::image type="content" source="media/add-api/github-action-flag.png" alt-text="GitHub Workflow":::
-
-1. Once the deployment is complete, you can navigate to the web app, by clicking on the _URL_ link shown on the _Overview_ page.
-
- :::image type="content" source="media/add-api/static-app-url-from-portal.png" alt-text="Access static app URL from the Azure portal":::
## Clean up resources
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-azure-resource-manager.md
Previously updated : 04/18/2021 Last updated : 05/10/2021
In this tutorial, you learn to:
## Prerequisites - **Active Azure account:** If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/).-- **GitHub Account:** If you don't have one, you can [create a GitHub Account for free](https://github.com)
+- **GitHub Account:** If you don't have one, you can [create a GitHub Account for free](https://github.com)
- **Editor for ARM Templates:** Reviewing and editing templates requires a JSON editor. Visual Studio Code with the [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) is well suited for editing ARM Templates. For instructions on how to install and configure Visual Studio Code, see [Quickstart: Create ARM templates with Visual Studio Code](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md). - **Azure CLI or Azure PowerShell**: Deploying ARM templates requires a command line tool. For the installation instructions, see:
In this tutorial, you learn to:
- [Install Azure CLI on macOS](https://docs.microsoft.com/cli/azure/install-azure-cli-macos) - [Install Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps) - ## Create a GitHub personal access token One of the required parameters in the ARM template is `repositoryToken`, which allows the ARM deployment process to interact with the GitHub repo holding the static site source code.
One of the required parameters in the ARM template is `repositoryToken`, which a
1. Copy the token value and paste it into a text editor for later use. > [!IMPORTANT]
-> Make sure you copy this token and store it somewhere safe. Consider storing this token in [Azure KeyVault](../azure-resource-manager/templates/template-tutorial-use-key-vault.md) and access it in your ARM Template.
-## Create a GitHub repo
-
-The following steps demonstrate how to create a new repository for a static web app.
-
-> [!NOTE]
-> If you want to use an existing code repository you can skip this section.
+> Make sure you copy this token and store it somewhere safe. Consider storing this token in [Azure Key Vault](../azure-resource-manager/templates/template-tutorial-use-key-vault.md) and accessing it in your ARM template.
-1. Log on to [GitHub](https://github.com) using your GitHub account credentials.
-
-1. Create a new repository named **myfirstswadeployment**.
+## Create a GitHub repo
-1. Define your GitHub repo as **Public**.
+This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app used to deploy using Azure Static Web Apps.
-1. Select the checkbox next to **Add a Readme file**.
+1. Navigate to the following location to create a new repository:
+ 1. [https://github.com/staticwebdev/vanilla-basic/generate](https://github.com/login?return_to=/staticwebdev/vanilla-basic/generate)
-1. Select **Create repository**.
+1. Name your repository **myfirstswadeployment**
-1. Once the repository is created, select **Add file**.
+ > [!NOTE]
+ > Azure Static Web Apps requires at least one HTML file to create a web app. The repository you create in this step includes a single _index.html_ file.
-1. Select **Create New file** and provide **index.html** as file name.
+1. Select **Create repository from template**.
-1. Paste the following snippet of code in the **Edit new file** pane
-
- ```html
- <!doctype html>
- <html>
- <head>
- <title>Hello World!</title>
- </head>
- <body>
- <h1>Hello World!</h1>
- </body>
- </html>
- ```
-
-1. Scroll down and select **Commit new file** to save the file.
+ :::image type="content" source="./media/getting-started/create-template.png" alt-text="Create repository from template":::
## Create the ARM Template
You need either Azure CLI or Azure PowerShell to deploy the template.
To deploy a template sign in to either the Azure CLI or Azure PowerShell.
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Connect-AzAccount
-```
- # [Azure CLI](#tab/azure-cli) ```azurecli az login ``` --
-If you have multiple Azure subscriptions, select the subscription you want to use. Replace `<SUBSCRIPTION-ID-OR-SUBSCRIPTION-NAME>` with your subscription information:
- # [PowerShell](#tab/azure-powershell) ```azurepowershell
-Set-AzContext <SUBSCRIPTION-ID-OR-SUBSCRIPTION-NAME>
+Connect-AzAccount
``` ++
+If you have multiple Azure subscriptions, select the subscription you want to use. Replace `<SUBSCRIPTION-ID-OR-SUBSCRIPTION-NAME>` with your subscription information:
+ # [Azure CLI](#tab/azure-cli) ```azurecli az account set --subscription <SUBSCRIPTION-ID-OR-SUBSCRIPTION-NAME> ```
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzContext <SUBSCRIPTION-ID-OR-SUBSCRIPTION-NAME>
+```
+ ## Create a resource group
When you deploy a template, you specify a resource group that contains related r
> [!NOTE] > The CLI examples in this article are written for the Bash shell.
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+resourceGroupName="myfirstswadeployRG"
+
+az group create \
+ --name $resourceGroupName \
+ --location "Central US"
+```
+ # [PowerShell](#tab/azure-powershell) ```azurepowershell
New-AzResourceGroup `
-Location "Central US" ``` ++
+## Deploy template
+
+Use one of these deployment options to deploy the template.
+ # [Azure CLI](#tab/azure-cli) ```azurecli
-resourceGroupName="myfirstswadeployRG"
-az group create \
- --name $resourceGroupName \
- --location "Central US"
+az deployment group create \
+ --name DeployLocalTemplate \
+ --resource-group $resourceGroupName \
+ --template-file <PATH-TO-AZUREDEPLOY.JSON> \
+ --parameters <PATH-TO-AZUREDEPLOY.PARAMETERS.JSON> \
+ --verbose
``` --
-## Deploy template
-
-Use one of these deployment options to deploy the template.
+To learn more about deploying templates using the Azure CLI, see [Deploy resources with ARM templates and Azure CLI](../azure-resource-manager/templates/deploy-cli.md).
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroupDeployment `
To learn more about deploying templates using Azure PowerShell, see [Deploy resources with ARM templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md).
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-
-az deployment group create \
- --name DeployLocalTemplate \
- --resource-group $resourceGroupName \
- --template-file <PATH-TO-AZUREDEPLOY.JSON> \
- --parameters <PATH-TO-AZUREDEPLOY.PARAMETERS.JSON> \
- --verbose
-```
-
-To learn more about deploying templates using the Azure CLI, see [Deploy resources with ARM templates and Azure CLI](../azure-resource-manager/templates/deploy-cli.md).
- + [!INCLUDE [view website](../../includes/static-web-apps-get-started-view-website.md)]+ ## Clean up resources Clean up the resources you deployed by deleting the resource group.
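For reference, a minimal Azure CLI sketch (assuming the resource group name used earlier in this quickstart):

```azurecli
az group delete --name myfirstswadeployRG
```

The command prompts for confirmation before deleting the group and all resources in it.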
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-manage.md
Previously updated : 03/26/2021 Last updated : 05/10/2021
To create an encryption scope in the Azure portal, follow these steps:
1. In the **Create Encryption Scope** pane, enter a name for the new scope. 1. Select the desired type of encryption key support, either **Microsoft-managed keys** or **Customer-managed keys**. - If you selected **Microsoft-managed keys**, click **Create** to create the encryption scope.
- - If you selected **Customer-managed keys**, then select a subscription and specify a key vault or a managed HSM and a key to use for this encryption scope, as shown in the following image.
+ - If you selected **Customer-managed keys**, then select a subscription and specify a key vault or a managed HSM and a key to use for this encryption scope.
+1. If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope.
:::image type="content" source="media/encryption-scope-manage/create-encryption-scope-customer-managed-key-portal.png" alt-text="Screenshot showing how to create encryption scope in Azure portal":::
To create an encryption scope with PowerShell, install the [Az.Storage](https://
### Create an encryption scope protected by Microsoft-managed keys
-To create a new encryption scope that is protected by Microsoft-managed keys, call the **New-AzStorageEncryptionScope** command with the `-StorageEncryption` parameter.
+To create a new encryption scope that is protected by Microsoft-managed keys, call the [New-AzStorageEncryptionScope](/powershell/module/az.storage/new-azstorageencryptionscope) command with the `-StorageEncryption` parameter.
+
+If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `-RequireInfrastructureEncryption` parameter.
Remember to replace the placeholder values in the example with your own values:
Set-AzKeyVaultAccessPolicy `
-PermissionsToKeys wrapkey,unwrapkey,get ```
-Next, call the **New-AzStorageEncryptionScope** command with the `-KeyvaultEncryption` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+Next, call the [New-AzStorageEncryptionScope](/powershell/module/az.storage/new-azstorageencryptionscope) command with the `-KeyvaultEncryption` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+
+If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `-RequireInfrastructureEncryption` parameter.
Remember to replace the placeholder values in the example with your own values:
To create an encryption scope with Azure CLI, first install Azure CLI version 2.
### Create an encryption scope protected by Microsoft-managed keys
-To create a new encryption scope that is protected by Microsoft-managed keys, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az_storage_account_encryption_scope_create) command, specifying the `--key-source` parameter as `Microsoft.Storage`. Remember to replace the placeholder values with your own values:
+To create a new encryption scope that is protected by Microsoft-managed keys, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az_storage_account_encryption_scope_create) command, specifying the `--key-source` parameter as `Microsoft.Storage`.
+
+If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `--require-infrastructure-encryption` parameter and set its value to `true`.
+
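As a hedged sketch (placeholder names in angle brackets; the flag is the one named above):

```azurecli
az storage account encryption-scope create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --name <scope> \
    --key-source Microsoft.Storage \
    --require-infrastructure-encryption true
```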
+Remember to replace the placeholder values with your own values:
```azurecli-interactive az storage account encryption-scope create \
az storage account encryption-scope create \
### Create an encryption scope protected by customer-managed keys
-To create a new encryption scope that is protected by Microsoft-managed keys, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az_storage_account_encryption_scope_create) command, specifying the `--key-source` parameter as `Microsoft.Storage`. Remember to replace the placeholder values with your own values:
- To create a new encryption scope that is protected by customer-managed keys in a key vault or managed HSM, first configure customer-managed keys for the storage account. You must assign a managed identity to the storage account and then use the managed identity to configure the access policy for the key vault so that the storage account has permissions to access it. For more information, see [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md). To configure customer-managed keys for use with an encryption scope, purge protection must be enabled on the key vault or managed HSM. The key vault or managed HSM must be in the same region as the storage account.
az keyvault set-policy \
--key-permissions get unwrapKey wrapKey ```
-Next, call the **az storage account encryption-scope create** command with the `--key-uri` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+Next, call the [az storage account encryption-scope create](/cli/azure/storage/account/encryption-scope#az_storage_account_encryption_scope_create) command with the `--key-uri` parameter, and specify the key URI. Including the key version on the key URI is optional. If you omit the key version, then the encryption scope will automatically use the most recent key version. If you include the key version, then you must update the key version manually to use a different version.
+
+If infrastructure encryption is enabled for the storage account, then it will automatically be enabled for the new encryption scope. Otherwise, you can choose whether to enable infrastructure encryption for the encryption scope. To create the new scope with infrastructure encryption enabled, include the `--require-infrastructure-encryption` parameter and set its value to `true`.
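For illustration, a hedged sketch that combines the key URI with infrastructure encryption (placeholder values; assumes the key vault permissions above are already granted):

```azurecli
az storage account encryption-scope create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --name <scope> \
    --key-source Microsoft.KeyVault \
    --key-uri <key-uri> \
    --require-infrastructure-encryption true
```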
Remember to replace the placeholder values in the example with your own values:
To learn how to configure Azure Storage encryption with customer-managed keys in
- [Configure encryption with customer-managed keys stored in Azure Key Vault](../common/customer-managed-keys-configure-key-vault.md) - [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](../common/customer-managed-keys-configure-key-vault-hsm.md).
+To learn more about infrastructure encryption, see [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md).
+ ## List encryption scopes for storage account # [Portal](#tab/portal)
az storage account encryption-scope update \
- [Azure Storage encryption for data at rest](../common/storage-service-encryption.md) - [Encryption scopes for Blob storage](encryption-scope-overview.md)-- [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md)
+- [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md)
+- [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md)
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 03/26/2021 Last updated : 05/10/2021
If you define an encryption scope with a customer-managed key, then you can choo
A storage account may have up to 10,000 encryption scopes that are protected with customer-managed keys for which the key version is automatically updated. If your storage account already has 10,000 encryption scopes that are protected with customer-managed keys that are being automatically updated, then the key version must be updated manually for any additional encryption scopes that are protected with customer-managed keys.
+### Infrastructure encryption
+
+Infrastructure encryption in Azure Storage enables double encryption of data. With infrastructure encryption, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
+
+Infrastructure encryption is supported for an encryption scope, as well as at the level of the storage account. If infrastructure encryption is enabled for an account, then any encryption scope created on that account automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure encryption setting for an encryption scope cannot be changed after the scope is created.
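For example, a hedged Azure CLI sketch that enables infrastructure encryption at account creation, so that every encryption scope on the account inherits it (placeholder names):

```azurecli
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --kind StorageV2 \
    --sku Standard_LRS \
    --require-infrastructure-encryption
```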
+
+For more information about infrastructure encryption, see [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md).
+ ### Encryption scopes for containers and blobs When you create a container, you can specify a default encryption scope for the blobs that are subsequently uploaded to that container. When you specify a default encryption scope for a container, you can decide how the default encryption scope is enforced:
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support.md
The status of items that appear in this table will change over time as support
- NFS 3.0 support can't be enabled on existing storage accounts. -- NFS 3.0 support cant' be disabled in a storage account after you've enabled it.
+- NFS 3.0 support can't be disabled in a storage account after you've enabled it.
- Files can't be viewed in either the Azure portal or Azure Storage Explorer. To list files and directories, either [mount a Blob Storage container by using the NFS 3.0 protocol](network-file-system-protocol-support-how-to.md), or use the [Blob service REST API](/rest/api/storageservices/blob-service-rest-api).
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-manage-find-blobs.md
The following criteria applies to blob index filtering:
- Filters are applied with lexicographic sorting on strings - Same sided range operations on the same key are invalid (for example, `"Rank" > '10' AND "Rank" >= '15'`) - When using REST to create a filter expression, characters should be URI encoded
+- Tag queries are optimized for equality matches using a single tag (for example, StoreID = "100"). Range queries using a single tag with >, >=, <, <= are also efficient. Any query using AND with more than one tag is not as efficient. For example, Cost > "01" AND Cost <= "100" is efficient, while Cost > "01" AND StoreID = "2" is not as efficient. A sketch of a corresponding REST query follows this list.
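As an illustrative sketch of a `Find Blobs by Tags` REST call (hypothetical account name, `<sas-token>` placeholder; note the URI-encoded `where` expression):

```bash
# Single-clause equality query (StoreID = '100'): one list transaction.
curl "https://mystorageaccount.blob.core.windows.net/?comp=blobs&where=StoreID%3D%27100%27&<sas-token>"
```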
The below table shows all the valid operators for `Find Blobs by Tags`:
The following table summarizes the differences between metadata and blob index t
## Pricing
-Blob index pricing is in public preview and subject to change for general availability. You're charged for the monthly average number of index tags within a storage account. There's no cost for the indexing engine. Requests to `Set Blob Tags`, `Get Blob Tags`, and `Find Blobs by Tags` are charged in accordance to their respective operation types. See [Block Blob pricing to learn more](https://azure.microsoft.com/pricing/details/storage/blobs/).
+Blob index pricing is in public preview and subject to change for general availability. You're charged for the monthly average number of index tags within a storage account. There's no cost for the indexing engine. Requests to `Set Blob Tags`, `Get Blob Tags`, and `Find Blobs by Tags` are charged at the current respective transaction rates. Note that the number of list transactions consumed by a `Find Blobs by Tags` request is equal to the number of clauses in the request. For example, the query (StoreID = 100) is one list transaction. The query (StoreID = 100 AND SKU = 10010) is two list transactions. See [Block Blob pricing to learn more](https://azure.microsoft.com/pricing/details/storage/blobs/).
## Regional availability and storage account support
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/infrastructure-encryption-enable.md
Title: Create a storage account with infrastructure encryption enabled for double encryption of data
+ Title: Enable infrastructure encryption for double encryption of data
-description: Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account is encrypted twice with two different encryption algorithms and two different keys.
+description: Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account or encryption scope is encrypted twice with two different encryption algorithms and two different keys.
Previously updated : 09/17/2020 Last updated : 05/10/2021
-# Create a storage account with infrastructure encryption enabled for double encryption of data
+# Enable infrastructure encryption for double encryption of data
-Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of encryption continues to protect your data.
+Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of encryption continues to protect your data.
+
+Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with Azure Key Vault or Key Vault Managed Hardware Security Model (HSM) (preview). Infrastructure-level encryption relies on Microsoft-managed keys and always uses a separate key. For more information about key management with Azure Storage encryption, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
-To doubly encrypt your data, you must first create a storage account that is configured for infrastructure encryption. This article describes how to create a storage account that enables infrastructure encryption.
+To doubly encrypt your data, you must first create a storage account or an encryption scope that is configured for infrastructure encryption. This article describes how to enable infrastructure encryption.
## Register to use infrastructure encryption
-To create a storage account that has infrastructure encryption enabled, you must first register to use this feature with Azure by using PowerShell or Azure CLI.
+To enable infrastructure encryption, you must first register to use this feature with Azure by using PowerShell or Azure CLI.
# [Azure portal](#tab/portal)
N/A
## Create an account with infrastructure encryption enabled
-You must configure a storage account to use infrastructure encryption at the time that you create the account. The storage account must be of type general-purpose v2.
-
-Infrastructure encryption cannot be enabled or disabled after the account has been created.
+To enable infrastructure encryption for a storage account, you must configure a storage account to use infrastructure encryption at the time that you create the account. Infrastructure encryption cannot be enabled or disabled after the account has been created. The storage account must be of type general-purpose v2.
# [Azure portal](#tab/portal)
To use PowerShell to create a storage account with infrastructure encryption ena
:::image type="content" source="media/infrastructure-encryption-enable/create-account-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to enable infrastructure encryption when creating account":::
+To verify that infrastructure encryption is enabled for a storage account with the Azure portal, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. Under **Settings**, choose **Encryption**.
+
+ :::image type="content" source="media/infrastructure-encryption-enable/verify-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to verify that infrastructure encryption is enabled for account":::
+ # [PowerShell](#tab/powershell) To use PowerShell to create a storage account with infrastructure encryption enabled, make sure you have installed the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage), version 2.2.0 or later. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
New-AzStorageAccount -ResourceGroupName <resource_group> `
-RequireInfrastructureEncryption ```
+To verify that infrastructure encryption is enabled for a storage account, call the [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) command. This command returns a set of storage account properties and their values. Retrieve the `RequireInfrastructureEncryption` field within the `Encryption` property and verify that it is set to `True`.
+
+The following example retrieves the value of the `RequireInfrastructureEncryption` property. Remember to replace the placeholder values in angle brackets with your own values:
+
+```powershell
+$account = Get-AzStorageAccount -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account>
+$account.Encryption.RequireInfrastructureEncryption
+```
+ # [Azure CLI](#tab/azure-cli) To use Azure CLI to create a storage account that has infrastructure encryption enabled, make sure you have installed Azure CLI version 2.8.0 or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
az storage account create \
--require-infrastructure-encryption ```
+To verify that infrastructure encryption is enabled for a storage account, call the [az storage account show](/cli/azure/storage/account#az-storage-account-show) command. This command returns a set of storage account properties and their values. Look for the `requireInfrastructureEncryption` field within the `encryption` property and verify that it is set to `true`.
+
+The following example retrieves the value of the `requireInfrastructureEncryption` property. Remember to replace the placeholder values in angle brackets with your own values:
+
+```azurecli-interactive
+az storage account show \
+ --name <storage-account> \
+ --resource-group <resource-group>
+```
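As a convenience, a hedged variant that returns only that field through a JMESPath query:

```azurecli
az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --query encryption.requireInfrastructureEncryption
```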
+ # [Template](#tab/template) The following JSON example creates a general-purpose v2 storage account that is configured for read-access geo-redundant storage (RA-GRS) and has infrastructure encryption enabled for double encryption of data. Remember to replace the placeholder values in brackets with your own values:
The following JSON example creates a general-purpose v2 storage account that is
-## Verify that infrastructure encryption is enabled
-
-# [Azure portal](#tab/portal)
-
-To verify that infrastructure encryption is enabled for a storage account with the Azure portal, follow these steps:
+## Create an encryption scope with infrastructure encryption enabled
-1. Navigate to your storage account in the Azure portal.
-1. Under **Settings**, choose **Encryption**.
-
- :::image type="content" source="media/infrastructure-encryption-enable/verify-infrastructure-encryption-portal.png" alt-text="Screenshot showing how to verify that infrastructure encryption is enabled for account":::
-
-# [PowerShell](#tab/powershell)
-
-To verify that infrastructure encryption is enabled for a storage account with PowerShell, call the [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) command. This command returns a set of storage account properties and their values. Retrieve the `RequireInfrastructureEncryption` field within the `Encryption` property and verify that it is set to `True`.
-
-The following example retrieves the value of the `RequireInfrastructureEncryption` property. Remember to replace the placeholder values in angle brackets with your own values:
-
-```powershell
-$account = Get-AzStorageAccount -ResourceGroupName <resource-group> `
- -StorageAccountName <storage-account>
-$account.Encryption.RequireInfrastructureEncryption
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To verify that infrastructure encryption is enabled for a storage account with Azure CLI, call the [az storage account show](/cli/azure/storage/account#az_storage_account_show) command. This command returns a set of storage account properties and their values. Look for the `requireInfrastructureEncryption` field within the `encryption` property and verify that it is set to `true`.
-
-The following example retrieves the value of the `requireInfrastructureEncryption` property. Remember to replace the placeholder values in angle brackets with your own values:
-
-```azurecli-interactive
-az storage account show /
- --name <storage-account> /
- --resource-group <resource-group>
-```
-
-# [Template](#tab/template)
-
-N/A
--
+If infrastructure encryption is enabled for an account, then any encryption scope created on that account automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure encryption setting for an encryption scope cannot be changed after the scope is created. For more information, see [Create an encryption scope](../blobs/encryption-scope-manage.md#create-an-encryption-scope).
## Next steps - [Azure Storage encryption for data at rest](storage-service-encryption.md) - [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md)
+- [Encryption scopes for Blob storage](../blobs/encryption-scope-overview.md)
storage Storage Auth Aad Msi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-auth-aad-msi.md
To create a service principal with Azure CLI and assign an Azure role, call the
If you do not have sufficient permissions to assign a role to the service principal, you may need to ask the account owner or administrator to perform the role assignment.
-The following example uses the Azure CLI to create a new service principal and assign the **Storage Blob Data Reader** role to it with account scope
+The following example uses the Azure CLI to create a new service principal and assign the **Storage Blob Data Contributor** role to it with account scope.
```azurecli-interactive az ad sp create-for-rbac \
async static Task CreateBlockBlobAsync(string accountName, string containerName,
- [Manage access rights to storage data with Azure RBAC](./storage-auth-aad-rbac-portal.md). - [Use Azure AD with storage applications](storage-auth-aad-app.md). - [Run PowerShell commands with Azure AD credentials to access blob data](../blobs/authorize-data-operations-powershell.md)-- [Tutorial: Access storage from App Service using managed identities](../../app-service/scenario-secure-app-access-storage.md)
+- [Tutorial: Access storage from App Service using managed identities](../../app-service/scenario-secure-app-access-storage.md)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
The following table lists services that can have access to your storage account
| Azure Data Factory | Microsoft.DataFactory/factories | Allows access to storage accounts through the ADF runtime. | | Azure Data Share | Microsoft.DataShare/accounts | Allows access to storage accounts through Data Share. | | Azure DevTest Labs | Microsoft.DevTestLab/labs | Allows access to storage accounts through DevTest Labs. |
-| Azure IoT Hub | Microsoft.Devices/IotHubs | Allows data from an IoT hub to be written to Blob storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-to-storage-account-endpoints-for-routing) |
+| Azure IoT Hub | Microsoft.Devices/IotHubs | Allows data from an IoT hub to be written to Blob storage. [Learn more](../../iot-hub/virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources) |
| Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. [Learn more](../../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). | | Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). | | Azure Media Services | Microsoft.Media/mediaservices | Allows access to storage accounts through Media Services. |
storage Storage Use Azcopy Blobs Synchronize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-synchronize.md
azcopy sync 'https://mystorageaccount.blob.core.windows.net/mycontainer' 'C:\myD
## Update a container with changes in another container
-The first container that appears in this command is the source. The second one is the destination.
+The first container that appears in this command is the source. The second one is the destination. Make sure to append a SAS token to each source URL.
+
+If you provide authorization credentials by using Azure Active Directory (Azure AD), you can omit the SAS token only from the destination URL. Make sure that you've set up the proper roles in your destination account. See [Option 1: Use Azure Active Directory](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory).
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). **Syntax**
-`azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive`
+`azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive`
**Example** ```azcopy
-azcopy sync 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive
+azcopy sync 'https://mysourceaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive
``` ## Update a directory with changes to a directory in another container
-The first directory that appears in this command is the source. The second one is the destination.
+The first directory that appears in this command is the source. The second one is the destination. Make sure to append a SAS token to each source URL.
+
+If you provide authorization credentials by using Azure Active Directory (Azure AD), you can omit the SAS token only from the destination URL. Make sure that you've set up the proper roles in your destination account. See [Option 1: Use Azure Active Directory](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory).
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). **Syntax**
-`azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive`
+`azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>/<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive`
**Example** ```azcopy
-azcopy sync 'https://mysourceaccount.blob.core.windows.net/<container-name>/myDirectory' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myDirectory' --recursive
+azcopy sync 'https://mysourceaccount.blob.core.windows.net/<container-name>/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myDirectory' --recursive
``` ## Synchronize with optional flags
synapse-analytics How To Discover Connect Analyze Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md
Title: Discover, connect, and explore data in Synapse using Azure Purview description: Guide on how to discover data, connect them and explore them in Synapse-+ Last updated 12/16/2020-+
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
Title: Connect an Azure Purview AccountΓÇ» description: Connect an Azure Purview Account to a Synapse workspace.-+ Last updated 12/16/2020-+
synapse-analytics Tutorial Build Applications Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md
from mmlspark.cognitive import *
from notebookutils import mssparkutils # A general Cognitive Services key for Text Analytics and Computer Vision (or use separate keys that belong to each service)
-service_key = "ADD_YOUR_SUBSCRIPION_KEY"
+cognitive_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_SERVICE_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
# A Bing Search v7 subscription key
-bing_search_key = "ADD_YOUR_SUBSCRIPION_KEY"
+bingsearch_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_BING_SEARCH_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
# An Anomaly Detector subscription key
-anomaly_key = "ADD_YOUR_SUBSCRIPION_KEY"
-# Your linked key vault for Synapse workspace
-key_vault = "YOUR_KEY_VAULT_NAME"
+anomalydetector_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_ANOMALY_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
-cognitive_service_key = mssparkutils.credentials.getSecret(key_vault, service_key)
-bingsearch_service_key = mssparkutils.credentials.getSecret(key_vault, bing_search_key)
-anomalydetector_key = mssparkutils.credentials.getSecret(key_vault, anomaly_key)
- ```
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-data-analyst.md
The OPENROWSET(BULK...) function allows you to access files in Azure Storage. [O
Since data is stored in the Parquet file format, automatic schema inference is available. You can easily query the data without listing the data types of all columns in the files. You also can use the virtual column mechanism and the filepath function to filter out a certain subset of files.
+> [!NOTE]
+> If you are using a database with a non-default collation (the default collation is SQL_Latin1_General_CP1_CI_AS), you should take case sensitivity into account.
+>
+> If you create a database with a case-sensitive collation, make sure to use the correct casing for column names.
+>
+> For example, the column name 'tpepPickupDateTime' would be correct, while 'tpeppickupdatetime' wouldn't work with a non-default collation.
+ Let's first get familiar with the NYC Taxi data by running the following query: ```sql
time-series-insights Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-storage.md
Title: 'Storage overview - Azure Time Series Insights Gen2 | Microsoft Docs' description: Learn about data storage in Azure Time Series Insights Gen2.-+
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv4-ddsv4-series.md
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB)
| Standard_D48ds_v4 | 48 | 192 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 8|24000 | | Standard_D64ds_v4 | 64 | 256 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 8|30000 |
-<sup>**</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)
+<sup>**</sup> These IOPS values can be achieved by using [Gen2 VMs](generation-2.md).
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
To configure encryption during the distribution installation, do the following s
![openSUSE 13.2 Setup - Provide passphrase on boot](./media/disk-encryption/opensuse-encrypt-fig2.png)
-3. Prepare the VM for uploading to Azure by following the instructions in [Prepare a SLES or openSUSE virtual machine for Azure](./suse-create-upload-vhd.md?toc=/azure/virtual-machines/linux/toc.json#prepare-opensuse-131). Don't run the last step (deprovisioning the VM) yet.
+3. Prepare the VM for uploading to Azure by following the instructions in [Prepare a SLES or openSUSE virtual machine for Azure](./suse-create-upload-vhd.md?toc=/azure/virtual-machines/linux/toc.json#prepare-opensuse-152). Don't run the last step (deprovisioning the VM) yet.
To configure encryption to work with Azure, do the following steps: 1. Edit the /etc/dracut.conf, and add the following line:
virtual-machines Login Using Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/login-using-aad.md
Title: Log in to a Linux VM with Azure Active Directory credentials
+ Title: Log in to a Linux VM with Azure Active Directory credentials
description: Learn how to create and configure a Linux VM to sign in using Azure Active Directory authentication.-+ Previously updated : 11/17/2020-- Last updated : 05/07/2021
-# Preview: Log in to a Linux virtual machine in Azure using Azure Active Directory authentication
-
-To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (AD) authentication. When you use Azure AD authentication for Linux VMs, you centrally control and enforce policies that allow or deny access to the VMs. This article shows you how to create and configure a Linux VM to use Azure AD authentication.
++++ ++
+# Deprecated: Log in to a Linux virtual machine in Azure with Azure Active Directory using device code flow authentication
-> [!IMPORTANT]
-> Azure Active Directory authentication is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Use this feature on a test virtual machine that you expect to discard after testing.
->
+> [!CAUTION]
+> **The public preview feature described in this article is being deprecated August 15th, 2021.**
+>
+> This feature is being replaced with the ability to use Azure AD and SSH via certificate-based authentication. For more information, see [Preview: Log in to a Linux virtual machine in Azure with Azure Active Directory using SSH certificate-based authentication](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (AD) authentication. When you use Azure AD authentication for Linux VMs, you centrally control and enforce policies that allow or deny access to the VMs. This article shows you how to create and configure a Linux VM to use Azure AD authentication.
There are many benefits of using Azure AD authentication to log in to Linux VMs in Azure, including:
The following Linux distributions are currently supported during the preview of
| SUSE Linux Enterprise Server | SLES 12 | | Ubuntu Server | Ubuntu 14.04 LTS, Ubuntu Server 16.04, and Ubuntu Server 18.04 | -
-The following Azure regions are currently supported during the preview of this feature:
--- All global Azure regions-
->[!IMPORTANT]
-> To use this preview feature, only deploy a supported Linux distro and in a supported Azure region. The feature is not supported in Azure Government or sovereign clouds.
+> [!IMPORTANT]
+> The preview is not supported in Azure Government or sovereign clouds.
> > It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md). - If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.31 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli). ## Network requirements
With this line:
%aad_admins ALL=(ALL) NOPASSWD:ALL ``` - ## Troubleshoot sign-in issues Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/suse-create-upload-vhd.md
Last updated 12/01/2020
-# Prepare a SLES or openSUSE virtual machine for Azure
+# Prepare a SLES or openSUSE Leap virtual machine for Azure
-This article assumes that you have already installed a SUSE or openSUSE Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
+This article assumes that you have already installed a SUSE or openSUSE Leap Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
-## SLES / openSUSE installation notes
+## SLES / openSUSE Leap installation notes
* Please see also [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more tips on preparing Linux for Azure. * The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet. * When installing the Linux system it is recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
This article assumes that you have already installed a SUSE or openSUSE Linux op
* All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you must ensure that the raw disk size is a multiple of 1MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information. ## Use SUSE Studio
-[SUSE Studio](https://studioexpress.opensuse.org/) can easily create and manage your SLES and openSUSE images for Azure and Hyper-V. This is the recommended approach for customizing your own SLES and openSUSE images.
+[SUSE Studio](https://studioexpress.opensuse.org/) can easily create and manage your SLES and openSUSE Leap images for Azure and Hyper-V. This is the recommended approach for customizing your own SLES and openSUSE Leap images.
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your Own Subscription) images for SLES at [VM Depot](https://www.microsoft.com/research/wp-content/uploads/2016/04/using-and-contributing-vms-to-vm-depot.pdf).
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
16. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
-## Prepare openSUSE 13.1+
+## Prepare openSUSE 15.2+
1. In the center pane of Hyper-V Manager, select the virtual machine. 2. Click **Connect** to open the window for the virtual machine. 3. On the shell, run the command '`zypper lr`'. If this command returns output similar to the following, then the repositories are configured as expected--no adjustments are necessary (note that version numbers may vary): | # | Alias | Name | Enabled | Refresh | - | :-- | :-- | : | :
- | 1 | Cloud:Tools_13.1 | Cloud:Tools_13.1 | Yes | Yes
- | 2 | openSUSE_13.1_OSS | openSUSE_13.1_OSS | Yes | Yes
- | 3 | openSUSE_13.1_Updates | openSUSE_13.1_Updates | Yes | Yes
+ | 1 | Cloud:Tools_15.2 | Cloud:Tools_15.2 | Yes | Yes
+ | 2 | openSUSE_15.2_OSS | openSUSE_15.2_OSS | Yes | Yes
+ | 3 | openSUSE_15.2_Updates | openSUSE_15.2_Updates | Yes | Yes
If the command returns "No repositories defined..." then use the following commands to add these repos: ```console
- # sudo zypper ar -f http://download.opensuse.org/repositories/Cloud:Tools/openSUSE_13.1 Cloud:Tools_13.1
- # sudo zypper ar -f https://download.opensuse.org/distribution/13.1/repo/oss openSUSE_13.1_OSS
- # sudo zypper ar -f http://download.opensuse.org/update/13.1 openSUSE_13.1_Updates
+ # sudo zypper ar -f http://download.opensuse.org/repositories/Cloud:Tools/openSUSE_15.2 Cloud:Tools_15.2
+ # sudo zypper ar -f https://download.opensuse.org/distribution/15.2/repo/oss openSUSE_15.2_OSS
+ # sudo zypper ar -f http://download.opensuse.org/update/15.2 openSUSE_15.2_Updates
``` You can then verify the repositories have been added by running the command '`zypper lr`' again. In case one of the relevant update repositories is not enabled, enable it with following command:
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
13. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure. ## Next steps
-You're now ready to use your SUSE Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your SUSE Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines User Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/user-data.md
+
+ Title: User data for Azure Virtual Machine
+description: Learn how to insert scripts or other metadata into an Azure virtual machine at provisioning time.
+++++ Last updated : 04/30/2021++++
+# User Data for Azure Virtual Machine
+
+User data allows you to pass your own scripts or metadata to your virtual machine.
+
+## What is "user data"?
+
+User data is a set of scripts or other metadata that is inserted into an Azure virtual machine at provisioning time. Any application on the virtual machine can access the user data from the Azure Instance Metadata Service (IMDS) after provisioning.
+
+User data is a new version of [custom data](./custom-data.md) and it offers added benefits:
+
+* User data can be retrieved from the Azure Instance Metadata Service (IMDS) after provisioning.
+
+* User data is persistent. It will be available during the lifetime of the VM.
+
+* User data can be updated from outside the VM, without stopping or rebooting the VM.
+
+* User data can be queried via the GET VM/VMSS API with the $expand option.
+
+ In addition, if user data is not added at provisioning time, you can still add it after provisioning.
+
+**Security warning**
+
+> [!WARNING]
+> User data will not be encrypted, and any process on the VM can query this data. You should not store confidential information in user data.
+
+Make sure you use the latest Azure Resource Manager API version to work with the new user data features. The contents should be base64-encoded before being passed to the API. The size cannot exceed 64 KB.
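For example, a minimal sketch on Linux (`my-script.sh` is a hypothetical payload):

```bash
# Base64-encode the payload without line wraps; the result must stay under 64 KB.
userData=$(base64 -w0 my-script.sh)
echo "Encoded length: ${#userData} characters"
```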
+
+## Create user data for Azure VM/VMSS
+
+**Adding user data when creating a new VM**
+
+Use [this Azure Resource Manager template](https://aka.ms/ImdsUserDataArmTemplate) to create a new VM with user data.
+If you are using the REST API, for single VMs, add 'userData' to the "properties" section of the PUT request that creates the VM.
+
+```json
+{
+ "name": "testVM",
+ "location": "West US",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_A1"
+ },
+ "storageProfile": {
+ "osDisk": {
+ "osType": "Windows",
+ "name": "osDisk",
+ "createOption": "Attach",
+ "vhd": {
+ "uri": "http://myaccount.blob.core.windows.net/container/directory/blob.vhd"
+ }
+ }
+ },
+ "userData": "c2FtcGxlIHVzZXJEYXRh",
+ "networkProfile": { "networkInterfaces" : [ { "name" : "nic1" } ] },
+ }
+}
+```
+
+**Adding user data when you create a new virtual machine scale set**
+
+Using the REST API, add 'userData' to the "virtualMachineProfile" section of the PUT request when creating the virtual machine scale set.
+```json
+{
+ "location": "West US",
+ "sku": {
+ "name": "Standard_A1",
+ "capacity": 1
+ },
+ "properties": {
+ "upgradePolicy": {
+ "mode": "Automatic"
+ },
+ "virtualMachineProfile": {
+ "userData": "VXNlckRhdGE=",
+ "osProfile": {
+ "computerNamePrefix": "TestVM",
+ "adminUsername": "TestUserName",
+ "windowsConfiguration": {
+ "provisionVMAgent": true,
+ "timeZone": "Dateline Standard Time"
+ }
+ },
+ "storageProfile": {
+ "osDisk": {
+ "createOption": "FromImage",
+ "caching": "ReadOnly"
+ },
+ "imageReference": {
+ "publisher": "publisher",
+ "offer": "offer",
+ "sku": "sku",
+ "version": "1.2.3"
+ }
+ },
+ "networkProfile": {"networkInterfaceConfigurations":[{"name":"nicconfig1","properties":{"ipConfigurations":[{"name":"ip1","properties":{"subnet":{"id":"vmssSubnet0"}}}]}}]},
+ "diagnosticsProfile": {
+ "bootDiagnostics": {
+ "enabled": true,
+ "storageUri": "https://crputest.blob.core.windows.net"
+ }
+ }
+ },
+ "provisioningState": 0,
+ "overprovision": false,
+ "uniqueId": "00000000-0000-0000-0000-000000000000"
+ }
+}
+```
++
+## Retrieving user data
+
+Applications running inside the VM can retrieve user data from the IMDS endpoint. For details, see the [IMDS sample code](./linux/instance-metadata-service.md?tabs=linux#get-user-data).
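For illustration, a hedged sketch from inside the VM (the api-version is an assumption; use one that exposes `userData`):

```bash
# Query IMDS for user data and decode it from base64.
curl -H "Metadata:true" --noproxy "*" \
  "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
  | base64 --decode
```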
+
+Customers can retrieve the existing value of user data via the REST API by using the `$expand=userData` option (the request body can be left empty).
+
+Single VMs:
+
+`GET "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}?$expand=userData"`
+
+Virtual machine scale set:
+
+`GET "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMSSName}?$expand=userData"`
+
+Virtual machine scale set VM:
+
+`GET "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMSSName}/virtualmachines/{vmss instance id}?$expand=userData"`
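One hedged way to issue these GET calls is `az rest`; the api-version below is an assumption, so use one that supports `userData`:

```azurecli
az rest --method get \
  --url "https://management.azure.com/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}?api-version=2021-03-01&\$expand=userData"
```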
+
+## Updating user data
+
+With the REST API, you can use a normal PUT or PATCH request to update the user data. The user data is updated without the need to stop or reboot the VM.
+
+`PUT "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}"`
+
+`PATCH "/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}"`
+
+The `properties` section of these requests should contain your desired `userData` field, like this:
+
+```json
+"properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D1_v2"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "sku": "2016-Datacenter",
+ "publisher": "MicrosoftWindowsServer",
+ "version": "latest",
+ "offer": "WindowsServer"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "storageAccountType": "Standard_LRS"
+ },
+ "name": "vmOSdisk",
+ "createOption": "FromImage"
+ }
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
+ "properties": {
+ "primary": true
+ }
+ }
+ ]
+ },
+ "osProfile": {
+ "adminUsername": "{your-username}",
+ "computerName": "{vm-name}",
+ "adminPassword": "{your-password}"
+ },
+ "diagnosticsProfile": {
+ "bootDiagnostics": {
+ "storageUri": "http://{existing-storage-account-name}.blob.core.windows.net",
+ "enabled": true
+ }
+ },
+ "userData": "U29tZSBDdXN0b20gRGF0YQ=="
+ }
+```
+> [!NOTE]
+> If you pass in an empty string for "userData" in this case, the user data will be deleted.
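For instance, a hedged `az rest` sketch that patches only the user data (api-version is an assumption):

```azurecli
az rest --method patch \
  --url "https://management.azure.com/subscriptions/{guid}/resourceGroups/{RGName}/providers/Microsoft.Compute/virtualMachines/{VMName}?api-version=2021-03-01" \
  --body '{"properties": {"userData": "U29tZSBDdXN0b20gRGF0YQ=="}}'
```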
+
+## User data and custom data
+
+Custom data will continue to work the same way as today. Note that you cannot retrieve custom data from IMDS.
+
+## Adding user data to an existing VM
+
+If you have an existing VM/VMSS without user data, you can still add user data to this VM by using the update commands, as described in the ["Updating user data"](#updating-user-data) section. Make sure you upgrade to the latest version of the Azure Resource Manager API.
+
+## Next steps
+
+Try out [Azure Instance Metadata Service](./linux/instance-metadata-service.md) to learn how to get VM instance metadata and user data from its endpoint.
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Previously updated : 05/05/2021 Last updated : 05/10/2021
Use the steps in [Add or delete users - Azure Active Directory](../active-direct
* **Tenant:** TenantID for the Azure AD tenant ```https://login.microsoftonline.com/{AzureAD TenantID}/```
- * **Audience:** ApplicationID of the "Azure VPN" Azure AD Enterprise App ```{AppID of the "Azure VPN" AD Enterprise app}```
+ * **Audience:** Application ID of the "Azure VPN" Azure AD Enterprise App
+
+ * Enter `41b23e61-6c1e-4545-b367-cd054e0ed4b4` for Azure Public
+ * Enter `51bb15d4-3a4f-4ebf-9dca-40096fe32426` for Azure Government
+ * Enter `538ee9e6-310a-468d-afef-ea97365856a9` for Azure Germany
+ * Enter `49f817b6-84ae-4cc0-928c-73f27289b3aa` for Azure China 21Vianet
+ * **Issuer**: URL of the Secure Token Service ```https://sts.windows.net/{AzureAD TenantID}/```