Updates from: 04/02/2021 03:06:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Title: Synchronize attributes to Azure AD for mapping
-description: When configuring user provisioning to SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default.
+ Title: Synchronize attributes to Azure Active Directory for mapping
+description: When configuring user provisioning with Azure Active Directory and SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default.
Previously updated : 03/17/2021 Last updated : 03/31/2021
-# Syncing extension attributes attributes
+# Syncing extension attributes for app provisioning
-When customizing attribute mappings for user provisioning, you might find that the attribute you want to map doesn't appear in the **Source attribute** list. This article shows you how to add the missing attribute by synchronizing it from your on-premises Active Directory (AD) to Azure Active Directory (Azure AD) or by creating the extension attributes in Azure AD for a cloud only user.
+Azure Active Directory (Azure AD) must contain all the data (attributes) required to create a user profile when provisioning user accounts from Azure AD to a [SaaS app](../saas-apps/tutorial-list.md). When customizing attribute mappings for user provisioning, you might find the attribute you want to map doesn't appear in the **Source attribute** list. This article shows you how to add the missing attribute.
-Azure AD must contain all the data required to create a user profile when provisioning user accounts from Azure AD to a SaaS app. In some cases, to make the data available you might need synchronize attributes from your on-premises AD to Azure AD. Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as SAMAccountName) that are synchronized by default might not be exposed using the Azure AD Graph API. In these cases, you can use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD. That way, the attribute will be visible to the Azure AD Graph API and the Azure AD provisioning service. If the data you need for provisioning is in Active Directory but isn't available for provisioning because of the reasons described above, you can use Azure AD Connect to create extension attributes.
+For users only in Azure AD, you can [create schema extensions using PowerShell or Microsoft Graph](#create-an-extension-attribute-on-a-cloud-only-user).
-While most users are likely hybrid users that are synchronized from Active Directory, you can also create extensions on cloud-only users without using Azure AD Connect. Using PowerShell or Microsoft Graph you can extend the schema of a cloud only user.
-
-## Create an extension attribute using Azure AD Connect
-
-1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**.
-
- ![Azure Active Directory Connect wizard Additional tasks page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-customize.png)
-
-2. Sign in as an Azure AD Global Administrator.
-
-3. On the **Optional Features** page, select **Directory extension attribute sync**.
-
- ![Azure Active Directory Connect wizard Optional features page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extension-attribute-sync.png)
-
-4. Select the attribute(s) you want to extend to Azure AD.
- > [!NOTE]
- > The search under **Available Attributes** is case sensitive.
-
- ![Screenshot that shows the "Directory extensions" selection page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extensions.png)
-
-5. Finish the Azure AD Connect wizard and allow a full synchronization cycle to run. When the cycle is complete, the schema is extended and the new values are synchronized between your on-premises AD and Azure AD.
-
-6. In the Azure portal, while you're [editing user attribute mappings](customize-application-attributes.md), the **Source attribute** list will now contain the added attribute in the format `<attributename> (extension_<appID>_<attributename>)`. Select the attribute and map it to the target application for provisioning.
-
- ![Azure Active Directory Connect wizard Directory extensions selection page](./media/user-provisioning-sync-attributes-for-mapping/attribute-mapping-extensions.png)
-
-> [!NOTE]
-> The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/forums/169401-azure-active-directory).
+For users in on-premises Active Directory, you must sync the users to Azure AD. You can sync users and attributes using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md). Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as SAMAccountName) that are synchronized by default might not be exposed using the Azure AD Graph API. In these cases, you can [use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD](#create-an-extension-attribute-using-azure-ad-connect). That way, the attribute will be visible to the Azure AD Graph API and the Azure AD provisioning service.
## Create an extension attribute on a cloud only user
-Customers can use Microsoft Graph and PowerShell to extend the user schema. These extension attributes are automatically discovered in most cases, but customers with more than 1000 service principals may find extensions missing in the source attribute list. If an attribute that you create using the steps below does not automatically appear in the source attribute list please verify using graph that the extension attribute was successfully created and then add it to your schema [manually](https://docs.microsoft.com/azure/active-directory/app-provisioning/customize-application-attributes#editing-the-list-of-supported-attributes). When making the graph requests below, please click learn more to verify the permissions required to make the requests. You can use [graph explorer](https://docs.microsoft.com/graph/graph-explorer/graph-explorer-overview) to make the requests.
+You can use Microsoft Graph and PowerShell to extend the user schema for users in Azure AD. These extension attributes are automatically discovered in most cases.
+
+When you have more than 1,000 service principals, you may find extensions missing in the source attribute list. If an attribute you've created doesn't automatically appear, verify the attribute was created and then add it manually to your schema. To verify it was created, use Microsoft Graph and [Graph Explorer](/graph/graph-explorer/graph-explorer-overview). To add it manually to your schema, see [Editing the list of supported attributes](customize-application-attributes.md#editing-the-list-of-supported-attributes).
### Create an extension attribute on a cloud only user using Microsoft Graph
-You will need to use an application to extend the schema of your users. List the apps in your tenant to identify the id of the application that you would like to use to extend the user schema. [Learn more.](https://docs.microsoft.com/graph/api/application-list?view=graph-rest-1.0&tabs=http)
+You can extend the schema of Azure AD users using [Microsoft Graph](/graph/overview).
+
+First, list the apps in your tenant to get the ID of the app you're working on. To learn more, see [List applications](/graph/api/application-list?view=graph-rest-1.0&tabs=http&preserve-view=true).
```json
GET https://graph.microsoft.com/v1.0/applications
```
-Create the extension attribute. Replace the **id** property below with the **id** retrieved in the previous step. You will need to use the **"id"** attribute and not the "appId". [Learn more.](https://docs.microsoft.com/graph/api/application-post-extensionproperty?view=graph-rest-1.0&tabs=http)
+Next, create the extension attribute. Replace the **id** property below with the **id** retrieved in the previous step. You'll need to use the **id** attribute, not the **appId**. To learn more, see [Create extensionProperty](/graph/api/application-post-extensionproperty?view=graph-rest-1.0&tabs=http&preserve-view=true).
+ ```json
POST https://graph.microsoft.com/v1.0/applications/{id}/extensionProperties
Content-type: application/json

{
    "name": "extensionName",
    "dataType": "String",
    "targetObjects": [
        "User"
    ]
}
```
-The previous request created an extension attribute with the format "extension_appID_extensionName". Update a user with the extension attribute. [Learn more.](https://docs.microsoft.com/graph/api/user-update?view=graph-rest-1.0&tabs=http)
+The previous request created an extension attribute with the format `extension_appID_extensionName`. You can now update a user with this extension attribute. To learn more, see [Update user](/graph/api/user-update?view=graph-rest-1.0&tabs=http&preserve-view=true).
```json
PATCH https://graph.microsoft.com/v1.0/users/{id}
Content-type: application/json

{
    "extension_inputAppId_extensionName": "extensionValue"
}
```
-Check the user to ensure the attribute was successfully updated. [Learn more.](https://docs.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http#example-3-users-request-using-select)
+Finally, verify the attribute for the user. To learn more, see [Get a user](/graph/api/user-get?view=graph-rest-1.0&tabs=http&preserve-view=true#example-3-users-request-using-select).
```json
GET https://graph.microsoft.com/v1.0/users/{id}?$select=displayName,extension_inputAppId_extensionName
```

### Create an extension attribute on a cloud only user using PowerShell

```powershell
New-AzureADApplicationExtensionProperty -ObjectId $App.ObjectId -Name "TestAttributeName" -DataType "String" -TargetObjects "User"
#List users in your tenant to determine the objectid for your user
Get-AzureADUser
-#Set a value for the extension property on the user. Replace the objectid with the id of the user and the extension name with the value from the previous step
+#Set a value for the extension property on the user. Replace the objectid with the ID of the user and the extension name with the value from the previous step
Set-AzureADUserExtension -objectid 0ccf8df6-62f1-4175-9e55-73da9e742690 -ExtensionName "extension_6552753978624005a48638a778921fan3_TestAttributeName"

#Verify that the attribute was added correctly.
Get-AzureADUser -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 | Select -ExpandProperty ExtensionProperty
```
+## Create an extension attribute using Azure AD Connect
+
+1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**.
+
+ ![Azure Active Directory Connect wizard Additional tasks page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-customize.png)
+
+2. Sign in as an Azure AD Global Administrator.
+
+3. On the **Optional Features** page, select **Directory extension attribute sync**.
+
+ ![Azure Active Directory Connect wizard Optional features page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extension-attribute-sync.png)
+
+4. Select the attribute(s) you want to extend to Azure AD.
+ > [!NOTE]
+ > The search under **Available Attributes** is case sensitive.
+
+ ![Screenshot that shows the "Directory extensions" selection page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extensions.png)
+
+5. Finish the Azure AD Connect wizard and allow a full synchronization cycle to run. When the cycle is complete, the schema is extended and the new values are synchronized between your on-premises AD and Azure AD.
+
+6. In the Azure portal, while you're [editing user attribute mappings](customize-application-attributes.md), the **Source attribute** list will now contain the added attribute in the format `<attributename> (extension_<appID>_<attributename>)`. Select the attribute and map it to the target application for provisioning.
+
+ ![Azure Active Directory Connect wizard Directory extensions selection page](./media/user-provisioning-sync-attributes-for-mapping/attribute-mapping-extensions.png)
+
+> [!NOTE]
+> The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/forums/169401-azure-active-directory).
## Next steps

* [Define who is in scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 03/29/2021 Last updated : 03/31/2021
Keep these limitations in mind:
- When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation does not apply to a Temporary Access Pass that can be used more than once.
- Guest users can't sign in with a Temporary Access Pass.
-- Users in scope for Self Service Password Reset (SSPR) registration policy will be required to register one of the SSPR methods after they have signed in with a Temporary Access Pass. If the user is only going to use FIDO2 key, exclude them from the SSPR policy or disable the SSPR registration policy.
+- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass.
+Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE) and AutoPilot.
- When Seamless SSO is enabled on the tenant, the users are prompted to enter a password. The **Use your Temporary Access Pass instead** link will be available for the user to sign-in with a Temporary Access Pass.
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
Previously updated : 2/22/2021 Last updated : 3/30/2021
The authentication system alters and adds features on an ongoing basis to improv
## Upcoming changes
+### Bug fix: Azure AD will no longer URL encode the state parameter twice
+
+**Effective date**: May 2021
+
+**Endpoints impacted**: v1.0 and v2.0
+
+**Protocol impacted**: All flows that visit the `/authorize` endpoint (implicit flow and authorization code flow)
+
+A bug was found and fixed in the Azure AD authorization response. During the `/authorize` leg of authentication, the `state` parameter from the request is included in the response, in order to preserve app state and help prevent CSRF attacks. Azure AD incorrectly URL encoded the `state` parameter before inserting it into the response, where it was encoded once more. This would result in applications incorrectly rejecting the response from Azure AD.
+
+Azure AD will no longer double-encode this parameter, allowing apps to correctly parse the result. This change will be made for all applications.
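As an illustration only (a Python sketch of generic URL-encoding behavior, not Azure AD's implementation), the difference between single and double encoding of `state` looks like this:

```python
from urllib.parse import quote, unquote

state = "page=/dashboard&nonce=a b"

# Correct behavior: the state value is encoded once when building the
# redirect URL, so one decode on the client recovers the original value.
single = quote(state, safe="")
assert unquote(single) == state

# The bug: the already-encoded value was encoded a second time, so a client
# that decoded once saw percent-escapes instead of its original state value.
double = quote(single, safe="")
assert unquote(double) == single
assert unquote(double) != state
assert unquote(unquote(double)) == state
```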
+ ### Conditional Access will only trigger for explicitly requested scopes
-**Effective date**: March 2021
+**Effective date**: May 2021, with gradual rollout starting in April.
**Endpoints impacted**: v2.0
In order to reduce the number of unnecessary Conditional Access prompts, Azure A
Apps will now receive access tokens with a mix of permissions: those requested, as well as those they have consent for that do not require Conditional Access prompts. The scopes of the access token are reflected in the token response's `scope` parameter.
+This change will be made for all apps except those with an observed dependency on this behavior. Developers will receive outreach if they are exempted from this change, as they may have a dependency on the additional Conditional Access prompts.
+ **Examples**

An app has consent for `user.read`, `files.readwrite`, and `tasks.read`. `files.readwrite` has Conditional Access policies applied to it, while the other two do not. If an app makes a token request for `scope=user.read`, and the currently signed in user has not passed any Conditional Access policies, then the resulting token will be for the `user.read` and `tasks.read` permissions. `tasks.read` is included because the app has consent for it, and it does not require a Conditional Access policy to be enforced.
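The example above can be modeled as simple set arithmetic (a hedged sketch using the scope names from the example, not an actual token response):

```python
# Scope sets taken from the example in the text.
consented = {"user.read", "files.readwrite", "tasks.read"}
needs_conditional_access = {"files.readwrite"}   # has CA policies applied
requested = {"user.read"}

# When the signed-in user has not passed any Conditional Access policies,
# the token contains the requested scopes plus any consented scopes that
# do not require a Conditional Access policy to be enforced.
token_scopes = requested | (consented - needs_conditional_access)
assert token_scopes == {"user.read", "tasks.read"}
```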
active-directory V2 Conditional Access Dev Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-conditional-access-dev-guide.md
Specifically, all Microsoft Graph scopes represent some dataset that can individ
For example, if an app requests the following Microsoft Graph scopes, ```
-scopes="Bookings.Read.All Mail.Read"
+scopes="ChannelMessages.Read.All Mail.Read"
```
-An app can expect their users to fulfill all policies set on Bookings and Exchange. Some scopes may map to multiple datasets if it grants access.
+An app can expect its users to fulfill all policies set on Teams and Exchange. Some scopes may map to multiple datasets if they grant access.
### Complying with a Conditional Access policy
To try out this scenario, see our [JS SPA On-behalf-of code sample](https://gith
* For more Azure AD code samples, see [samples](sample-v2-code.md).
* For more info on the MSAL SDKs and to access the reference documentation, see the [Microsoft Authentication Library overview](msal-overview.md).
* To learn more about multi-tenant scenarios, see [How to sign in users using the multi-tenant pattern](howto-convert-app-to-be-multi-tenant.md).
-* Learn more about [Conditional access and securing access to IoT apps](/azure/architecture/example-scenario/iot-aad/iot-aad).
+* Learn more about [Conditional access and securing access to IoT apps](/azure/architecture/example-scenario/iot-aad/iot-aad).
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Previously updated : 10/2/2020 Last updated : 4/1/2021
You can use the [OAuth 2.0 client credentials grant](https://tools.ietf.org/html
This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
-The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. In this scenario, the client is typically a middle-tier web service, a daemon service, or a web site. For a higher level of assurance, the Microsoft identity platform also allows the calling service to use a certificate (instead of a shared secret) as a credential.
+The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. For a higher level of assurance, the Microsoft identity platform also allows the calling service to use a certificate (instead of a shared secret) as a credential. Because the application's own credentials are being used, these credentials must be kept safe. _Never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application.
In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there is no user involved in the authentication. This article covers both the steps needed to [authorize an application to call an API](#application-permissions), as well as [how to get the tokens needed to call that API](#get-a-token).
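As a protocol-level sketch only (placeholder values; the article recommends MSAL for real apps), the form-encoded token request body for this grant can be built like this:

```python
from urllib.parse import urlencode

# Placeholder values only -- never hard-code a real client secret in source code.
token_request_body = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",
    "scope": "https://graph.microsoft.com/.default",
    "client_secret": "<secret-from-a-secure-store>",
    "grant_type": "client_credentials",
})

# This body is POSTed to https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
assert "grant_type=client_credentials" in token_request_body
```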
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
To reduce the configuration administrative effort, you should first consider the
> Configuring selective password hash synchronization directly influences password writeback. Password changes or password resets that are initiated in Azure Active Directory write back to on-premises Active Directory only if the user is in scope for password hash synchronization. ### The adminDescription attribute
-Both scenarios rely on setting the adminDescription attribute of users to a specific value. This allows the the rules to be applied and is what makes selective PHS work.
+Both scenarios rely on setting the adminDescription attribute of users to a specific value. This allows the rules to be applied and is what makes selective PHS work.
|Scenario|adminDescription value| |--|--|
The following section describes how to enable selective password hash synchroniz
- Set the attribute value, in Active Directory, that was defined as the scoping attribute on the users you want to allow in password hash synchronization.

>[!Important]
->The steps provided to configure selective password hash synchronization will only effect user objects that have
+>The steps provided to configure selective password hash synchronization will only affect user objects that have
>the attribute **adminDescription** populated in Active Directory with the value of **PHSFiltered**.
>If this attribute is not populated or the value is something other than **PHSFiltered**, then these rules will not be applied to the user objects.
the attribute **adminDescription** populated in Active Directory with the value
3. The first rule will disable password hash sync. Provide the following name to the new custom rule: **In from AD - User AccountEnabled - Filter Users from PHS**. Change the precedence value to a number lower than 100 (for example **90** or whichever is the lowest value available in your environment).
- Make sure the checkboxes **Enable Password Sync** and **Disabled** are unchecked and c.
+ Make sure the checkboxes **Enable Password Sync** and **Disabled** are unchecked.
   Click **Next**.

   ![Edit inbound](media/how-to-connect-selective-password-hash-synchronization/exclude-3.png)

4. In **Scoping filter**, click **Add clause**.
The following is a summary of the actions that will be taken in the steps below:
- Set the attribute value, in Active Directory, that was defined as the scoping attribute on the users you want to allow in password hash synchronization.

>[!Important]
->The steps provided to configure selective password hash synchronization will only effect user objects that have
+>The steps provided to configure selective password hash synchronization will only affect user objects that have
>the attribute **adminDescription** populated in Active Directory with the value of **PHSIncluded**.
>If this attribute is not populated or the value is something other than **PHSIncluded**, then these rules will not be applied to the user objects.
active-directory Application Proxy Wildcard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-wildcard.md
For security reasons, this is a hard requirement and we will not support wildcar
### DNS updates
-When using custom domains, you need to create a DNS entry with a CNAME record for the external URL (for example, `*.adventure-works.com`) pointing to the external URL of the application proxy endpoint.For wildcard applications, the CNAME record needs to point to the relevant external URLs:
+When using custom domains, you need to create a DNS entry with a CNAME record for the external URL (for example, `*.adventure-works.com`) pointing to the external URL of the application proxy endpoint. For wildcard applications, the CNAME record needs to point to the relevant external URL:
> `<yourAADTenantId>.tenant.runtime.msappproxy.net`

To confirm that you have configured your CNAME correctly, you can use [nslookup](/windows-server/administration/windows-commands/nslookup) on one of the target endpoints, for example, `expenses.adventure-works.com`. Your response should include the already mentioned alias (`<yourAADTenantId>.tenant.runtime.msappproxy.net`).
+### Using connector groups assigned to an App Proxy cloud service region other than the default region
+If you have connectors installed in regions different from your default tenant region, it may be beneficial to change which region your connector group is optimized for to improve performance accessing these applications. To learn more, see [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service-preview).
+
+If the connector group assigned to the wildcard application uses a **different region than your default region**, you will need to update the CNAME record to point to a regional specific external URL. Use the following table to determine the relevant URL:
+
+| Connector Assigned Region | External URL |
+| --- | --- |
+| Asia | `<yourAADTenantId>.asia.tenant.runtime.msappproxy.net`|
+| Australia | `<yourAADTenantId>.aus.tenant.runtime.msappproxy.net` |
+| Europe | `<yourAADTenantId>.eur.tenant.runtime.msappproxy.net`|
+| North America | `<yourAADTenantId>.nam.tenant.runtime.msappproxy.net` |
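The table above can be captured in a small helper that computes the CNAME target for a given connector region (a hypothetical sketch; the region keys and suffixes come from the table, and the regionless default matches the earlier DNS section):

```python
# Region suffixes from the table above.
REGION_SUFFIX = {
    "Asia": "asia",
    "Australia": "aus",
    "Europe": "eur",
    "North America": "nam",
}

def cname_target(tenant_id: str, region: str = None) -> str:
    """Return the external URL the wildcard app's CNAME record should point to."""
    if region is None:
        # Connector group uses the default region: region-less URL.
        return f"{tenant_id}.tenant.runtime.msappproxy.net"
    return f"{tenant_id}.{REGION_SUFFIX[region]}.tenant.runtime.msappproxy.net"

assert cname_target("contoso-tenant-id", "Europe") == "contoso-tenant-id.eur.tenant.runtime.msappproxy.net"
```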
+ ## Considerations

Here are some considerations you should take into account for wildcard applications.
If you have multiple applications published for finance and you have `finance.ad
## Next steps

- To learn more about **Custom domains**, see [Working with custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md).
-- To learn more about **Publishing applications**, see [Publish applications using Azure AD Application Proxy](application-proxy-add-on-premises-application.md)
+- To learn more about **Publishing applications**, see [Publish applications using Azure AD Application Proxy](application-proxy-add-on-premises-application.md)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
You detect inactive accounts by evaluating the **lastSignInDateTime** property e
- **Users by date**: In this scenario, you request a list of users with a lastSignInDateTime before a specified date: `https://graph.microsoft.com/beta/users?filter=signInActivity/lastSignInDateTime le 2019-06-01T00:00:00Z` ----
+> [!NOTE]
+> You may need to generate a report of the last sign-in date of all users. If so, you can use the following scenario.
+> **Last Sign In Date and Time for All Users**: In this scenario, you request a list of all users, and the lastSignInDateTime for each respective user: `https://graph.microsoft.com/beta/users?$select=displayName,signInActivity`
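If you apply the cutoff locally instead of in the `$filter`, the check can be sketched as follows (hypothetical sample data shaped like the `signInActivity` response above):

```python
from datetime import datetime, timezone

# Hypothetical response items shaped like the signInActivity query above.
users = [
    {"displayName": "Avery", "signInActivity": {"lastSignInDateTime": "2019-05-12T08:00:00Z"}},
    {"displayName": "Blake", "signInActivity": {"lastSignInDateTime": "2020-01-03T09:30:00Z"}},
]

cutoff = datetime(2019, 6, 1, tzinfo=timezone.utc)

def last_sign_in(user):
    ts = user["signInActivity"]["lastSignInDateTime"]
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# 'le' in the Graph $filter corresponds to <= here.
inactive = [u["displayName"] for u in users if last_sign_in(u) <= cutoff]
assert inactive == ["Avery"]
```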
## What you need to know
To generate a lastSignInDateTime timestamp, you need a successful sign-in. Becau
* [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md) * [Audit API reference](/graph/api/resources/directoryaudit?view=graph-rest-beta)
-* [Sign-in activity report API reference](/graph/api/resources/signin?view=graph-rest-beta)
+* [Sign-in activity report API reference](/graph/api/resources/signin?view=graph-rest-beta)
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
The following scenarios are not supported right now:
- *Azure AD P2 licensed customers only* Even after deleting the group, it is still shown as an eligible member of the role in PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal.
- Use the new [Exchange Admin Center](https://admin.exchange.microsoft.com/) for role assignments via group membership. The old Exchange Admin Center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected.
- Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.
+- [Apps Admin Center](https://config.office.com/) doesn't support this feature yet. Assign users directly to Office Apps Administrator role.
We are fixing these issues.
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/credential-design.md
+
+ Title: How to customize your Azure Active Directory Verifiable Credentials (preview)
+description: This article shows you how to create your own custom verifiable credential
++++++ Last updated : 04/01/2021+
+# Customer intent: As a developer I am looking for information on how to enable my users to control their own information
++
+# How to customize your verifiable credentials (preview)
+
+Verifiable credentials are made up of two components, the rules and display files. The rules file determines what the user needs to provide before they receive a verifiable credential. The display file controls the branding of the credential and styling of the claims. In this guide, we will explain how to modify both files to meet the requirements of your organization.
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Rules File: Requirements from the user
+
+The rules file is a simple JSON file that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
+
+There are currently three input types that are available to configure in the rules file. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the three types with explanations.
+
+- ID Token
+- Verifiable credentials via a verifiable presentation.
+- Self-Attested Claims
+
+**ID Token:** The sample App and Tutorial use the ID Token. When this option is configured, you will need to provide an OpenID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
+
+**Verifiable Credentials:** The end result of an issuance flow is to produce a verifiable credential, but you may also ask the user to present a verifiable credential in order to issue one. The rules file can take specific claims from the presented verifiable credential and include them in the newly issued verifiable credential from your organization.
+
+**Self-Attested Claims:** When this option is selected, the user can type information directly into Authenticator. At this time, strings are the only supported input for self-attested claims.
+
+![issuance flow for a verifiable credential](media/credential-design/issuance-doc.png)
+
+**Static Claims:** You can also declare a static claim in the rules file; this input does not come from the user. The issuer defines the static claim in the rules file, and it looks like any other claim in the verifiable credential. Add a credentialSubject after vc.type, then declare the attribute and the claim.
+
+```json
+"vc": {
+  "type": [ "StaticClaimCredential" ],
+  "credentialSubject": {
+    "staticClaim": true,
+    "anotherClaim": "Your Claim Here"
+  }
+}
+```
+## Input Type: ID Token
+
+To use an ID token as input, the rules file needs to reference the well-known endpoint of an OIDC-compatible identity system. In that system, you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be added to the rules file, and the scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory requires the email scope if you want to return an email claim in the ID token.
+
+```json
+ {
+ "attestations": {
+ "idTokens": [
+ {
+ "mapping": {
+ "firstName": { "claim": "given_name" },
+ "lastName": { "claim": "family_name" }
+ },
+ "configuration": "https://dIdPlayground.b2clogin.com/dIdPlayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
+ "client_id": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90",
+ "redirect_uri": "vcclient://openid/",
+ "scope": "openid profile"
+ }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": ["https://schema.org/EducationalCredential", "https://schemas.ed.gov/universityDiploma2020", "https://schemas.contoso.edu/diploma2020" ]
+ }
+ }
+```
+
+| Property | Description |
+| -- | -- |
+| `attestations.idTokens` | An array of OpenID Connect identity providers that are supported for sourcing user information. |
+| `...mapping` | An object that describes how claims in each ID token are mapped to attributes in the resulting verifiable credential. |
+| `...mapping.{attribute-name}` | The attribute that should be populated in the resulting Verifiable Credential. |
+| `...mapping.{attribute-name}.claim` | The claim in ID tokens whose value should be used to populate the attribute. |
+| `...configuration` | The location of your identity provider's configuration document. This URL must adhere to the [OpenID Connect standard for identity provider metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata). The configuration document must include the `issuer`, `authorization_endpoint`, `token_endpoint`, and `jwks_uri` fields. |
+| `...client_id` | The client ID obtained during the client registration process. |
+| `...scope` | A space-delimited list of scopes the IDP needs to be able to return the correct claims in the ID token. |
+| `...redirect_uri` | Must always use the value `vcclient://openid/`. |
+| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials (for example, 2592000 seconds is 30 days). After this time period elapses, the verifiable credential will no longer be valid. Omitting this value means that each verifiable credential remains valid until it is explicitly revoked. |
+| `vc.type` | An array of strings indicating the schema(s) that your Verifiable Credential satisfies. See the section below for more detail. |
+
+### vc.type: Choose credential type(s)
+
+All verifiable credentials must declare their "type" in their rules file. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI does not need to be addressable; it is treated as a string.
+
+As an example, a diploma credential issued by Contoso University might declare the following types:
+
+| Type | Purpose |
+| - | - |
+| `https://schema.org/EducationalCredential` | Declares that diplomas issued by Contoso University contain attributes defined by schema.org's `EducationalCredential` object. |
+| `https://schemas.ed.gov/universityDiploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by the United States Department of Education. |
+| `https://schemas.contoso.edu/diploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by Contoso University. |
+
+By declaring all three types, Contoso University's diplomas can be used to satisfy different requests from verifiers. A bank can request a set of `EducationalCredential`s from a user, and the diploma can be used to satisfy the request. But the Contoso University Alumni Association can request a credential of type `https://schemas.contoso.edu/diploma2020`, and the diploma will also satisfy the request.
+
+To ensure interoperability of your credentials, it's recommended that you work closely with related organizations to define credential types, schemas, and URIs for use in your industry. Many industry bodies provide guidance on the structure of official documents that can be repurposed for defining the contents of verifiable credentials. You should also work closely with the verifiers of your credentials to understand how they intend to request and consume your verifiable credentials.
+
+## Input Type: Verifiable Credential
+
+>[!NOTE]
+>Rules files that ask for a verifiable credential do not use the presentation exchange format for requesting credentials. This will be updated when the Issuing Service supports the standard, Credential Manifest.
+
+```json
+{
+  "attestations": {
+    "presentations": [
+      {
+        "mapping": {
+          "first_name": {
+            "claim": "$.vc.credentialSubject.firstName"
+          },
+          "last_name": {
+            "claim": "$.vc.credentialSubject.lastName",
+            "indexed": true
+          }
+        },
+        "credentialType": "VerifiedCredentialNinja",
+        "contracts": [
+          "https://beta.did.msidentity.com/v1.0/3c32ed40-8a10-465b-8ba4-0b1e86882668/verifiableCredential/contracts/VerifiedCredentialNinja"
+        ],
+        "issuers": [
+          {
+            "iss": "did:ion:123"
+          }
+        ]
+      }
+    ]
+  },
+  "validityInterval": 25920000,
+  "vc": {
+    "type": [
+      "ProofOfNinjaNinja"
+    ]
+  }
+}
+```
+
+| Property | Description |
+| -- | -- |
+| `attestations.presentations` | An array of verifiable credentials being requested as inputs. |
+| `...mapping` | An object that describes how claims in each presented Verifiable Credential are mapped to attributes in the resulting Verifiable Credential. |
+| `...mapping.{attribute-name}` | The attribute that should be populated in the resulting verifiable credential. |
+| `...mapping.{attribute-name}.claim` | The claim in the Verifiable Credential whose value should be used to populate the attribute. |
+| `...mapping.{attribute-name}.indexed` | Indicates whether the claim's value should be indexed for revocation. Only one claim per verifiable credential can be indexed. See the [article on how to revoke a credential](how-to-issuer-revoke.md) for more information. |
+| `credentialType` | The credentialType of the Verifiable Credential you are asking the user to present. |
+| `contracts` | The URI of the contract in the Verifiable Credential Service portal. |
+| `issuers.iss` | The issuer DID for the Verifiable Credential being asked of the user. |
+| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials. After this time period elapses, the verifiable credential will no longer be valid. Omitting this value means that each verifiable credential remains valid until it is explicitly revoked. |
+| `vc.type` | An array of strings indicating the schema(s) that your verifiable credential satisfies. |
+## Input Type: Self-Attested Claims
+
+During the issuance flow, the user can be asked to input some self-attested information. At this time, 'string' is the only supported input type.
+
+```json
+{
+  "attestations": {
+    "selfIssued": {
+      "mapping": {
+        "alias": {
+          "claim": "name"
+        }
+      }
+    }
+  },
+  "validityInterval": 25920000,
+  "vc": {
+    "type": [
+      "ProofOfNinjaNinja"
+    ]
+  }
+}
+```
+| Property | Description |
+| -- | -- |
+| `attestations.selfIssued` | An object that describes the self-issued claims requiring input from the user. |
+| `...mapping` | An object that describes how self-issued claims are mapped to attributes in the resulting Verifiable Credential. |
+| `...mapping.alias` | The attribute that should be populated in the resulting Verifiable Credential. |
+| `...mapping.alias.claim` | The claim the user is asked to input, whose value is used to populate the attribute. |
+| `validityInterval` | A time duration, in seconds, representing the lifetime of your verifiable credentials. After this time period elapses, the verifiable credential will no longer be valid. Omitting this value means that each verifiable credential remains valid until it is explicitly revoked. |
+| `vc.type` | An array of strings indicating the schema(s) that your Verifiable Credential satisfies. |
+## Display File: verifiable credentials in Microsoft Authenticator
+
+Verifiable credentials offer a limited set of options that can be used to reflect your brand. This article provides instructions on how to customize your credentials and best practices for designing credentials that look great after they're issued to users.
+
+Verifiable credentials issued to users are displayed as cards in Microsoft Authenticator. As the administrator, you may choose card color, icon, and text strings to match your organization's brand.
+
+![detailed view of a verifiable credential card](media/credential-design/detailed-view.png)
+
+Cards also contain customizable fields that you can use to let users know the purpose of the card, the attributes it contains, and more.
+
+## Create a credential display file
+
+Much like the rules file, the display file is a simple JSON file that describes how the contents of your verifiable credentials should be displayed in the Microsoft Authenticator app.
+
+>[!NOTE]
+> At this time, this display model is only used by Microsoft Authenticator.
+
+The display file has the following structure.
+
+```json
+{
+ "default": {
+ "locale": "en-US",
+ "card": {
+ "title": "University Graduate",
+ "issuedBy": "Contoso University",
+ "backgroundColor": "#212121",
+ "textColor": "#FFFFFF",
+ "logo": {
+ "uri": "https://contoso.edu/images/logo.png",
+ "description": "Contoso University Logo"
+ },
+ "description": "This digital diploma is issued to students and alumni of Contoso University."
+ },
+ "consent": {
+ "title": "Do you want to get your digital diploma from Contoso U?",
+ "instructions": "Please log in with your Contoso U account to receive your digital diploma."
+ },
+ "claims": {
+ "vc.credentialSubject.name": {
+ "type": "String",
+ "label": "Name"
+ }
+ }
+ }
+}
+```
+
+| Property | Description |
+| -- | -- |
+| `locale` | The language of the Verifiable Credential. Reserved for future use. |
+| `card.title` | Displays the type of credential to the user. Recommended maximum length of 25 characters. |
+| `card.issuedBy` | Displays the name of the issuing organization to the user. Recommended maximum length of 40 characters. |
+| `card.backgroundColor` | Determines the background color of the card, in hex format. A subtle gradient will be applied to all cards. |
+| `card.textColor` | Determines the text color of the card, in hex format. Recommended to use black or white. |
+| `card.logo` | A logo that is displayed on the card. The URL provided must be publicly addressable. Recommended maximum height of 36 px, and maximum width of 100 px regardless of phone size. Recommend PNG with transparent background. |
+| `card.description` | Supplemental text displayed alongside each card. Can be used for any purpose. Recommended maximum length of 100 characters. |
+| `consent.title` | Supplemental text displayed when a card is being issued. Used to provide details about the issuance process. Recommended length of 100 characters. |
+| `consent.instructions` | Supplemental text displayed when a card is being issued. Used to provide details about the issuance process. Recommended length of 100 characters. |
+| `claims` | Allows you to provide labels for attributes included in each credential. |
+| `claims.{attribute}` | Indicates the attribute of the credential to which the label applies. |
+| `claims.{attribute}.type` | Indicates the attribute type. Currently we only support 'String'. |
+| `claims.{attribute}.label` | The value that should be used as a label for the attribute, which shows up in Authenticator. This may be different from the label that was used in the rules file. Recommended maximum length of 40 characters. |
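+
+For example, to label a second attribute on the card, you can extend the claims section of the display file as in the following sketch. The graduationYear attribute is a hypothetical claim used only for illustration:
+
+```json
+"claims": {
+  "vc.credentialSubject.name": {
+    "type": "String",
+    "label": "Name"
+  },
+  "vc.credentialSubject.graduationYear": {
+    "type": "String",
+    "label": "Graduation year"
+  }
+}
+```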
+
+>[!NOTE]
+>If a claim is included in the rules file but omitted from the display file, the experience differs by platform: on iOS, the claim is not displayed in the details section shown in the above image, while on Android the claim is shown.
+
+## Next steps
+
+Now you have a better understanding of verifiable credential design and how you can create your own to meet your needs.
+
+- [Issuer service communication examples](issuer-openid.md)
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
+
+ Title: Introduction to Azure Active Directory Verifiable Credentials (preview)
+description: An overview of Azure Active Directory Verifiable Credentials.
+Last updated: 04/01/2021
+# Introduction to Azure Active Directory Verifiable Credentials (Preview)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable.
+
+But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
+
+## Why we need Decentralized Identity
+
+Today we use our digital identity at work, at home, and across every app, service, and device we use. It's made up of everything we say, do, and experience in our lives: purchasing tickets for an event, checking into a hotel, or even ordering lunch. Currently, our identity and all our digital interactions are owned and controlled by other parties, some of whom we aren't even aware of.
+
+Generally, users grant consent to several apps and devices. This approach requires a high degree of vigilance on the user's part to track who has access to what information. On the enterprise front, collaboration with consumers and partners requires high-touch orchestration to securely exchange data in a way that maintains privacy and security for all involved.
+
+We believe a standards-based Decentralized Identity system can unlock a new set of experiences that give users and organizations greater control over their data, and deliver a higher degree of trust and security for apps, devices, and service providers.
+
+## Lead with open standards
+
+We're committed to working closely with customers, partners, and the community to unlock the next generation of Decentralized Identity-based experiences, and we're excited to partner with the individuals and organizations that are making incredible contributions in this space. If the DID ecosystem is to grow, standards, technical components, and code deliverables must be open source and accessible to all.
+
+Microsoft is actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. We've worked with these groups to identify and develop critical standards, and the following standards have been implemented in our services.
+
+* [W3C Decentralized Identifiers](https://www.w3.org/TR/did-core/)
+* [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/)
+* [DIF Sidetree](https://identity.foundation/sidetree/spec/)
+* [DIF Well Known DID Configuration](https://identity.foundation/specs/did-configuration/)
+* [DIF DID-SIOP](https://identity.foundation/did-siop/)
+* [DIF Presentation Exchange](https://identity.foundation/presentation-exchange/)
+## What are DIDs
+
+Before we can understand DIDs, it helps to compare them with current identity systems. Email addresses and social network IDs are human-friendly aliases for collaboration but are now overloaded to serve as the control points for data access across many scenarios beyond collaboration. This creates a potential problem, because access to these IDs can be removed at any time by external parties.
+
+Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system that is intended to provide self-ownership and user control.
+
+Microsoft's verifiable credential solution uses decentralized identifiers (DIDs) to cryptographically sign proof, presented to a relying party (verifier), that the presenter is the owner of a verifiable credential. Therefore, a basic understanding of decentralized identifiers is recommended for anyone creating a verifiable credential solution based on the Microsoft offering.
+
+## What are Verifiable Credentials
+
+We use IDs in our daily lives. We have driver's licenses that we use as evidence of our ability to operate a car. Universities issue diplomas that prove we attained a level of education. We use passports to prove who we are to authorities as we arrive in other countries. The data model describes how we can handle these types of scenarios over the internet, in a secure manner that respects users' privacy. You can get additional information in the [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/vc-data-model/).
+
+In short, verifiable credentials are data objects consisting of claims made by the issuer attesting information about a subject. These claims are identified by schema and include the DIDs of the issuer and subject. The issuer signs the credential with its DID's private key as proof that it attests to this information.
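+
+As a sketch, a minimal credential following the [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/vc-data-model/) might look like the following JSON. The type, DIDs, dates, and claim values are illustrative placeholders, not output of the Azure AD service:
+
+```json
+{
+  "@context": [ "https://www.w3.org/2018/credentials/v1" ],
+  "type": [ "VerifiableCredential", "ProofOfEmploymentCredential" ],
+  "issuer": "did:ion:<issuer-did>",
+  "issuanceDate": "2021-04-01T00:00:00Z",
+  "credentialSubject": {
+    "id": "did:ion:<subject-did>",
+    "displayName": "Alice"
+  },
+  "proof": {
+    "type": "<signature-suite>",
+    "verificationMethod": "did:ion:<issuer-did>#<key-id>",
+    "jws": "<signature-value>"
+  }
+}
+```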
+## How does Decentralized Identity work?
+
+We need a new form of identity. We need an identity that brings together technologies and standards to deliver key identity attributes like self-ownership and censorship resistance. These capabilities are difficult to achieve using existing systems.
+
+To deliver on these promises, we need a technical foundation made up of several key innovations, including identifiers that are owned by the user, a user agent to manage the keys associated with those identifiers, and encrypted, user-controlled datastores.
+
+![overview of Microsoft's verifiable credential environment](media/decentralized-identifier-overview/microsoft-did-system.png)
+
+**1. W3C Decentralized Identifiers (DIDs)**
+IDs users create, own, and control independently of any organization or government. DIDs are globally unique identifiers linked to Decentralized Public Key Infrastructure (DPKI) metadata composed of JSON documents that contain public key material, authentication descriptors, and service endpoints.
+
+**2. Decentralized system: ION (Identity Overlay Network)**
+ION is an open, permissionless Layer 2 network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. We have [open sourced an npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. The libraries include support for creating a new DID, generating keys, and anchoring your DID on the Bitcoin blockchain.
+
+**3. DID User Agent/Wallet: Microsoft Authenticator App**
+Enables real people to use decentralized identities and Verifiable Credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials and manages the backup of your DID's seed through an encrypted wallet file.
+
+**4. Microsoft Resolver**
+An API that connects to our ION node to look up and resolve DIDs using the ```did:ion``` method and return the DID Document Object (DDO). The DDO includes DPKI metadata associated with the DID such as public keys and service endpoints.
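+
+A resolved DID Document is itself a JSON object defined by the [W3C Decentralized Identifiers](https://www.w3.org/TR/did-core/) specification. The following is a minimal sketch; the identifier, key material, and endpoint values are illustrative placeholders:
+
+```json
+{
+  "@context": "https://www.w3.org/ns/did/v1",
+  "id": "did:ion:<identifier>",
+  "verificationMethod": [
+    {
+      "id": "#<key-id>",
+      "type": "EcdsaSecp256k1VerificationKey2019",
+      "controller": "did:ion:<identifier>",
+      "publicKeyJwk": { "kty": "EC", "crv": "secp256k1", "x": "<x>", "y": "<y>" }
+    }
+  ],
+  "service": [
+    {
+      "id": "#<service-id>",
+      "type": "LinkedDomains",
+      "serviceEndpoint": "https://www.contoso.com"
+    }
+  ]
+}
+```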
+
+**5. Azure Active Directory Verified Credentials Service**
+An issuance and verification API and open-source SDK for [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) that are signed with the ```did:ion``` method. They enable identity owners to generate, present, and verify claims. This forms the basis of trust between users of the systems.
+
+## A sample scenario
+
+The scenario we use to explain how VCs work involves:
+
+- Woodgrove, Inc., a company.
+- Proseware, a company that offers Woodgrove employees discounts.
+- Alice, an employee at Woodgrove, Inc. who wants to get a discount from Proseware.
+
+Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove she is an employee of Woodgrove. Proseware uses a verifiable credential solution compatible with Woodgrove's, and they accept credentials issued by Woodgrove as proof of employment.
+
+The issuer of the credential, Woodgrove Inc., creates a public key and a private key. The public key is stored on ION. When the key is added to the infrastructure, the entry is recorded in a blockchain-based decentralized ledger. The issuer provides Alice the private key that is stored in a wallet application. Each time Alice successfully uses the private key the transaction is logged in the wallet application.
+
+![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
+
+## Roles in a verifiable credential solution
+
+There are three primary actors in the verifiable credential solution. In the following diagram:
+
+- **Step 1**, the **user** requests a verifiable credential from an issuer.
+- **Step 2**, the **issuer** of the credential attests that the proof the user provided is accurate and creates a verifiable credential signed with their DID, with the user's DID as the subject.
+- **Step 3**, the user signs a verifiable presentation (VP) with their DID and sends it to the **verifier**. The verifier then validates the credential by matching it with the public key placed in the DPKI.
+
+The roles in this scenario are:
+
+![roles in a verifiable credential environment](media/decentralized-identifier-overview/issuer-user-verifier.png)
+
+**issuer** - The issuer is an organization that creates an issuance solution requesting information from a user. The information is used to verify the user's identity. For example, Woodgrove, Inc. has an issuance solution that enables them to create and distribute verifiable credentials (VCs) to all their employees. The employee uses the Authenticator app to sign in with their username and password, which passes an ID token to the issuing service. Once Woodgrove, Inc. validates the ID token submitted, the issuance solution creates a VC that includes claims about the employee and is signed with Woodgrove, Inc.'s DID. The employee now has a verifiable credential that is signed by their employer, which includes the employee's DID as the subject DID.
+
+**user** - The user is the person or entity that is requesting a VC. For example, Alice is a new employee of Woodgrove, Inc. and was previously issued her proof of employment verifiable credential. When Alice needs to provide proof of employment in order to get a discount at Proseware, she can grant access to the credential in her Authenticator app by signing a verifiable presentation that proves she is the owner of the DID. Proseware is able to validate that the credential was issued by Woodgrove, Inc. and that Alice is the owner of the credential.
+
+**verifier** - The verifier is a company or entity who needs to verify claims from one or more issuers they trust. For example, Proseware trusts that Woodgrove, Inc. does an adequate job of verifying their employees' identity and issuing authentic and valid VCs. When Alice tries to order the equipment she needs for her job, Proseware uses open standards such as SIOP and Presentation Exchange to request credentials from the user proving they are an employee of Woodgrove, Inc. For example, Proseware might provide Alice a link to a website with a QR code she scans with her phone camera. This initiates the request for a specific VC, which Authenticator analyzes and gives Alice the ability to approve the request to prove her employment to Proseware. Proseware can use the verifiable credentials service API or SDK to verify the authenticity of the verifiable presentation. Based on the information provided by Alice, they give Alice the discount. If other companies and organizations know that Woodgrove, Inc. issues VCs to their employees, they can also create a verifier solution and use the Woodgrove, Inc. verifiable credential to provide special offers reserved for Woodgrove, Inc. employees.
+
+## Next steps
+
+Now that you know about DIDs and verifiable credentials, try them yourself by following our get started article or one of our articles providing more detail on verifiable credential concepts.
+
+- [Get started with verifiable credentials](get-started-verifiable-credentials.md)
+- [How to customize your credentials](credential-design.md)
+- [Verifiable credentials FAQ](verifiable-credentials-faq.md)
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
+
+ Title: "Tutorial: Configure your Azure Active Directory to issue verifiable credentials (Preview)"
+description: In this tutorial, you build the environment needed to deploy verifiable credentials in your tenant
+Last updated: 03/31/2021
+# Customer intent: As an administrator, I want the high-level steps that I should follow so that I can quickly start using verifiable credentials in my own Azure AD
+# Tutorial: Configure your Azure Active Directory to issue verifiable credentials (Preview)
+
+In this tutorial, we build on the work done as part of the [get started](get-started-verifiable-credentials.md) article and set up your Azure Active Directory (Azure AD) with its own [decentralized identifier](https://www.microsoft.com/security/business/identity-access-management/decentralized-identity-blockchain?rtc=1#:~:text=Decentralized%20identity%20is%20a%20trust,protect%20privacy%20and%20secure%20transactions.) (DID). We use the decentralized identifier to issue a verifiable credential using the sample app and your issuer; however, in this tutorial, we still use the sample Azure B2C tenant for authentication. In our next tutorial, we will take additional steps to get the app configured to work with your Azure AD.
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article:
+
+> [!div class="checklist"]
+> * Create the necessary services to onboard your Azure AD for verifiable credentials
+> * Create your DID
+> * Customize the rules and display files
+> * Configure verifiable credentials in Azure AD
+## Prerequisites
+
+Before you can successfully complete this tutorial, you must first:
+
+- Complete the [Get started](get-started-verifiable-credentials.md) article.
+- Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure AD with a P2 [license](https://azure.microsoft.com/pricing/details/active-directory/). Follow [How to create a free developer account](how-to-create-a-free-developer-account.md) if you do not have one.
+- An instance of [Azure Key Vault](../../key-vault/general/overview.md) where you have rights to create keys and secrets.
+
+## Azure Active Directory
+
+Before we can start, we need an Azure AD tenant. When your tenant is enabled for verifiable credentials, it is assigned a decentralized identifier (DID) and it is enabled with an issuer service for issuing verifiable credentials. Any verifiable credential you issue is issued by your tenant and its DID. The DID is also used when verifying verifiable credentials.
+If you just created a test Azure subscription, your tenant does not need to be populated with user accounts, but you will need at least one test user account to complete later tutorials.
+
+## Create a Key Vault
+
+When working with verifiable credentials, you have complete control and management of the cryptographic keys your tenant uses to digitally sign verifiable credentials. To issue and verify credentials, you must provide Azure AD with access to your own instance of Azure Key Vault.
+
+1. From the Azure portal menu, or from the **Home** page, select **Create a resource**.
+2. In the Search box, enter **key vault**.
+3. From the results list, choose **Key Vault**.
+4. On the Key Vault section, choose **Create**.
+5. On the **Create key vault** section provide the following information:
+ - **Subscription**: Choose a subscription.
+ - Under **Resource Group**, choose **Create new** and enter a resource group name such as **vc-resource-group**. We are using the same resource group name across multiple articles.
+ - **Name**: A unique name is required. We use **woodgrove-vc-kv**, so replace this value with your own unique name.
+ - In the **Location** pull-down menu, choose a location.
+ - Leave the other options to their defaults.
+6. After providing the information above, select **Access Policy**
+
+ ![create a key vault page](media/enable-your-tenant-verifiable-credentials/create-key-vault.png)
+
+7. In the Access Policy screen, choose **Add Access Policy**
+
+ >[!NOTE]
+ > By default, the account that creates the key vault is the only one with access. The verifiable credentials service needs access to the key vault. The key vault must have an access policy that allows the admin to **create** keys, **delete** keys if you opt out of the service, and **sign** to create the domain binding for the verifiable credential. If you are testing with the same account that created the vault, modify the default policy to grant the account the **sign** permission in addition to the default permissions granted to vault creators.
+
+8. For the user admin, make sure the key permissions section has **Create**, **Delete**, and **Sign** enabled. By default, **Create** and **Delete** are already enabled; **Sign** should be the only key permission you need to add.
+
+ ![Key Vault permissions](media/enable-your-tenant-verifiable-credentials/keyvault-access.png)
+
+9. Select **Review + create**.
+10. Select **Create**.
+11. Go to the vault and take note of the vault name and URI.
+
+Take note of the two properties listed below:
+
+- **Vault Name**: In the example, the vault name is **woodgrove-vc-kv**. You use this name in other steps.
+- **Vault URI**: In the example, this value is `https://woodgrove-vc-kv.vault.azure.net/`. Applications that use your vault through its REST API must use this URI.
+
+>[!NOTE]
+> Each key vault transaction results in additional Azure subscription costs. Review the [Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/) for more details.
+
+>[!IMPORTANT]
+> During the Azure Active Directory Verifiable Credentials preview, keys and secrets created in your vault should not be modified once created. Deleting, disabling, or updating your keys and secrets invalidates any issued credentials. Do not modify your keys or secrets during the preview.
+
+## Create modified rules and display files
+
+In this section, we use the rules and display files from the Sample issuer app and modify them slightly to create your tenant's first verifiable credential.
+
+1. Copy both the rules and display JSON files to a temporary folder and rename them **MyFirstVC-display.json** and **MyFirstVC-rules.json** respectively. You can find both files under **issuer\issuer_config**.
+
+ ![display and rules files as part of the sample app directory](media/enable-your-tenant-verifiable-credentials/sample-app-rules-display.png)
+
+ ![display and rules files in a temp folder](media/enable-your-tenant-verifiable-credentials/display-rules-files-temp.png)
+
+2. Open up the MyFirstVC-rules.json file in your code editor.
+
+ ```json
+ {
+ "attestations": {
+ "idTokens": [
+ {
+ "mapping": {
+ "firstName": { "claim": "given_name" },
+ "lastName": { "claim": "family_name" }
+ },
+ "configuration": "https://didplayground.b2clogin.com/didplayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
+ "client_id": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90",
+ "redirect_uri": "vcclient://openid/"
+ }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": ["VerifiedCredentialExpert"]
+ }
+ }
+
+ ```
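
The `validityInterval` in the rules file is expressed in seconds; the sample's `2592000` works out to a 30-day credential lifetime, as this small sketch confirms:

```javascript
// validityInterval is specified in seconds; 30 days = 2592000 seconds.
const days = 30;
const validityInterval = days * 24 * 60 * 60;
console.log(validityInterval); // 2592000
```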
+
+Now let's change the type field to "MyFirstVC".
+
+ ```json
+ "type": ["MyFirstVC"]
+
+ ```
+
+Save this change.
+
+ >[!NOTE]
+ > We are not changing the **"configuration"** or the **"client_id"** values at this point in the tutorial. We still use the Microsoft B2C tenant we used in [Get started](get-started-verifiable-credentials.md). We will use your Azure AD tenant in the next tutorial.
+
+3. Open up the MyFirstVC-display.json file in your code editor.
+
+ ```json
+ {
+ "default": {
+ "locale": "en-US",
+ "card": {
+ "title": "Verified Credential Expert",
+ "issuedBy": "Microsoft",
+ "backgroundColor": "#000000",
+ "textColor": "#ffffff",
+ "logo": {
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
+ "description": "Verified Credential Expert Logo"
+ },
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
+ },
+ "consent": {
+ "title": "Do you want to get your Verified Credential?",
+ "instructions": "Sign in with your account to get your card."
+ },
+ "claims": {
+ "vc.credentialSubject.firstName": {
+ "type": "String",
+ "label": "First name"
+ },
+ "vc.credentialSubject.lastName": {
+ "type": "String",
+ "label": "Last name"
+ }
+ }
+ }
+ }
+ ```
+
+Let's make a few modifications so this verifiable credential looks visibly different from the sample code's version.
+
+```json
+ "card": {
+ "title": "My First VC",
+ "issuedBy": "Your Issuer Name",
+ "backgroundColor": "#ffffff",
+ "textColor": "#000000",
+```
+
+Save these changes.
+
+## Create a storage account
+
+Before creating our first verifiable credential, we need to create a Blob Storage container that can hold our rules and display files.
+
+1. Create a storage account using the options shown below. For detailed steps review the [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) article.
+
+ - **Subscription:** Choose the subscription that we are using for these tutorials.
+ - **Resource group:** Choose the same resource group we used in earlier tutorials (**vc-resource-group**).
+ - **Name:** A unique name.
+ - **Location:** (US) EAST US.
+ - **Performance:** Standard.
+ - **Account kind:** Storage V2.
+ - **Replication:** Locally redundant.
+
+ ![Create a new storage account](media/enable-your-tenant-verifiable-credentials/create-storage-account.png)
+
+2. After creating the storage account, we need to create a container. Select **Containers** under **Blob Storage** and create a container using the values provided below:
+
+ - **Name:** vc-container
+ - **Public access level:** Private (no anonymous access)
+
+ ![Create a container](media/enable-your-tenant-verifiable-credentials/new-container.png)
+
+3. Now select your new container and upload both the new rules and display files **MyFirstVC-display.json** and **MyFirstVC-rules.json** we created earlier.
+
+ ![upload rules file](media/enable-your-tenant-verifiable-credentials/blob-storage-upload-rules-display-files.png)
+
+## Assign blob role
+
+Before creating the credential, we need to first give the signed-in user the correct role assignment so they can access the files in Blob Storage.
+
+1. Navigate to **Storage** > **Container**.
+2. Choose **Access Control (IAM)** from the menu on the left.
+3. Choose **Role Assignments**.
+4. Select **Add**.
+5. In the **Role** section, choose **Storage Blob Data Reader**.
+6. Under **Assign access to** choose **User, group, or service principal**.
+7. In **Select**: Choose the account that you are using to perform these steps.
+8. Select **Save** to complete the role assignment.
+
+ ![Add a role assignment](media/enable-your-tenant-verifiable-credentials/role-assignment.png)
+
+ >[!IMPORTANT]
+ >By default, container creators get the **Owner** role assigned. The **Owner** role is not enough on its own. Your account needs the **Storage Blob Data Reader** role. For more information review [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/common/storage-auth-aad-rbac-portal.md)
+
+## Set up verifiable credentials (Preview)
+
+Now we need to take the last step to set up your tenant for verifiable credentials.
+
+1. From the Azure portal, search for **verifiable credentials**.
+2. Choose **Verifiable Credentials (Preview)**.
+3. Choose **Get started**
+4. We need to set up your organization and provide the organization name, domain, and key vault. Let's look at each one of the values.
+
+ - **Organization name**: This name is how you reference your business within the Verifiable Credentials service. This value is not customer facing.
+ - **Domain:** The domain entered is added to a service endpoint in your DID document. [Microsoft Authenticator](../user-help/user-help-auth-app-download-install.md) and other wallets use this information to validate that your DID is [linked to your domain](how-to-dnsbind.md). If the wallet can verify the DID, it displays a verified symbol. If the wallet is unable to verify the DID, it informs the user that the credential was issued by an organization it could not validate. The domain is what binds your DID to something tangible that the user may know about your business.
+ - **Key vault:** Provide the name of the Key Vault that we created earlier.
+
+ >[!IMPORTANT]
+ > The domain cannot be a redirect; otherwise, the DID and domain cannot be linked. Make sure to use the format `https://www.domain.com`.
+
+5. Choose **Save and create credential**
+
+ ![set up your organizational identity](media/enable-your-tenant-verifiable-credentials/save-create.png)
+
+Congratulations, your tenant is now enabled for the Verifiable Credential preview!
+
+## Create your VC in the Portal
+
+The previous step leaves you on the **Create a new credential** page.
+
+ ![verifiable credentials get started](media/enable-your-tenant-verifiable-credentials/create-credential-after-enable-did.png)
+
+1. Under Credential Name, add the name **MyFirstVC**. This name is used in the portal to identify your verifiable credential, and it is included as part of the verifiable credential contract.
+2. In the Display file section, choose **Configure display file**
+3. In the **Storage accounts** section, select **woodgrovevcstorage**.
+4. From the list of available containers choose **vc-container**.
+5. Choose the **MyFirstVC-display.json** file we created earlier.
+6. From the **Create a new credential** page, in the **Rules file** section, choose **Configure rules file**.
+7. In the **Storage accounts** section, select **woodgrovevcstorage**.
+8. Choose **vc-container**.
+9. Select **MyFirstVC-rules.json**
+10. From the **Create a new credential** screen choose **Create**.
+
+ ![create a new credential](media/enable-your-tenant-verifiable-credentials/create-my-first-vc.png)
+
+### Credential URL
+
+Now that you have a new credential, copy the credential URL.
+
+ ![The issue credential URL](media/enable-your-tenant-verifiable-credentials/issue-credential-url.png)
+
+>[!NOTE]
+>The credential URL is the combination of the rules and display files. It is the URL that Authenticator evaluates before displaying verifiable credential issuance requirements to the user.
+
+## Update the sample app
+
+Now we make modifications to the sample app's issuer code to update it with your verifiable credential URL. This allows you to issue verifiable credentials using your own tenant.
+
+1. Go to the folder where you placed the sample app's files.
+2. Open the issuer folder and then open app.js in Visual Studio Code.
+3. Update the `credential` constant with your new credential URL, set the `credentialType` constant to `'MyFirstVC'`, and save the file.
+
+ ![image of visual studio code showing the relevant areas highlighted](media/enable-your-tenant-verifiable-credentials/sample-app-vscode.png)
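
    As a sketch, the edited constants in `issuer/app.js` look roughly like the following; the credential URL below is a placeholder you replace with the value copied from the portal:

    ```javascript
    // issuer/app.js (excerpt) -- constant names come from the sample app.
    const credentialType = 'MyFirstVC';
    // Placeholder: paste the credential URL you copied from the portal.
    const credential = '<your-credential-url>';
    console.log(`Ready to issue ${credentialType}`);
    ```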
+
+4. Open a command prompt and navigate to the issuer folder.
+5. Run the updated node app.
+
+ ```terminal
+ node app.js
+ ```
+
+6. Using a different command prompt, run ngrok to set up a URL on port 8081.
+
+ ```terminal
+ ngrok http 8081
+ ```
+
+ >[!IMPORTANT]
+ > You may also notice a warning that this app or website may be risky. The message is expected at this time because we have not yet linked your DID to your domain. Follow the [DNS binding](how-to-dnsbind.md) instructions to configure this.
+
+
+7. Open the HTTPS URL generated by ngrok.
+
+ ![NGROK forwarding endpoints](media/enable-your-tenant-verifiable-credentials/ngrok-url-screen.png)
+
+8. Choose **GET CREDENTIAL**
+9. In Authenticator scan the QR code.
+10. At the **This app or website may be risky** warning message, choose **Advanced**.
+
+ ![Initial warning](media/enable-your-tenant-verifiable-credentials/site-warning.png)
+
+11. At the risky website warning, choose **Proceed anyways (unsafe)**.
+
+ ![Second warning about the issuer](media/enable-your-tenant-verifiable-credentials/site-warning-proceed.png)
+
+12. At the **Add a credential** screen notice a few things:
+ 1. At the top of the screen you can see a red **Not verified** message
+ 1. The credential is customized based on the changes we made to the display file.
+ 1. The **Sign in to your account** option is pointing to **didplayground.b2clogin.com**.
+
+ ![add credential screen with warning](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified.png)
+
+13. Choose **Sign in to your account** and authenticate using the credential information you provided in the [get started tutorial](get-started-verifiable-credentials.md).
+14. After successfully authenticating, the **Add** button is no longer greyed out. Choose **Add**.
+
+ ![add credential screen after authenticating](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified-authenticated.png)
+
+We have now issued a verifiable credential using our tenant to generate the VC while still using the Microsoft B2C tenant for authentication.
+
+ ![vc issued by your azure AD and authenticated by our Azure B2C instance](media/enable-your-tenant-verifiable-credentials/my-vc-b2c.png)
+
+## Test verifying the VC using the sample app
+
+Now that we've issued the verifiable credential from our own tenant, let's verify it using our sample app.
+
+>[!IMPORTANT]
+> When testing, use the same email and password that you used during the [get started](get-started-verifiable-credentials.md) article. While your tenant is issuing the VC, the input is coming from an ID token issued by the Microsoft B2C tenant. In tutorial two, we switch token issuance to your tenant.
+
+1. Open up **Settings** in the verifiable credentials blade in the Azure portal. Copy the decentralized identifier (DID).
+
+ ![copy the DID](media/enable-your-tenant-verifiable-credentials/issuer-identifier.png)
+
+2. Now open the verifier folder that is part of the sample app files. We need to update the app.js file in the verifier sample code and make the following changes:
+
+ - **credential**: change to your credential URL
+ - **credentialType**: 'MyFirstVC'
+ - **issuerDid**: Copy this value from Azure portal>Verifiable credentials (Preview)>Settings>Decentralized identifier (DID)
+
+ ![update the constant issuerDid to match your issuer identifier](media/enable-your-tenant-verifiable-credentials/editing-verifier.png)
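
    The corresponding edits in `verifier/app.js` look roughly like this; both the credential URL and the DID below are placeholders for the values you copy from the portal:

    ```javascript
    // verifier/app.js (excerpt) -- constant names come from the sample app.
    const credential = '<your-credential-url>';
    const credentialType = 'MyFirstVC';
    // Placeholder: paste the DID copied from the Verifiable Credentials settings.
    const issuerDid = 'did:ion:<your-issuer-did>';
    console.log(issuerDid.startsWith('did:')); // true
    ```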
+
+3. Stop running your issuer ngrok service by pressing **Ctrl+C** in its terminal window.
+
+4. Now run ngrok with the verifier port 8082.
+
+ ```cmd
+ ngrok http 8082
+ ```
+
+5. In another terminal window, navigate to the verifier app and run it similarly to how we ran the issuer app.
+
+ ```cmd
+ cd ..
+ cd verifier
+ node app.js
+ ```
+
+6. Open the ngrok url in your browser and scan the QR code using Authenticator in your mobile device.
+7. On your mobile device, choose **Allow** at the **New permission request** screen.
+
+ >[!NOTE]
+ > The DID signing this VC still belongs to the Microsoft B2C tenant, and the verifier DID is still from the Microsoft sample app tenant. Because Microsoft's DID has been linked to a domain we own, you do not see the warning we experienced during the issuance flow. This will be updated in the next section.
+
+ ![new permission request](media/enable-your-tenant-verifiable-credentials/new-permission-request.png)
+
+8. You have now successfully verified your credential.
+
+## Next steps
+
+Now that you have the sample code issuing a VC from your issuer, let's continue to the next section, where you use your own identity provider to authenticate users trying to get a verifiable credential and use your DID to sign presentation requests.
+
+> [!div class="nextstepaction"]
+> [Tutorial - Issue and verify verifiable credentials using your tenant](issue-verify-verifiable-credentials-your-tenant.md)
+
active-directory Get Started Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/get-started-verifiable-credentials.md
+
+ Title: "Tutorial: Get started with verifiable credentials using a sample app (preview)"
+description: In this tutorial, you learn how to issue verifiable credentials using our sample app and test tenant
+ Last updated : 03/31/2021
+# Customer intent: As an enterprise we want to enable customers to manage information about themselves using verifiable credentials
+
+# Tutorial: Get started with verifiable credentials using a sample app (preview)
+
+In this tutorial, we go over the steps needed to issue your first verifiable credential: a Verified Credential expert card. You can then use this card to prove to a verifier that you are a verified credential expert, mastered in the art of digital credentialing. Get started with Azure Active Directory Verifiable Credentials by using the Verifiable Credentials sample app to issue your first verifiable credential.
+
+![This is an image of an example card](media/get-started-verifiable-credentials/verifiedcredentialexpert-card.png)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- [NodeJS](https://nodejs.org/en/download/) version 10.14 or higher installed on your test system.
+- [Git](https://git-scm.com/downloads) installed, if you want to clone the repository that hosts the sample app.
+- [Visual Studio Code](https://code.visualstudio.com/Download)
+- A system to host our sample site.
+- A mobile device with Microsoft Authenticator version 6.2005.3599 or higher installed.
+- The free version of [ngrok](https://ngrok.com/).
+
+## Download the sample code
+
+To issue yourself a Verified Credential Expert Card, you need to run a website on your local machine. The website is used to initiate a verifiable credential issuance process. We've provided a simple website, written in NodeJS, that we use throughout this tutorial.
+
+First, download our sample code from GitHub [here](https://github.com/Azure-Samples/active-directory-verifiable-credentials), or clone the repository to your local machine:
+
+```terminal
+git clone https://github.com/Azure-Samples/active-directory-verifiable-credentials.git
+```
+
+You may want to familiarize yourself with the code in the sample websites. The `issuer` folder contains all code used to issue a verifiable credential. More details are available in the sample's [readme](https://github.com/Azure-Samples/active-directory-verifiable-credentials).
+
+## Run the issuer website
+
+You can run the steps from within Visual Studio Code or any command line available in your operating system.
+
+1. Navigate to the `issuer` folder.
+
+ ```terminal
+ cd issuer
+ ```
+
+2. Once there, install all required packages and start the site.
+
+ ```terminal
+ npm install
+ node app.js
+ ```
+
+3. In the terminal, you will now see that your issuer app is listening on port 8081. Now let's set up a reverse proxy with Ngrok so Authenticator can communicate with your app.
+
+## Creating a reverse proxy with Ngrok
+
+When you run the sample website, your device needs to communicate with the Node server running on your local machine. We recommend using [ngrok](https://ngrok.com/) as an easy way to make your local development server available over the internet.
+
+1. After you download and extract **ngrok**, we need to run:
+
+ ```terminal
+ ngrok http 8081
+ ```
+
+By default the sample website runs on port `8081`. **Ngrok** outputs two forwarding URLs for your server. Copy the URL with the `https://` prefix.
+
+![ngrok helps you make your application end points available over the internet](media/get-started-verifiable-credentials/ngrok.png)
+
+>[!NOTE]
+> If you are using PowerShell you may need to type `./ngrok` for the command to be recognized.
+
+Now that your local port is exposed to the internet using ngrok, the sample site automatically uses the host name generated by ngrok. Open your browser and navigate to the ngrok https forwarding URL. You should be able to successfully visit the sample site's homepage. If the page opens, your device can communicate with the sample app running on your local server. You're now ready to issue yourself a verified credential expert card.
+
+## Issue a credential
+
+1. Install Authenticator on your mobile device. Microsoft Authenticator is used to receive, store, and present your verifiable credentials to interested parties.
+
+2. Next, issue yourself a verifiable credential. Select the **Get Credential** button. The sample website then displays a QR code that you can scan using Authenticator. If you view the site from the browser on your mobile device, selecting the **Get Credential** button triggers a deep link that opens the Authenticator app and does not require scanning a QR code.
+
+ ![Get credential button](media/get-started-verifiable-credentials/credential-expert-get.png)
+
+3. Scan the website's QR code using Authenticator, or if you are accessing the website from a mobile device, select the **Get Credential** button to trigger the deep link.
+
+ ![QR Code ](media/get-started-verifiable-credentials/credential-expert-issue.png)
+
+4. Notice that the **Add** button is greyed out at this time. Choose **Sign in to your account** below the card image.
+
+ ![Sign in ](media/get-started-verifiable-credentials/add-verified-credential-expert.png)
+
+5. Before you get your credential expert card, the tenant we are using requires that you provide authentication information. If this is your first time going through the tutorial and you don't have a credential expert account, create a new user account in our B2C tenant.
+
+ ![authenticate before you proceed](media/get-started-verifiable-credentials/authenticate-credential-expert.png)
+
+6. After you are signed in, the **Add** button is no longer greyed out. Choose **Add** to accept your new VC.
+
+ ![Choose add after authenticating](media/get-started-verifiable-credentials/add-verified-credential-expert-after-auth.png)
+
+7. Congratulations! You now have a verified credential expert VC.
+
+ ![Credential expert VC added](media/get-started-verifiable-credentials/credential-expert-add-card.png)
+
+Next, it is time to verify your credential.
+
+## Validate credentials
+
+Now that you have completed the issuance portion of the tutorial and you have a verifiable credential in Authenticator, it is time to validate it in your own verifier app.
+
+1. Stop running your issuer ngrok service by pressing **Ctrl+C** in its terminal window.
+
+2. In another terminal window, open the Verifier app folder and run it similarly to how we ran the issuer app.
+
+ ```terminal
+ cd verifier
+ npm install
+ node app.js
+ ```
+
+3. Now run ngrok with the verifier port 8082.
+
+ ```terminal
+ ngrok http 8082
+ ```
+
+4. Open the ngrok https forwarding url in your browser and tap on the **VERIFY CREDENTIAL** button.
+
+ ![verify credential expert button](media/get-started-verifiable-credentials/prove-credential-expert.png)
+
+5. Open Authenticator and scan the QR code.
+
+ ![scan qr code to verify credential](media/get-started-verifiable-credentials/scan-verify.png)
+
+ > [!IMPORTANT]
+ > In Authenticator, the QR code scanner is at the top right on iOS and at the bottom right on Android. Scan the QR code.
+
+6. Choose **Allow** on the new permission request screen in Authenticator. By pressing allow, you are signing a verifiable presentation with your DID (Decentralized Identifier) to prove you in fact control this Verifiable Credential.
+
+ ![new permission request](media/get-started-verifiable-credentials/new-permission-request.png)
+
+ After a successful presentation three things should have been updated:
+
+ 1. The webpage should now display "Congratulations, *your name* is a Verified Credential Expert!".
+
+ ![congratulations, verify again](media/get-started-verifiable-credentials/congratulations.png)
+
+ 2. Your verifier app terminal should also display the same message from the logs.
+
+ ![application activity in the command prompt](medi-verified-expert.png)
+
+ 3. In Authenticator, there should be an entry for recent activity of this presentation.
+
+ ![Activity in Authenticator](media/get-started-verifiable-credentials/recent-activity.png)
+
+
+>[!NOTE]
+> While running the verifier app, ngrok may stop working and display an error that there are too many connections. We've found this can be avoided by registering your account with ngrok.
+
+## Next steps
+
+Now that you have successfully completed the quick start guide, it's time to create your own Decentralized identifier in the Azure AD verifiable credentials service and issue your own verifiable credential.
+
+>[!div class="nextstepaction"]
+>[Configure your own issuer using the verifiable credentials sample app](./enable-your-tenant-verifiable-credentials.md)
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
+
+ Title: How to create a free Azure Active Directory developer tenant
+description: This article shows you how to create a developer account
+ Last updated : 04/01/2021
+# Customer intent: As a developer I am looking to create a developer Azure Active Directory account so I can participate in the Preview with a P2 license.
+
+# How to create a free Azure Active Directory developer tenant
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+> [!NOTE]
+> While in preview, a P2 license is required.
+
+There are two easy ways to create a free Azure Active Directory tenant with a P2 trial license so you can install the Verifiable Credentials issuer service and test creating and validating verifiable credentials:
+
+- [Join](https://aka.ms/o365devprogram) the free Microsoft 365 Developer Program and get a free sandbox, tools, and other resources, including an Azure Active Directory tenant with P2 licenses and preconfigured users, groups, mailboxes, and more.
+- Create a new [tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant) and activate a [free trial](https://azure.microsoft.com/trial/get-started-active-directory/) of Azure AD Premium P1 or P2 in your new tenant.
+
+If you decide to sign up for the free Microsoft 365 developer program, you need to follow a few easy steps:
+
+1. Select the **Join Now** button on the screen.
+
+2. Sign in with a new Microsoft Account or use an existing (work) account you already have.
+
+3. On the sign-up page, select your region, enter a company name, and accept the terms and conditions of the program before you select **Next**.
+
+4. Select **Set up subscription**. Specify the region where you want to create your new tenant, create a username and domain, and enter a password. This creates a new tenant and its first administrator.
+
+5. Enter the security information needed to protect the administrator account of your new tenant. This sets up multifactor authentication (MFA) for the account.
+
+At this point, you have created a tenant with 25 E5 user licenses. The E5 licenses include Azure AD P2 licenses. Optionally, you can add sample data packs with users, groups, mail, and SharePoint to help you test in your development environment. For the Verifiable Credential Issuing service, they are not required.
+
+For your convenience, you can add your own work account as a [guest](https://docs.microsoft.com/azure/active-directory/b2b/b2b-quickstart-add-guest-users-portal.md) in the newly created tenant and use that account to administer the tenant. If you want the guest account to be able to manage the Verifiable Credentials service, you need to assign the 'Global Administrator' role to that user.
+
+## Next steps
+
+Now that you have a developer account you can try our [first tutorial](get-started-verifiable-credentials.md) to learn more about verifiable credentials.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
+
+ Title: Link your Domain to your Decentralized Identifier (DID) (preview)
+description: Learn how to link your domain to your decentralized identifier (DID)
+documentationCenter: ''
+ Last updated : 04/01/2021
+#Customer intent: Why are we doing this?
+
+# Link your Domain to your Decentralized Identifier (DID)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article:
+> [!div class="checklist"]
+> * Why do we need to link our DID to our domain?
+> * How do we link DIDs and domains?
+> * How does the domain linking process work?
+> * How does the verify/unverified domain logic work?
+
+## Prerequisites
+
+To link your DID to your domain, you need to have completed the following.
+
+- Complete the [Getting Started](get-started-verifiable-credentials.md) and subsequent [tutorial set](enable-your-tenant-verifiable-credentials.md).
+
+## Why do we need to link our DID to our domain?
+
+A DID starts out as an identifier that is not anchored to existing systems. A DID is useful because a user or organization can own it and control it. If an entity interacting with the organization does not know 'who' the DID belongs to, then the DID is not as useful.
+
+Linking a DID to a domain solves the initial trust problem by allowing any entity to cryptographically verify the relationship between a DID and a Domain.
+
+## How do we link DIDs and domains?
+
+We make a link between a domain and a DID by implementing an open standard written by the Decentralized Identity Foundation called [Well-Known DID configuration](https://identity.foundation/.well-known/resources/did-configuration/). The verifiable credentials service in Azure Active Directory (Azure AD) helps your organization make the link between the DID and domain by including the domain information that you provided in your DID and generating the well-known configuration file:
+
+1. Azure AD uses the domain information you provide during organization setup to write a Service Endpoint within the DID Document. All parties who interact with your DID can see the domain your DID proclaims to be associated with.
+
+ ```json
+ "service": [
+ {
+ "id": "#linkeddomains",
+ "type": "LinkedDomains",
+ "serviceEndpoint": {
+ "origins": [
+ "https://www.contoso.com/"
+ ]
+ }
+     }
+    ]
+ ```
+
+2. The verifiable credential service in Azure AD generates a compliant well-known configuration resource that you can host on your domain. The configuration file includes a self-issued verifiable credential of credentialType 'DomainLinkageCredential' signed with your DID that has an origin of your domain. Here is an example of the config doc that is stored at the root domain url.
+
+ ```json
+ {
+ "@context": "https://identity.foundation/.well-known/contexts/did-configuration-v0.0.jsonld",
+ "linked_dids": [
+ "jwt..."
+ ]
+ }
+ ```
+
+After you have the well-known configuration file, you need to make the file available using the domain name you specified when enabling your Azure AD tenant for verifiable credentials.
+
+* Host the well-known DID configuration file at the root of the domain.
+* Do not use redirects.
+* Use https to distribute the configuration file.
+
+>[!IMPORTANT]
+>Microsoft Authenticator does not honor redirects; the URL specified must be the final destination URL.
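The hosting rules above can be captured in a small helper that derives the only acceptable location for the configuration file. This is an illustrative sketch, not part of the service — the helper name is ours; the `/.well-known/did-configuration.json` path comes from the Well-Known DID Configuration spec:

```javascript
// Sketch: derive the well-known DID configuration URL for an origin.
// The path is fixed by the Well-Known DID Configuration spec; the
// https-only rule mirrors the hosting requirements above.
function wellKnownDidConfigUrl(origin) {
  const url = new URL(origin);
  if (url.protocol !== "https:") {
    throw new Error("the configuration file must be served over https");
  }
  // The file must sit at the root of the domain, so any path is ignored.
  return `https://${url.host}/.well-known/did-configuration.json`;
}

console.log(wellKnownDidConfigUrl("https://www.contoso.com/"));
// https://www.contoso.com/.well-known/did-configuration.json
```

Because Authenticator doesn't follow redirects, this exact URL must serve the file directly.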
+
+## User Experience
+
+When a user is going through an issuance flow or presenting a verifiable credential, they should know something about the organization and its DID. Our verifiable credential wallet, Microsoft Authenticator, validates the DID's relationship with the domain in the DID document and presents users with one of two experiences depending on the outcome.
+
+## Verified Domain
+
+Before Microsoft Authenticator displays a **Verified** icon, a few things need to be true:
+
+* The DID signing the self-issued open ID (SIOP) request must have a Service endpoint for Linked Domain.
+* The root domain does not use a redirect and uses https.
+* The domain listed in the DID Document has a resolvable well-known resource.
+* The well-known resource's verifiable credential is signed with the same DID that was used to sign the SIOP that Microsoft Authenticator used to kick start the flow.
+
+If all of the preceding conditions are true, Microsoft Authenticator displays a verified page that includes the domain that was validated.
+
+![new permission request](media/how-to-dnsbind/new-permission-request.png)
+
+## Unverified Domain
+
+If any of the above conditions are not true, Microsoft Authenticator displays a full-page warning that the domain is unverified, that the user is in the middle of a risky transaction, and that they should proceed with caution. We have chosen to take this route because any of the following may be true:
+
+* The DID is either not anchored to a domain.
+* The configuration was not set up properly.
+* The DID the user is interacting with is malicious and can't prove that it owns the domain (because it doesn't). Due to this last point, it is imperative that you link your DID to a domain the user is familiar with, to avoid triggering the warning message.
+
+![unverified domain warning in the add credential screen](media/how-to-dnsbind/add-credential-not-verified-authenticated.png)
+
+## Distribute well-known config
+
+1. Navigate to the Settings page in Verifiable Credentials and choose **Verify this domain**
+
+ ![Verify this domain in settings](media/how-to-dnsbind/settings-verify.png)
+
+2. Download the did-configuration.json file shown in the image below.
+
+ ![Download well known config](media/how-to-dnsbind/verify-download.png)
+
+3. Copy the JWT, open [jwt.ms](https://www.jwt.ms) and validate the domain is correct.
+
+4. Copy your DID and open the [ION Network Explorer](https://identity.foundation/ion/explorer) to verify the same domain is included in the DID Document.
+
+5. Host the well-known config resource at the location specified. Example: https://www.example.com/.well-known/did-configuration.json
+
+6. Test out issuing or presenting with Microsoft Authenticator to validate. Make sure the setting in Authenticator 'Warn about unsafe apps' is toggled on.
+
+>[!NOTE]
+>By default, 'Warn about unsafe apps' is turned on.
+
+Congratulations, you now have bootstrapped the web of trust with your DID!
+
+## Next steps
+
+If during onboarding you enter the wrong domain information, or you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
+
+ Title: How to Revoke a Verifiable Credential as an Issuer
+description: Learn how to revoke a Verifiable Credential that you've issued
+documentationCenter: ''
+++++ Last updated : 04/01/2021++
+#Customer intent: As an administrator, I am trying to learn the process of revoking verifiable credentials that I have issued
++
+# Revoke a previously issued verifiable credential (Preview)
+
+As part of working with verifiable credentials (VCs), you not only have to issue credentials, but sometimes you also have to revoke them. In this article, we go over the **status** property defined in the VC specification, take a closer look at the revocation process, and discuss why you may want to revoke credentials along with some data and privacy implications.
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Status property in verifiable credentials specification
+
+Before we can understand the implications of revoking a verifiable credential, it may help to know what the **status check** is and how it works today.
+
+The [W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) references the **status** property in section [4.9:](https://www.w3.org/TR/vc-data-model/#status)
+
+"This specification defines the following **credentialStatus** property for the discovery of information about the current status of a verifiable credential, such as whether it is suspended or revoked."
+
+However, the W3C specification does not define a format on how **status check** should be implemented.
+
+"Defining the data model, formats, and protocols for status schemes are out of scope for this specification. A Verifiable Credential Extension Registry [VC-EXTENSION-REGISTRY] exists that contains available status schemes for implementers who want to implement verifiable credential status checking."
+
+>[!NOTE]
+>For now, Microsoft's status check implementation is proprietary but we are actively working with the DID community to align on a standard.
+
+## How does the **status** property work?
+
+In every Microsoft-issued verifiable credential, there is an attribute called `credentialStatus`. It's populated with a status API that Microsoft manages on your behalf. Here is an example of what it looks like.
+
+```json
+ "credentialStatus": {
+ "id": "https://portableidentitycards.azure-api.net/v1.0/7952032d-d1f3-4c65-993f-1112dab7e191/portableIdentities/card/status",
+ "type": "PortableIdentityCardServiceCredentialStatus2020"
+ }
+```
+
+The open source Verifiable Credentials SDK handles calling the status API and providing the necessary data.
+
+Once the API is called with the right information, it returns either True or False. True means the verifiable credential is still active with the issuer; False signifies that the issuer has actively revoked the verifiable credential.
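A relying party's use of this check can be sketched as follows. The status API itself is proprietary, so the network call is stubbed here; apart from the `credentialStatus` field shown above, the function and endpoint names are illustrative only:

```javascript
// Sketch: a relying party reads credentialStatus from a VC payload and
// asks the status endpoint whether the issuer still vouches for it.
// callStatusApi is a stub standing in for the real (proprietary) request.
function checkStatus(vcPayload, callStatusApi) {
  const status = vcPayload.credentialStatus;
  if (!status || !status.id) {
    throw new Error("credential carries no status endpoint");
  }
  return callStatusApi(status.id) === true ? "active" : "revoked";
}

const vc = {
  credentialStatus: {
    id: "https://example.contoso.com/status", // placeholder endpoint
    type: "PortableIdentityCardServiceCredentialStatus2020"
  }
};

console.log(checkStatus(vc, () => true));  // active
console.log(checkStatus(vc, () => false)); // revoked
```

In practice the open source Verifiable Credentials SDK performs this call for you.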
+
+## Why you may want to revoke a VC
+
+Each customer will have their own unique reasons for wanting to revoke a verifiable credential, but here are some of the common themes we have heard thus far.
+
+- Student ID: the student is no longer an active student at the University.
+- Employee ID: the employee is no longer an active employee.
+- State Drivers License: the driver no longer lives in that state.
+
+## How to set up a verifiable credential with the ability to revoke
+
+By default, verifiable credential data is not stored with Microsoft. Therefore, we do not have any data to reference when revoking a specific verifiable credential ID. The issuer needs to specify a field from the verifiable credential for Microsoft to index and subsequently salt and hash.
+
+>[!NOTE]
+>Hashing is a one-way cryptographic operation that takes an input, called a `preimage`, and produces an output, called a hash, that has a fixed length. It is not computationally feasible at this time to reverse a hash operation.
+
+You can tell Microsoft which attribute of the verifiable credential you would like to index. The implication of indexing is that indexed values may be used to search your verifiable credentials for the VCs you want to revoke.
+
+**Example:** Alice is a Woodgrove employee. Alice left Woodgrove to work at Contoso. Jane, the IT admin for Woodgrove, searches for Alice's email in the Verifiable Credentials Revoke search query. In this example, Jane indexed the email field of the Woodgrove verified employee credential.
+
+See below for an example of how the Rules file is modified to include the index.
+
+```json
+{
+ "attestations": {
+ "idTokens": [
+ {
+ "mapping": {
+ "Name": { "claim": "name" },
+ "email": { "claim": "email", "indexed": true}
+ },
+ "configuration": "https://login.microsoftonline.com/tenant-id-here7/v2.0/.well-known/openid-configuration",
+ "client_id": "c0d6b785-7a08-494e-8f63-c30744c3be2f",
+ "redirect_uri": "vcclient://openid"
+ }
+ ]
+ },
+ "validityInterval": 25920000,
+ "vc": {
+ "type": ["WoodgroveEmployee"]
+ }
+}
+```
+
+>[!NOTE]
+>Only one attribute can be indexed from a Rules file.
+
+## How do I revoke a verifiable credential
+
+Once an index claim has been set and verifiable credentials have been issued to your users, it's time to see how you can revoke a verifiable credential in the VC blade.
+
+1. Navigate to the verifiable credentials blade in Azure Active Directory.
+1. Choose the verifiable credential where you've previously set up the index claim and issued a verifiable credential to a user.
+1. On the left-hand menu, choose **Revoke a credential**
+ ![Revoke a credential](media/how-to-issuer-revoke/settings-revoke.png)
+1. Search for the index attribute of the user you want to revoke.
+
+ ![Find the credential to revoke](media/how-to-issuer-revoke/revoke-search.png)
+
+ >[!NOTE]
+ >Since we are only storing a hash of the indexed claim from the verifiable credential, only an exact match will populate the search results. We take the input as searched by the IT Admin and we use the same hashing algorithm to see if we have a hash match in our database.
+
+1. Once you've found a match, select the **Revoke** option to the right of the credential you want to revoke.
+
+ ![A warning letting you know that after revocation the user still has the credential](media/how-to-issuer-revoke/warning.png)
+
+1. After successful revocation you see the status update and a green banner will appear at the top of the page.
+    ![Revocation successful banner](media/how-to-issuer-revoke/revoke-successful.png)
+
+Now whenever a relying party calls to check the status of this specific verifiable credential, Microsoft's status API, acting on behalf of the tenant, returns a 'false' response.
+
+## Next Steps
+
+Test out the functionality on your own with a test credential to get used to the flow. You can see information on how to configure your tenant to issue verifiable credentials by [reviewing our tutorials](get-started-verifiable-credentials.md).
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-opt-out.md
+
+ Title: Opt out of the verifiable credentials (Preview)
+description: Learn how to Opt Out of the Verifiable Credentials Preview
+documentationCenter: ''
+++++ Last updated : 04/01/2021++
+#Customer intent: As an administrator I am looking for information to help me disable
++
+# Opt out of the verifiable credentials (Preview)
+
+In this article:
+
+- The reason why you may need to opt out.
+- The steps required.
+- What happens to your data?
+- Effect on existing verifiable credentials.
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- Complete verifiable credentials onboarding.
+
+## Potential reasons for opting out
+
+At this time, we don't have the ability to make modifications to the domain information. As a result, if you make a mistake or decide that you want to make a change, there is no other option available besides opting out and starting again.
+
+## The steps required
+
+1. From the Azure portal search for verifiable credentials.
+2. Choose **Settings** from the left side menu.
+3. Under the section, **Reset your organization**, select **Delete all credentials, and opt out of preview**.
+
+ ![settings reset org](media/how-to-opt-out/settings-reset.png)
+
+4. Read the warning message and to continue select **Delete and opt out**.
+
+ ![settings delete and opt out](media/how-to-opt-out/delete-and-opt-out.png)
+
+You have now opted out of the Verifiable Credentials Preview. Keep reading to understand what is happening under the hood.
+
+## What happens to your data?
+
+When you complete opting out of the Azure Active Directory Verifiable Credentials service, the following actions will take place:
+
+- The DID keys in Key Vault will be [soft deleted](../../key-vault/general/soft-delete-overview.md).
+- The issuer object will be deleted from our database.
+- The tenant identifier will be deleted from our database.
+- All of the contracts objects will be deleted from our database.
+
+Once an opt-out takes place, you will not be able to recover your DID or conduct any operations on it. Opting out is a one-way operation; to use the service again, you need to opt in again, which results in a new DID being created.
+
+## Effect on existing verifiable credentials
+
+All verifiable credentials already issued will continue to exist. They will not be cryptographically invalidated as your DID will remain resolvable through ION.
+However, when relying parties call the status API, they will always receive back a failure message.
+
+## Next steps
+
+- Set up verifiable credentials on your [Azure tenant](get-started-verifiable-credentials.md)
active-directory Issue Verify Verifiable Credentials Your Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issue-verify-verifiable-credentials-your-tenant.md
+
+ Title: Tutorial - Issue and verify verifiable credentials using your tenant (preview)
+description: Change the Verifiable Credential code sample to work with your Azure tenant
+documentationCenter: ''
+++++ Last updated : 04/01/2021+++
+#Customer intent: As an administrator, I want the high-level steps that I should follow so that I can quickly start using verifiable credentials in my own Azure AD
+++
+# Tutorial: Issue and verify verifiable credentials using your tenant (preview)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Now that you have your Azure tenant set up with the Verifiable Credential service, we walk through the steps necessary to enable your Azure Active Directory (Azure AD) to issue verifiable credentials using the sample app.
+
+In this article you:
+
+> [!div class="checklist"]
+> * Register the sample app in your Azure AD
+> * Create a Rules and Display File
+> * Upload Rules and Display files
+> * Set up your Verifiable Credentials Issuer service to use Azure Key Vault
+> * Update Sample Code with your tenant's information.
+
+Our sample code requires users to authenticate to an identity provider, specifically Azure AD B2C, before the Verified Credential Expert VC can be issued. Not all verifiable credential issuers require authentication before issuing credentials.
+
+Authenticating ID Tokens allows users to prove who they are before receiving their credential. When users successfully log in, the identity provider returns a security token containing claims about the user. The issuer service then transforms these security tokens and their claims into a verifiable credential. The verifiable credential is signed with the issuer's DID.
+
+Any identity provider that supports the OpenID Connect protocol is supported. Examples of supported identity providers include [Azure Active Directory](../fundamentals/active-directory-whatis.md) and [Azure AD B2C](../../active-directory-b2c/overview.md). In this tutorial, we use Azure AD.
+
+## Prerequisites
+
+This tutorial assumes you've already completed the steps in the [previous tutorial](enable-your-tenant-verifiable-credentials.md) and have access to the environment you used.
+
+## Register an App to enable DID Wallets to sign in users
+
+To issue a verifiable credential, you need to register an app so Authenticator, or any other verifiable credential wallet, is allowed to sign in users.
+
+Register an application called 'VC Wallet App' in Azure AD and obtain a client ID.
+
+1. Follow the instructions for registering an application with [Azure AD](../develop/quickstart-register-app.md). When registering, use the values below.
+
+ - Name: "VC Wallet App"
+ - Supported account types: Accounts in this organizational directory only
+ - Redirect URI: vcclient://openid/
+
+ ![register an application](media/issue-verify-verifable-credentials-your-tenant/register-application.png)
+
+2. After you register the application, write down the Application (client) ID. You need this value later.
+
+ ![application client ID](media/issue-verify-verifable-credentials-your-tenant/client-id.png)
+
+3. Select the **Endpoints** button and copy the OpenID Connect metadata document URI. You need this information for the next section.
+
+ ![issuer endpoints](media/issue-verify-verifable-credentials-your-tenant/application-endpoints.png)
+
+## Set up your node app with access to Key Vault
+
+To authenticate a user's credential issuance request, the issuer website uses your cryptographic keys in Azure Key Vault. To access Azure Key Vault, your website needs a client ID and client secret that can be used to authenticate to Azure Key Vault.
+
+1. While viewing the VC wallet app overview page select **Certificates & secrets**.
+ ![certificates and secrets](media/issue-verify-verifable-credentials-your-tenant/vc-wallet-app-certs-secrets.png)
+1. In the **Client secrets** section choose **New client secret**
+ 1. Add a description like "Node VC client secret"
+ 1. Expires: in one year.
+ ![Application secret with a one year expiration](media/issue-verify-verifable-credentials-your-tenant/add-client-secret.png)
+1. Copy down the SECRET. You need this information to update your sample node app.
+
+>[!WARNING]
+> You have one chance to copy down the secret; it is one-way hashed after this. Do not copy the ID.
+
+After creating your application and client secret in Azure AD, you need to grant the application the necessary permissions to perform operations on your Key Vault. Making these permission changes is required to enable the website to access and use the private keys stored there.
+
+1. Go to Key Vault.
+2. Select the key vault we are using for these tutorials.
+3. Choose **Access Policies** in the left nav.
+4. Choose **+Add Access Policy**.
+5. In the **Key permissions** section choose **Get**, and **Sign**.
+6. Select **Principal** and use the application ID to search for the application we registered earlier. Select it.
+7. Select **Add**.
+8. Choose **SAVE**.
+
+For more information about Key Vault permissions and access control read the [key vault RBAC guide](../../key-vault/general/rbac-guide.md)
+
+![assign key vault permissions](media/issue-verify-verifable-credentials-your-tenant/key-vault-permissions.png)
+
+## Make changes to match your environment
+
+So far, we have been working with our sample app. The app uses [Azure Active Directory B2C](../../active-directory-b2c/overview.md), and we are now switching to Azure AD, so we need to make some changes, not just to match your environment but also to support additional claims that were not used before.
+
+1. Copy the rules file below and save it to **modified-expertRules.json**.
+
+ > [!NOTE]
+ > **"scope": "openid profile"** is included in this Rules file and was not included in the Sample App's Rules file. The next section will explain how to enable the optional claims in your Azure Active Directory tenant.
+
+ ```json
+ {
+ "attestations": {
+ "idTokens": [
+ {
+ "mapping": {
+ "firstName": { "claim": "given_name" },
+ "lastName": { "claim": "family_name" }
+ },
+ "configuration": "https://dIdPlayground.b2clogin.com/dIdPlayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
+ "client_id": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90",
+ "redirect_uri": "vcclient://openid/",
+ "scope": "openid profile"
+ }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": ["VerifiedCredentialExpert"]
+ }
+ }
+ ```
+
+2. Open the file and replace the **client_id** and **configuration** values with the two values we copied in the previous section.
+
+ ![highlighting the two values that need to be modified as part of this step](media/issue-verify-verifable-credentials-your-tenant/rules-file.png)
+
+ The value **Configuration** is the OpenID Connect metadata document URI.
+
+ Since the Sample Code is using Azure Active Directory B2C and we are using Azure Active Directory, we need to add optional claims via scopes in order for these claims to be included in the ID Token to be written into the Verifiable Credential.
+
+3. Back in the Azure portal, open Azure Active Directory.
+4. Choose **App registrations**.
+5. Open the VC Wallet App we created earlier.
+6. Choose **Token configuration**.
+7. Choose **+ Add optional claim**
+
+ ![under token configuration add an optional claim](media/issue-verify-verifable-credentials-your-tenant/token-configuration.png)
+
+8. From **Token type** choose **ID** and from the list of available claims choose **given_name** and **family_name**
+
+ ![add optional claims](media/issue-verify-verifable-credentials-your-tenant/add-optional-claim.png)
+
+9. Press **Add**.
+10. If you get a permissions warning as shown below, check the box and select **Add**.
+
+ ![add permissions for optional claims](media/issue-verify-verifable-credentials-your-tenant/add-optional-claim-permissions.png)
+
+Now when a user is presented with the sign-in prompt to get issued your verifiable credential, the VC Wallet App knows to include the specific claims, via the scope parameter, to be written into the verifiable credential.
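The `mapping` section of the rules file above pairs VC attribute names with ID-token claim names. The issuer service's internals aren't public, so this is only a sketch of that transformation, with a function name of our own choosing:

```javascript
// Sketch: apply a rules-file "mapping" to the claims of a validated
// ID token, producing the claim set written into the verifiable
// credential. Field names mirror the rules file shown earlier.
function applyMapping(mapping, idTokenClaims) {
  const vcClaims = {};
  for (const [vcAttribute, rule] of Object.entries(mapping)) {
    if (!(rule.claim in idTokenClaims)) {
      throw new Error(`ID token is missing required claim: ${rule.claim}`);
    }
    vcClaims[vcAttribute] = idTokenClaims[rule.claim];
  }
  return vcClaims;
}

const mapping = {
  firstName: { claim: "given_name" },
  lastName: { claim: "family_name" }
};

console.log(applyMapping(mapping, { given_name: "Megan", family_name: "Bowen" }));
// { firstName: 'Megan', lastName: 'Bowen' }
```

This also illustrates why the optional claims step matters: without `given_name` and `family_name` in the ID token, there is nothing for the mapping to write into the credential.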
+
+## Create new VC with this rules file and the old display file
+
+1. Upload the new rules file to our container
+1. From the verifiable credentials page create a new credential called **modifiedCredentialExpert** using the old display file and the new rules file (**modified-expertRules.json**).
+1. After the credential creation process completes, from the **Overview** page copy the **Issue Credential URL** and save it; we need it in the next section.
+
+## Before we continue
+
+We need to put a few values together before we can make the necessary code changes. We use these values in the next section to make the sample code use your own keys stored in your vault. So far we should have the following values ready.
+
+- **Contract URI** from the credential that we just created (Issue Credential URL)
+- **Application Client ID** We got this when we registered the Node app.
+- **Client secret** We created this earlier when we granted your app access to key vault.
+
+There are a few other values we need to get before we can make the changes one time in our sample app. Let's get those now!
+
+### Verifiable Credentials Settings
+
+1. Navigate to the Verifiable Credentials page and choose **Settings**.
+1. Copy down the following values:
+
+ - Tenant identifier
+ - Issuer identifier (your DID)
+ - Key vault (uri)
+
+1. Under the Signing key identifier, there is a URI, but we only need a portion of it. Copy the part that starts with **issuerSigningKeyION**, as highlighted by the red rectangle in the image below.
+
+ ![sign in key identifier](media/issue-verify-verifable-credentials-your-tenant/issuer-signing-key-ion.png)
+
+### DID Document
+
+1. Open the [DIF ION Network Explorer](https://identity.foundation/ion/explorer/)
+
+2. Paste your DID in the search bar.
+
+3. From the formatted response, find the section called **verificationMethod**.
+4. Under **verificationMethod**, copy the `id` and label it as the kvSigningKeyId.
+
+    ```json
+ "verificationMethod": [
+ {
+ "id": "#sig_25e48331",
+ ```
+
+Now we have everything we need to make the changes in our sample code.
+
+- **Issuer:** app.js update const credential with your new contract uri
+- **Verifier:** app.js update the issuerDid with your Issuer Identifier
+- **Issuer and Verifier** update the didconfig.json with the following values:
+
+```json
+{
+ "azTenantId": "Your tenant ID",
+ "azClientId": "Your client ID",
+ "azClientSecret": "Your client secret",
+ "kvVaultUri": "your keyvault uri",
+ "kvSigningKeyId": "The verificationMethod ID from your DID Document",
+    "kvRemoteSigningKeyId": "The snippet of the issuerSigningKeyION we copied",
+ "did": "Your DID"
+}
+```
+
+>[!IMPORTANT]
+>This is a demo application; normally, you should never give your application the secret directly.
+
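Before running the apps, a quick sanity check that `didconfig.json` contains every field the samples read can save a debugging round. The required-key list comes from the snippet above; the validator itself is our own addition, not part of the sample:

```javascript
// Sketch: verify didconfig.json has all the fields the sample apps
// expect before starting them. The key names mirror the snippet above.
function validateDidConfig(json) {
  const required = [
    "azTenantId", "azClientId", "azClientSecret",
    "kvVaultUri", "kvSigningKeyId", "kvRemoteSigningKeyId", "did"
  ];
  const config = JSON.parse(json);
  const missing = required.filter((k) => !config[k] || config[k].trim() === "");
  if (missing.length > 0) {
    throw new Error(`didconfig.json is missing: ${missing.join(", ")}`);
  }
  return config;
}

// Placeholder values only; real values come from the steps above.
const config = validateDidConfig(JSON.stringify({
  azTenantId: "tenant-id", azClientId: "client-id", azClientSecret: "secret",
  kvVaultUri: "https://myvault.vault.azure.net/",
  kvSigningKeyId: "#sig_25e48331", kvRemoteSigningKeyId: "issuerSigningKeyION",
  did: "did:ion:example"
}));
console.log(Object.keys(config).length); // 7
```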
+Now you have everything in place to issue and verify your own Verifiable Credential from your Azure Active Directory tenant with your own keys.
+
+## Issue and Verify the VC
+
+Follow the same steps we followed in the previous tutorial to issue the verifiable credential and validate it with your app. Once you successfully complete the verification process, you are ready to continue learning about verifiable credentials.
+
+1. Open a command prompt and open the issuer folder.
+2. Run the updated node app.
+
+ ```terminal
+ node app.js
+ ```
+
+3. Using a different command prompt, run ngrok to set up a URL on port 8081.
+
+ ```terminal
+ ngrok http 8081
+ ```
+
+ >[!IMPORTANT]
+ > You may also notice a warning that this app or website may be risky. The message is expected at this time because we have not yet linked your DID to your domain. Follow the [DNS binding](how-to-dnsbind.md) instructions to configure this.
+
+
+4. Open the HTTPS URL generated by ngrok.
+
+ ![NGROK forwarding endpoints](media/enable-your-tenant-verifiable-credentials/ngrok-url-screen.png)
+
+5. Choose **GET CREDENTIAL**
+6. In Authenticator scan the QR code.
+7. At **This app or website may be risky** warning message choose **Advanced**.
+
+ ![Initial warning](media/enable-your-tenant-verifiable-credentials/site-warning.png)
+
+8. At the risky website warning choose **Proceed anyways (unsafe)**
+
+ ![Second warning about the issuer](media/enable-your-tenant-verifiable-credentials/site-warning-proceed.png)
+
+9. At the **Add a credential** screen notice a few things:
+ 1. At the top of the screen you can see a red **Not verified** message
+ 1. The credential is customized based on the changes we made to the display file.
+ 1. The **Sign in to your account** option is pointing to your Azure AD sign in page.
+
+ ![add credential screen with warning](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified.png)
+
+10. Choose **Sign in to your account** and authenticate using a User in your Azure AD tenant.
+11. After successfully authenticating the **Add** button is no longer greyed out. Choose **Add**.
+
+ ![add credential screen after authenticating](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified-authenticated.png)
+
+We have now issued a verifiable credential using our tenant to generate the VC, while still using our B2C tenant for authentication.
+
+ ![vc issued by your azure AD and authenticated by our Azure B2C instance](media/enable-your-tenant-verifiable-credentials/my-vc-b2c.png)
+
+## Test verifying the VC using the sample app
+
+Now that we've issued the verifiable credential from our own tenant with claims from your Azure AD, let's verify it using our sample app.
+
+1. Stop the issuer ngrok service by pressing **Ctrl+C** in its terminal window.
+
+2. Now run ngrok with the verifier port 8082.
+
+ ```cmd
+ ngrok http 8082
+ ```
+
+3. In another terminal window, navigate to the verifier app and run it similarly to how we ran the issuer app.
+
+ ```cmd
+ cd ..
+ cd verifier
+ node app.js
+ ```
+
+4. Open the ngrok URL in your browser and scan the QR code using Authenticator on your mobile device.
+5. On your mobile device, choose **Allow** at the **New permission request** screen.
+
+ >[!IMPORTANT]
+ > Since the Sample App is also using your DID to sign the presentation request, you will notice a warning that this app or website may be risky. The message is expected at this time because we have not yet linked your DID to your domain. Follow the [DNS binding](how-to-dnsbind.md) instructions to configure this.
+
+ ![new permission request](media/enable-your-tenant-verifiable-credentials/new-permission-request.png)
+
+6. You have now successfully verified your credential, and the website should display the first and last name from your Azure AD user account.
+
+You have now completed the tutorial and are officially a Verified Credential Expert! Your sample app is using your DID for both issuing and verifying, while writing claims into a verifiable credential from your Azure AD.
+
+## Next steps
+
+- Learn how to create [custom credentials](credential-design.md)
+- Issuer service communication [examples](issuer-openid.md)
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issuer-openid.md
+
+ Title: Issuer service communication examples (preview)
+description: Details of communication between identity provider and issuer service
++++++ Last updated : 04/01/2021+
+# Customer intent: As a developer I am looking for information on how to enable my users to control their own information
+++
+# Issuer service communication examples (Preview)
+
+The verifiable credential issuer service can issue verifiable credentials by retrieving claims from an ID token generated by your organization's OpenID compliant identity provider. This article instructs you on how to set up your identity provider so Authenticator can communicate with it and retrieve the correct ID Token to pass to the issuing service.
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+To issue a verifiable credential, Authenticator downloads the contract, which instructs it to gather input from the user and send that information to the issuing service. If you need to use an ID token, you have to set up your identity provider to allow Authenticator to sign in a user using the OpenID Connect protocol. The claims in the resulting ID token are used to populate the contents of your verifiable credential. Authenticator authenticates the user using the OpenID Connect authorization code flow. Your OpenID provider must support the following OpenID Connect features:
+
+| Feature | Description |
+| - | -- |
+| Grant type | Must support the authorization code grant type. |
+| Token format | Must produce unencrypted compact JWTs. |
+| Signature algorithm | Must produce JWTs signed with the RS256 (RSA with SHA-256) algorithm. |
+| Configuration document | Must support OpenID Connect configuration document and `jwks_uri`. |
+| Client registration | Must support public client registration using a `redirect_uri` value of `vcclient://openid/`. |
+| PKCE | Recommended for security reasons, but not required. |
+
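The table above recommends PKCE. As a minimal illustrative sketch (not part of the issuer service itself), a client derives the S256 `code_challenge` from its `code_verifier` as defined in RFC 7636; the `s256_challenge` and `new_verifier` helper names are assumptions for this example:

```python
import base64
import hashlib
import secrets

def s256_challenge(verifier: str) -> str:
    """Derive the S256 code_challenge from a code_verifier (RFC 7636)."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # base64url-encode without padding, per the PKCE spec
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def new_verifier() -> str:
    """Generate a high-entropy code_verifier (43 characters from 32 random bytes)."""
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

# Exercise the helper with the RFC 7636 Appendix B test vector.
challenge = s256_challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk")
```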
+Examples of the HTTP requests sent to your identity provider are included below. Your identity provider must accept and respond to these requests in accordance with the OpenID Connect authentication standard.
+
+## Client registration
+
+To receive a verifiable credential, your users need to sign in to your IDP from the Microsoft Authenticator app.
+
+To enable this exchange, register an application with your identity provider. If you are using Azure AD, you can find the instructions [here](../develop/quickstart-register-app.md). Use the following values when registering.
+
+| Setting | Value |
+| - | -- |
+| Application name | `<Issuer Name> Verifiable Credential Service` |
+| Redirect URI | `vcclient://openid/` |
++
+After you register an application with your identity provider, record its client ID; you will use it in the section that follows. Also record the URL of the well-known configuration endpoint for your OIDC-compatible identity provider. The issuing service uses this endpoint to download the public keys needed to validate the ID token after it's sent by Authenticator.
+
+The configured redirect URI is used by Authenticator so it knows when the sign-in is completed and it can retrieve the ID token.
+
+## Authorization request
+
+The authorization request sent to your identity provider uses the following format.
+
+```HTTP
+GET /authorize?client_id=<client-id>&redirect_uri=vcclient%3A%2F%2Fopenid%2F&response_mode=query&response_type=code&scope=openid&state=12345&nonce=12345 HTTP/1.1
+Host: www.contoso.com
+Connection: Keep-Alive
+```
+
+| Parameter | Value |
+| - | -- |
+| `client_id` | The client ID obtained during the application registration process. |
+| `redirect_uri` | Must use `vcclient://openid/`. |
+| `response_mode` | Must support `query`. |
+| `response_type` | Must support `code`. |
+| `scope` | Must support `openid`. |
+| `state` | Must be returned to the client according to the OpenID Connect standard. |
+| `nonce` | Must be returned as a claim in the ID token according to the OpenID Connect standard. |
+
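As an illustration of how the parameters in this table fit together, the following Python sketch assembles an authorization request URL; the `build_authorize_url` helper and the authorize endpoint URL are assumptions for the example, not part of the service:

```python
from urllib.parse import urlencode

def build_authorize_url(authorize_endpoint: str, client_id: str, state: str, nonce: str) -> str:
    """Assemble the authorization request URL from the parameters described above."""
    params = {
        "client_id": client_id,
        "redirect_uri": "vcclient://openid/",
        "response_mode": "query",
        "response_type": "code",
        "scope": "openid",
        "state": state,
        "nonce": nonce,
    }
    # urlencode percent-encodes each value, including the custom-scheme redirect URI
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_authorize_url("https://www.contoso.com/authorize", "<client-id>", "12345", "12345")
```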
+When it receives an authorization request, your identity provider should authenticate the user and take any steps necessary to complete sign-in, such as multi-factor authentication.
+
+You may customize the sign-in process to meet your needs. You could ask users to provide additional information, accept terms of service, pay for their credential, and more. Once all steps complete, respond to the authorization request by redirecting to the redirect URI as shown below.
+
+```HTTP
+vcclient://openid/?code=nbafhjbh1ub1yhbj1h4jr1&state=12345
+```
+
+| Parameter | Value |
+| - | -- |
+| `code` | The authorization code returned by your identity provider. |
+| `state` | Must be returned to the client according to the OpenID Connect standard. |
+
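A minimal Python sketch of how a client could extract `code` and `state` from that redirect; the `parse_callback` helper name is illustrative only:

```python
from urllib.parse import urlparse, parse_qs

def parse_callback(redirect: str) -> dict:
    """Extract the query parameters from the redirect sent back to Authenticator."""
    query = urlparse(redirect).query
    # parse_qs returns lists; each parameter appears once here, so take the first value
    return {name: values[0] for name, values in parse_qs(query).items()}

result = parse_callback("vcclient://openid/?code=nbafhjbh1ub1yhbj1h4jr1&state=12345")
```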
+## Token request
+
+The token request sent to your identity provider will have the following form.
+
+```HTTP
+POST /token HTTP/1.1
+Host: www.contoso.com
+Content-Type: application/x-www-form-urlencoded
+Content-Length: 291
+
+client_id=<client-id>&redirect_uri=vcclient%3A%2F%2Fopenid%2F&grant_type=authorization_code&code=nbafhjbh1ub1yhbj1h4jr1&scope=openid
+```
+
+| Parameter | Value |
+| - | -- |
+| `client_id` | The client ID obtained during the application registration process. |
+| `redirect_uri` | Must use `vcclient://openid/`. |
+| `scope` | Must support `openid`. |
+| `grant_type` | Must support `authorization_code`. |
+| `code` | The authorization code returned by your identity provider. |
+
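The form-encoded body shown above can be assembled as in this Python sketch; the `token_request_body` helper is an assumption for illustration, not part of the service:

```python
from urllib.parse import urlencode

def token_request_body(client_id: str, code: str) -> str:
    """Build the application/x-www-form-urlencoded body for the token request."""
    return urlencode({
        "client_id": client_id,
        "redirect_uri": "vcclient://openid/",
        "grant_type": "authorization_code",
        "code": code,
        "scope": "openid",
    })

body = token_request_body("<client-id>", "nbafhjbh1ub1yhbj1h4jr1")
```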
+Upon receiving the token request, your identity provider should respond with an ID token.
+
+```HTTP
+HTTP/1.1 200 OK
+Content-Type: application/json
+Cache-Control: no-store
+Pragma: no-cache
+
+{
+"id_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjFlOWdkazcifQ.ewogImlzc
+ yI6ICJodHRwOi8vc2VydmVyLmV4YW1wbGUuY29tIiwKICJzdWIiOiAiMjQ4Mjg5
+ NzYxMDAxIiwKICJhdWQiOiAiczZCaGRSa3F0MyIsCiAibm9uY2UiOiAibi0wUzZ
+ fV3pBMk1qIiwKICJleHAiOiAxMzExMjgxOTcwLAogImlhdCI6IDEzMTEyODA5Nz
+ AKfQ.ggW8hZ1EuVLuxNuuIJKX_V8a_OMXzR0EHR9R6jgdqrOOF4daGU96Sr_P6q
+ Jp6IcmD3HP99Obi1PRs-cwh3LO-p146waJ8IhehcwL7F09JdijmBqkvPeB2T9CJ
+ NqeGpe-gccMg4vfKjkM8FcGvnzZUN4_KSP0aAp1tOJ1zZwgjxqGByKHiOtX7Tpd
+ QyHE5lcMiKPXfEIQILVq0pc_E2DzL7emopWoaoZTF_m0_N0YzFC6g6EJbOEoRoS
+ K5hoDalrcvRYLSrQAZZKflyuVCyixEoV9GfNQC3_osjzw2PAithfubEEBLuVVk4
+ XUVrWOLrLl0nx7RkKU8NXNHq-rvKMzqg"
+}
+```
+
+The ID token must use the JWT compact serialization format, and must not be encrypted. The ID token should contain the following claims.
+
+| Claim | Value |
+| - | -- |
+| `kid` | The key identifier of the key used to sign the ID token, corresponding to an entry in the OpenID provider's `jwks_uri`. |
+| `aud` | The client ID obtained during the application registration process. |
+| `iss` | Must be the `issuer` value in your OpenID Connect configuration document. |
+| `exp` | Must contain the expiry time of the ID token. |
+| `iat` | Must contain the time at which the ID token was issued. |
+| `nonce` | The value included in the authorization request. |
+| Additional claims | The ID token should contain any additional claims whose values will be included in the Verifiable Credential that will be issued. This section is where you should include any attributes about the user, such as their name. |
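As an aside, the compact JWT serialization required above can be decoded with a few lines of Python. This sketch decodes only the payload and deliberately skips signature verification, which a real verifier must perform against the provider's `jwks_uri` (along with `iss`, `aud`, `exp`, and `nonce` checks); the sample token is fabricated for the example:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload of an unencrypted compact-serialized JWT (no signature check)."""
    _header, payload, _signature = token.split(".")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a throwaway token to exercise the decoder; the header decodes to {"alg":"RS256"}.
claims_in = {"iss": "https://www.contoso.com", "aud": "<client-id>", "nonce": "12345"}
payload_b64 = base64.urlsafe_b64encode(json.dumps(claims_in).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{payload_b64}.signature-not-checked"
claims = decode_jwt_payload(token)
```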
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
+
+ Title: Frequently asked questions - Azure Verifiable Credentials (preview)
+description: Find answers to common questions about Verifiable Credentials
+ Last updated : 04/01/2021
+# Customer intent: As a developer I am looking for information on how to enable my users to control their own information
++
+# Frequently Asked Questions (FAQ)
+
+This page contains commonly asked questions about Verifiable Credentials and Decentralized Identity. Questions are organized into the following sections.
+
+- [Vocabulary and basics](#the-basics)
+- [Conceptual questions about decentralized identity](#conceptual-questions)
+- [Questions about using Verifiable Credentials preview](#using-the-preview)
+
+> [!IMPORTANT]
+> Azure Active Directory Verifiable Credentials is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## The basics
+
+### What is a DID?
+
+Decentralized Identifiers (DIDs) are identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
+
+### Why do we need a DID?
+
+Digital trust fundamentally requires participants to own and control their identities, and identity begins at the identifier.
+In an age of daily, large-scale system breaches and attacks on centralized identifier honeypots, decentralizing identity is becoming a critical security need for consumers and businesses.
+Individuals owning and controlling their identities are able to exchange verifiable data and proofs. A distributed credential environment allows for the automation of many business processes that are currently manual and labor intensive.
+
+### What is a Verifiable Credential?
+
+Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model//) explains this in further detail.
++
+## Conceptual questions
+
+### What happens when a user loses their phone? Can they recover their identity?
+
+There are multiple ways of offering a recovery mechanism to users, each with their own tradeoffs. We're currently evaluating options and designing approaches to recovery that offer convenience and security while respecting a user's privacy and self-sovereignty.
+
+### Why does validation of a verifiable credential require a query to a credential status endpoint? Is this not a privacy concern?
+
+The `credentialStatus` property in a verifiable credential requires the verifier to query the credential's issuer during validation. This is a convenient and efficient way for the issuer to be able to revoke a credential that has been previously issued. This also means that the issuer can track which verifiers have accessed a user's credentials. In some use cases this is desirable, but in many, this would be considered a serious violation of user privacy. We are exploring alternative means of credential revocation that will allow an issuer to revoke a verifiable credential without being able to trace a credential's usage.
+
+<!-- Additionally, an issuer can issue a Verifiable Credential without a 'credentialStatus' endpoint. Please follow the instructions in [How to customize your verifiable credentials article.](credential-design.md) -->
+
+### How can a user trust a request from an issuer or verifier? How do they know a DID is the real DID for an organization?
+
+We have implemented [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a well-known existing system: domain names. Each DID created by using Azure Active Directory Verifiable Credentials can include a root domain name that is encoded in the DID document. To learn more, follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md).
+
+### Does a user need to periodically rotate their DID keys?
+
+The DID methods used in verifiable credential exchanges support the ability for a user to update the keys associated with their DID. Currently, Microsoft Authenticator does not change the user's keys after a DID has been created.
+
+### Why does the Verifiable Credential preview use ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
+
+ION is a permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special cryptoasset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
+
+## Using the preview
+
+### Why must I use NodeJS for the Verifiable Credentials preview? Any plans for other programming languages?
+
+We chose NodeJS because it is a popular platform for application developers. We will be releasing a REST API that will allow developers to issue and verify credentials.
+
+### Is any of the code used in the preview open source?
+
+Yes! The following repositories are the open-sourced components of our services.
+
+1. [SideTree, on GitHub](https://github.com/decentralized-identity/sidetree)
+2. The [VC SDK for Node, on GitHub](https://github.com/microsoft/VerifiableCredentials-Verification-SDK-Typescript)
+3. An [Android SDK for building decentralized identity wallets, on GitHub](https://github.com/microsoft/VerifiableCredential-SDK-Android)
+4. An [iOS SDK for building decentralized identity wallets, on GitHub](https://github.com/microsoft/VerifiableCredential-SDK-iOS)
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-cluster-security.md
While AppArmor works for any Linux application, [seccomp (*sec*ure *comp*uting)]
To see seccomp in action, create a filter that prevents changing permissions on a file. [SSH][aks-ssh] to an AKS node, then create a seccomp filter named */var/lib/kubelet/seccomp/prevent-chmod* and paste the following content:
-```
+```json
{
   "defaultAction": "SCMP_ACT_ALLOW",
   "syscalls": [
      {
         "name": "chmod",
         "action": "SCMP_ACT_ERRNO"
+ },
+ {
+ "name": "fchmodat",
+ "action": "SCMP_ACT_ERRNO"
+ },
+ {
+ "name": "chmodat",
+ "action": "SCMP_ACT_ERRNO"
+ }
+ ]
+}
+```
+
+In version 1.19 and later, you need to configure the following:
+
+```json
+{
+ "defaultAction": "SCMP_ACT_ALLOW",
+ "syscalls": [
+ {
+ "names": ["chmod","fchmodat","chmodat"],
+ "action": "SCMP_ACT_ERRNO"
      }
   ]
}
spec:
  restartPolicy: Never
```
+In version 1.19 and later, you need to configure the following:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: chmod-prevented
+spec:
+ securityContext:
+ seccompProfile:
+ type: Localhost
+ localhostProfile: prevent-chmod
+ containers:
+ - name: chmod
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command:
+ - "chmod"
+ args:
+ - "777"
+ - /etc/hostname
+ restartPolicy: Never
+```
+ Deploy the sample pod using the [kubectl apply][kubectl-apply] command: ```console
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 3/5/2021 Last updated : 3/31/2021
The following parameters can be leveraged to configure Private DNS Zone.
### Prerequisites
-* The AKS Preview version 0.5.3 or later
+* The AKS Preview version 0.5.7 or later
* The api version 2020-11-01 or later ### Create a private AKS cluster with Private DNS Zone (Preview)
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId> --fqdn-subdomain <subdomain-name> ```+ ## Options for connecting to the private cluster The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
The API server endpoint has no public IP address. To manage the API server, you'
* Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster. * Use a VM in a separate network and set up [Virtual network peering][virtual-network-peering]. See the section below for more information on this option. * Use an [Express Route or VPN][express-route-or-VPN] connection.
+* Use the [AKS Run Command feature](#aks-run-command-preview).
Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
+### AKS Run Command (Preview)
+
+Today, to access a private cluster you must connect from a machine on the cluster virtual network or on a peered network. This usually requires your machine to be connected via VPN or Express Route to the cluster virtual network, or a jumpbox to be created in the cluster virtual network. AKS Run Command allows you to remotely invoke commands in an AKS cluster through the AKS API. For example, you can execute just-in-time commands from a remote laptop against a private cluster even when the client machine is not on the cluster's private network, while still retaining and enforcing the same RBAC controls and private API server.
+
+### Register the `RunCommandPreview` preview feature
+
+To use the new Run Command API, you must enable the `RunCommandPreview` feature flag on your subscription.
+
+Register the `RunCommandPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "RunCommandPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/RunCommandPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Use AKS Run Command
+
+Run a simple command:
+
+```azurecli-interactive
+az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl get pods -n kube-system"
+```
+
+Deploy a manifest by attaching the specific file:
+
+```azurecli-interactive
+az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl apply -f deployment.yaml -n default" -f deployment.yaml
+```
+
+Deploy a manifest by attaching a whole folder:
+
+```azurecli-interactive
+az aks command invoke -g <resourceGroup> -n <clusterName> -c "kubectl apply -f deployment.yaml -n default" -f .
+```
+
+Perform a Helm install and pass the specific values manifest:
+
+```azurecli-interactive
+az aks command invoke -g <resourceGroup> -n <clusterName> -c "helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update && helm install my-release -f values.yaml bitnami/nginx" -f values.yaml
+```
+ ## Virtual network peering As mentioned, virtual network peering is one way to access your private cluster. To use virtual network peering, you need to set up a link between virtual network and the private DNS zone.
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/azure-diagnostic-monitoring.md
Title: Azure diagnostic monitoring - Azure Attestation
+ Title: Azure diagnostic monitoring for Azure Attestation
description: Azure diagnostic monitoring for Azure Attestation
Last updated 08/31/2020
-# Setting up diagnostics with Trusted Platform Module (TPM) endpoint of Azure Attestation
+# Set up diagnostics with a Trusted Platform Module (TPM) endpoint of Azure Attestation
-[Platform logs](../azure-monitor/essentials/platform-logs-overview.md) in Azure, including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. [Platform metrics](../azure-monitor/essentials/data-platform-metrics.md) are collected by default and typically stored in the Azure Monitor metrics database. This article provides details on creating and configuring diagnostic settings to send platform metrics and platform logs to different destinations.
+This article helps you create and configure diagnostic settings to send platform metrics and platform logs to different destinations. [Platform logs](/azure/azure-monitor/platform/platform-logs-overview) in Azure, including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform that they depend on. [Platform metrics](/azure/azure-monitor/platform/data-platform-metrics) are collected by default and are stored in the Azure Monitor Metrics database.
-TPM endpoint service is enabled with diagnostic setting and can be used to monitor activity. To setup [Azure Monitoring](../azure-monitor/overview.md) for the TPM service endpoint using PowerShell kindly follow the below steps.
+Before you begin, make sure you've [set up Azure Attestation with Azure PowerShell](quickstart-powershell.md).
-Setup Azure Attestation service.
-
-[Set up Azure Attestation with Azure PowerShell](./quickstart-powershell.md)
+The Trusted Platform Module (TPM) endpoint service is enabled in the diagnostic settings and can be used to monitor activity. Set up [Azure Monitoring](/azure/azure-monitor/overview) for the TPM service endpoint by using the following code.
```powershell
Setup Azure Attestation service.
Set-AzDiagnosticSetting -ResourceId $attestationProvider.Id -StorageAccountId $storageAccount.Id -Enabled $true ```
-The activity logs can be found in the Containers section of the storage account. Detailed info can be found at [Collect resource logs from an Azure Resource and analyze with Azure Monitor - Azure Monitor](../azure-monitor/essentials/tutorial-resource-logs.md)
+
+Activity logs are in the **Containers** section of the storage account. For more information, see [Collect and analyze resource logs from an Azure resource](/azure/azure-monitor/learn/tutorial-resource-logs).
automation Automation Child Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-child-runbooks.md
When your runbook calls a graphical or PowerShell Workflow child runbook using i
The following example starts a test child runbook that accepts a complex object, an integer value, and a boolean value. The output of the child runbook is assigned to a variable. In this case, the child runbook is a PowerShell Workflow runbook. ```azurepowershell-interactive
-$vm = Get-AzVM ΓÇôResourceGroupName "LabRG" ΓÇôName "MyVM"
-$output = PSWF-ChildRunbook ΓÇôVM $vm ΓÇôRepeatCount 2 ΓÇôRestart $true
+$vm = Get-AzVM -ResourceGroupName "LabRG" -Name "MyVM"
+$output = PSWF-ChildRunbook -VM $vm -RepeatCount 2 -Restart $true
``` Here is the same example using a PowerShell runbook as the child. ```azurepowershell-interactive
-$vm = Get-AzVM ΓÇôResourceGroupName "LabRG" ΓÇôName "MyVM"
-$output = .\PS-ChildRunbook.ps1 ΓÇôVM $vm ΓÇôRepeatCount 2 ΓÇôRestart $true
+$vm = Get-AzVM -ResourceGroupName "LabRG" -Name "MyVM"
+$output = .\PS-ChildRunbook.ps1 -VM $vm -RepeatCount 2 -Restart $true
``` ## Start a child runbook using a cmdlet
Parameters for a child runbook started with a cmdlet are provided as a hashtable
The subscription context might be lost when starting child runbooks as separate jobs. For the child runbook to execute Az module cmdlets against a specific Azure subscription, the child must authenticate to this subscription independently of the parent runbook.
-If jobs within the same Automation account work with more than one subscription, selecting a subscription in one job can change the currently selected subscription context for other jobs. To avoid this situation, use `Disable-AzContextAutosave ΓÇôScope Process` at the beginning of each runbook. This action only saves the context to that runbook execution.
+If jobs within the same Automation account work with more than one subscription, selecting a subscription in one job can change the currently selected subscription context for other jobs. To avoid this situation, use `Disable-AzContextAutosave -Scope Process` at the beginning of each runbook. This action only saves the context to that runbook execution.
### Example
The following example starts a child runbook with parameters and then waits for
```azurepowershell-interactive # Ensure that the runbook does not inherit an AzContext
-Disable-AzContextAutosave ΓÇôScope Process
+Disable-AzContextAutosave -Scope Process
# Connect to Azure with Run As account $ServicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$AzureContext = Set-AzContext -SubscriptionId $ServicePrincipalConnection.Subscr
$params = @{"VMName"="MyVM";"RepeatCount"=2;"Restart"=$true} Start-AzAutomationRunbook `
- ΓÇôAutomationAccountName 'MyAutomationAccount' `
- ΓÇôName 'Test-ChildRunbook' `
+ -AutomationAccountName 'MyAutomationAccount' `
+ -Name 'Test-ChildRunbook' `
-ResourceGroupName 'LabRG' ` -AzContext $AzureContext `
- ΓÇôParameters $params ΓÇôWait
+ -Parameters $params -Wait
``` ## Next steps
automation Automation Dsc Cd Chocolatey https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-cd-chocolatey.md
of VM extensions.
## Quick trip around the diagram Starting at the top, you write your code, build it, test it, then create an installation package. Chocolatey can handle various types of installation packages, such as MSI, MSU, ZIP. And you have the full power of PowerShell to do the actual installation if Chocolatey's native capabilities
-aren't up to it. Put the package into some place reachable ΓÇô a package repository. This usage
+aren't up to it. Put the package into some place reachable - a package repository. This usage
example uses a public folder in an Azure blob storage account, but it can be anywhere. Chocolatey works natively with NuGet servers and a few others for management of package metadata. [This article](https://github.com/chocolatey/choco/wiki/How-To-Host-Feed) describes the options. The usage example uses NuGet. A Nuspec is metadata about your packages. The Nuspec information is compiled into a NuPkg and stored on a NuGet server. When your configuration requests a package by name and references a NuGet server, the Chocolatey DSC resource on the VM grabs the package and installs it. You can also request a specific version of a package.
Full source for this usage example is in [this Visual Studio project](https://gi
At an authenticated (`Connect-AzAccount`) PowerShell command line: (can take a few minutes while the pull server is set up) ```azurepowershell-interactive
-New-AzResourceGroup ΓÇôName MY-AUTOMATION-RG ΓÇôLocation MY-RG-LOCATION-IN-QUOTES
-New-AzAutomationAccount ΓÇôResourceGroupName MY-AUTOMATION-RG ΓÇôLocation MY-RG-LOCATION-IN-QUOTES ΓÇôName MY-AUTOMATION-ACCOUNT
+New-AzResourceGroup -Name MY-AUTOMATION-RG -Location MY-RG-LOCATION-IN-QUOTES
+New-AzAutomationAccount -ResourceGroupName MY-AUTOMATION-RG -Location MY-RG-LOCATION-IN-QUOTES -Name MY-AUTOMATION-ACCOUNT
``` You can put your Automation account into any of the following regions (also known as locations): East US 2,
There's also a manual approach, used only once per resource, unless you want to
2. Install the integration module. ```azurepowershell-interactive
- Install-Module ΓÇôName MODULE-NAME` <ΓÇögrabs the module from the PowerShell Gallery
   Install-Module -Name MODULE-NAME   # grabs the module from the PowerShell Gallery
``` 3. Copy the module folder from **c:\Program Files\WindowsPowerShell\Modules\MODULE-NAME** to a temporary folder.
There's also a manual approach, used only once per resource, unless you want to
```azurepowershell-interactive New-AzAutomationModule ` -ResourceGroupName MY-AUTOMATION-RG -AutomationAccountName MY-AUTOMATION-ACCOUNT `
- -Name MODULE-NAME ΓÇôContentLinkUri 'https://STORAGE-URI/CONTAINERNAME/MODULE-NAME.zip'
+ -Name MODULE-NAME -ContentLinkUri 'https://STORAGE-URI/CONTAINERNAME/MODULE-NAME.zip'
``` The included example implements these steps for cChoco and xNetworking.
The included example implements these steps for cChoco and xNetworking.
There's nothing special about the first time you import your configuration into the pull server and compile. All later imports or compilations of the same configuration look exactly the same. Each time you update your package and need to push it out to production you do this step after ensuring the
-configuration file is correct ΓÇô including the new version of your package. Here's the configuration file **ISVBoxConfig.ps1**:
+configuration file is correct - including the new version of your package. Here's the configuration file **ISVBoxConfig.ps1**:
```powershell Configuration ISVBoxConfig
Here is the **New-ConfigurationScript.ps1** script (modified to use the Az modul
```powershell Import-AzAutomationDscConfiguration `
- -ResourceGroupName MY-AUTOMATION-RG ΓÇôAutomationAccountName MY-AUTOMATION-ACCOUNT `
+ -ResourceGroupName MY-AUTOMATION-RG -AutomationAccountName MY-AUTOMATION-ACCOUNT `
-SourcePath C:\temp\AzureAutomationDsc\ISVBoxConfig.ps1 `
- -Published ΓÇôForce
+ -Published -Force
$jobData = Start-AzAutomationDscCompilationJob `
- -ResourceGroupName MY-AUTOMATION-RG ΓÇôAutomationAccountName MY-AUTOMATION-ACCOUNT `
+ -ResourceGroupName MY-AUTOMATION-RG -AutomationAccountName MY-AUTOMATION-ACCOUNT `
-ConfigurationName ISVBoxConfig $compilationJobId = $jobData.Id Get-AzAutomationDscCompilationJob `
- -ResourceGroupName MY-AUTOMATION-RG ΓÇôAutomationAccountName MY-AUTOMATION-ACCOUNT `
+ -ResourceGroupName MY-AUTOMATION-RG -AutomationAccountName MY-AUTOMATION-ACCOUNT `
-Id $compilationJobId ```
automation Automation Dsc Extension History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-extension-history.md
This article provides information about each version of the Azure DSC VM extensi
- **Remarks:** This version uses DSC as included in Windows Server 2016 Technical Preview; for other Windows OSs, it installs the [Windows Management Framework 5.0 RTM](https://devblogs.microsoft.com/powershell/windows-management-framework-wmf-5-0-rtm-is-now-available-via-the-microsoft-update-catalog/) (installing WMF requires a reboot). - **New features:**
- - In extension version 2.14, changes to install WMF RTM were included. While upgrading from extension version 2.13.2.0 to 2.14.0.0, you may notice that some DSC cmdlets fail or your configuration fails with an error ΓÇô 'No Instance found with given property values'. For more information, see the [DSC release notes](/powershell/scripting/wmf/known-issues/known-issues-dsc). The workarounds for these issues have been added in 2.15 version.
+ - In extension version 2.14, changes to install WMF RTM were included. While upgrading from extension version 2.13.2.0 to 2.14.0.0, you may notice that some DSC cmdlets fail or your configuration fails with an error - 'No Instance found with given property values'. For more information, see the [DSC release notes](/powershell/scripting/wmf/known-issues/known-issues-dsc). The workarounds for these issues have been added in 2.15 version.
- If you already installed version 2.14 and are running into one of the above two issues, you need to perform these steps manually. In an elevated PowerShell session run the following commands: - `Remove-Item -Path $env:SystemRoot\system32\Configuration\DSCEngineCache.mof` - `mofcomp $env:windir\system32\wbem\DscCoreConfProv.mof`
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-graphical-authoring-intro.md
Use [comparison operators](/powershell/module/microsoft.powershell.core/about/ab
For example, the following condition determines if the virtual machine from an activity named `Get-AzureVM` is currently stopped. ```powershell-interactive
-$ActivityOutput["Get-AzureVM"].PowerState ΓÇôeq "Stopped"
+$ActivityOutput["Get-AzureVM"].PowerState -eq "Stopped"
``` The following condition determines if the same virtual machine is in any state other than stopped. ```powershell-interactive
-$ActivityOutput["Get-AzureVM"].PowerState ΓÇône "Stopped"
+$ActivityOutput["Get-AzureVM"].PowerState -ne "Stopped"
``` You can join multiple conditions in your runbook using a [logical operator](/powershell/module/microsoft.powershell.core/about/about_logical_operators), such as `-and` or `-or`. For example, the following condition checks to see if the virtual machine in the previous example is in a state of Stopped or Stopping. ```powershell-interactive
-($ActivityOutput["Get-AzureVM"].PowerState ΓÇôeq "Stopped") -or ($ActivityOutput["Get-AzureVM"].PowerState ΓÇôeq "Stopping")
+($ActivityOutput["Get-AzureVM"].PowerState -eq "Stopped") -or ($ActivityOutput["Get-AzureVM"].PowerState -eq "Stopping")
``` ### Use hashtables
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
To create the GPG keyring and keypair, use the Hybrid Runbook Worker [nxautomati
1. Use the sudo application to sign in as the **nxautomation** account. ```bash
- sudo su ΓÇô nxautomation
+ sudo su - nxautomation
``` 1. Once you are using **nxautomation**, generate the GPG keypair. GPG guides you through the steps. You must provide name, email address, expiration time, and passphrase. Then you wait until there is enough entropy on the machine for the key to be generated.
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/
Once you have configured signature validation, use the following GPG command to sign the runbook. ```bash
-gpg ΓÇô-clear-sign <runbook name>
+gpg --clear-sign <runbook name>
``` The signed runbook is called **<runbook name>.asc**.
When you start a runbook in the Azure portal, you're presented with the **Run on
When starting a runbook using PowerShell, use the `RunOn` parameter with the [Start-AzAutomationRunbook](/powershell/module/Az.Automation/Start-AzAutomationRunbook) cmdlet. The following example uses Windows PowerShell to start a runbook named **Test-Runbook** on a Hybrid Runbook Worker group named MyHybridGroup. ```azurepowershell-interactive
-Start-AzAutomationRunbook ΓÇôAutomationAccountName "MyAutomationAccount" ΓÇôName "Test-Runbook" -RunOn "MyHybridGroup"
+Start-AzAutomationRunbook -AutomationAccountName "MyAutomationAccount" -Name "Test-Runbook" -RunOn "MyHybridGroup"
``` ## Logging
automation Automation Runbook Output And Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-output-and-messages.md
Have your runbook write data to the output stream using [Write-Output](/powershe
```powershell #The following lines both write an object to the output stream.
-Write-Output ΓÇôInputObject $object
+Write-Output -InputObject $object
$object ```
Create a warning or error message using the [Write-Warning](/powershell/module/m
#The following lines create a warning message and then an error message that will suspend the runbook. $ErrorActionPreference = "Stop"
-Write-Warning ΓÇôMessage "This is a warning message."
-Write-Error ΓÇôMessage "This is an error message that will stop the runbook because of the preference variable."
+Write-Warning -Message "This is a warning message."
+Write-Error -Message "This is an error message that will stop the runbook because of the preference variable."
``` ### Write output to debug stream
The following code creates a verbose message using the [Write-Verbose](/powershe
```powershell #The following line creates a verbose message.
-Write-Verbose ΓÇôMessage "This is a verbose message."
+Write-Verbose -Message "This is a verbose message."
``` ## Handle progress records
The following example starts a sample runbook and then waits for it to complete.
```powershell $job = Start-AzAutomationRunbook -ResourceGroupName "ResourceGroup01" `
- ΓÇôAutomationAccountName "MyAutomationAccount" ΓÇôName "Test-Runbook"
+ -AutomationAccountName "MyAutomationAccount" -Name "Test-Runbook"
$doLoop = $true While ($doLoop) { $job = Get-AzAutomationJob -ResourceGroupName "ResourceGroup01" `
- ΓÇôAutomationAccountName "MyAutomationAccount" -Id $job.JobId
+ -AutomationAccountName "MyAutomationAccount" -Id $job.JobId
$status = $job.Status $doLoop = (($status -ne "Completed") -and ($status -ne "Failed") -and ($status -ne "Suspended") -and ($status -ne "Stopped")) } Get-AzAutomationJobOutput -ResourceGroupName "ResourceGroup01" `
- ΓÇôAutomationAccountName "MyAutomationAccount" -Id $job.JobId ΓÇôStream Output
+ -AutomationAccountName "MyAutomationAccount" -Id $job.JobId -Stream Output
# For more detailed job output, pipe the output of Get-AzAutomationJobOutput to Get-AzAutomationJobOutputRecord Get-AzAutomationJobOutput -ResourceGroupName "ResourceGroup01" `
- ΓÇôAutomationAccountName "MyAutomationAccount" -Id $job.JobId ΓÇôStream Any | Get-AzAutomationJobOutputRecord
+ -AutomationAccountName "MyAutomationAccount" -Id $job.JobId -Stream Any | Get-AzAutomationJobOutputRecord
``` ### Retrieve runbook output and messages in graphical runbooks
automation Automation Scenario Aws Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-scenario-aws-deployment.md
# Deploy an Amazon Web Services VM with a runbook
-In this article, you learn how you can leverage Azure Automation to provision a virtual machine in your Amazon Web Service (AWS) subscription and give that VM a specific name ΓÇô which AWS refers to as ΓÇ£taggingΓÇ¥ the VM.
+In this article, you learn how you can use Azure Automation to provision a virtual machine in your Amazon Web Services (AWS) subscription and give that VM a specific name - which AWS refers to as "tagging" the VM.
## Prerequisites
automation Automation Region Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/automation-region-dns-records.md
We recommend that you use the addresses listed when defining [exceptions](../aut
* [Azure IP Ranges and Service Tags - Azure public](https://www.microsoft.com/download/details.aspx?id=56519) * [Azure IP Ranges and Service Tags- Azure Government](https://www.microsoft.com/download/details.aspx?id=57063) * [Azure IP Ranges and Service Tags - Azure Germany](https://www.microsoft.com/download/details.aspx?id=57064)
-* [Azure IP Ranges and Service Tags ΓÇô Azure China Vianet 21](https://www.microsoft.com/download/details.aspx?id=57062)
+* [Azure IP Ranges and Service Tags - Azure China Vianet 21](https://www.microsoft.com/download/details.aspx?id=57062)
The IP address file lists the IP address ranges that are used in the Microsoft Azure datacenters. It includes compute, SQL, and storage ranges, and reflects currently deployed ranges and any upcoming changes to the IP ranges. New ranges that appear in the file aren't used in the datacenters for at least one week.
automation Automation Tutorial Runbook Textual Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual-powershell.md
As shown in the example below, the Run As connection is made with the [Connect-A
```powershell # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$connection = Get-AutomationConnection -Name AzureRunAsConnection
As shown in the example below, the Run As connection is made with the [Connect-A
```powershell # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$connection = Get-AutomationConnection -Name AzureRunAsConnection
Now that your runbook is authenticating to your Azure subscription, you can mana
```powershell # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$connection = Get-AutomationConnection -Name AzureRunAsConnection while(!($connectionResult) -and ($logonAttempt -le 10))
Your runbook currently starts the virtual machine that you hard-coded in the run
[string]$ResourceGroupName ) # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$connection = Get-AutomationConnection -Name AzureRunAsConnection while(!($connectionResult) -and ($logonAttempt -le 10))
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual.md
You've tested and published your runbook, but so far it doesn't do anything usef
```powershell-interactive # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$Conn = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID `
Now that your runbook is authenticating to the Azure subscription, you can manag
workflow MyFirstRunbook-Workflow { # Ensures that you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$Conn = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
Your runbook currently starts the VM that you have hardcoded in the runbook. It
[string]$ResourceGroupName ) # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$Conn = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runbooks.md
$getJobParams = @{
ResourceGroupName = 'MyResourceGroup' Runbookname = 'Test-Runbook' }
-$job = (Get-AzAutomationJob @getJobParams | Sort-Object LastModifiedDate ΓÇôDesc)[0]
+$job = (Get-AzAutomationJob @getJobParams | Sort-Object LastModifiedDate -Desc)[0]
$job | Select-Object JobId, Status, JobParameters $getOutputParams = @{
automation Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/certificates.md
$PfxCertPath = '.\MyCert.pfx'
$CertificatePassword = ConvertTo-SecureString -String 'P@$$w0rd' -AsPlainText -Force $ResourceGroup = "ResourceGroup01"
-New-AzAutomationCertificate -AutomationAccountName "MyAutomationAccount" -Name $certificateName -Path $PfxCertPath ΓÇôPassword $CertificatePassword -Exportable -ResourceGroupName $ResourceGroup
+New-AzAutomationCertificate -AutomationAccountName "MyAutomationAccount" -Name $certificateName -Path $PfxCertPath -Password $CertificatePassword -Exportable -ResourceGroupName $ResourceGroup
``` ### Create a new certificate with a Resource Manager template
The following example shows how to add a certificate to a cloud service in a run
$serviceName = 'MyCloudService' $cert = Get-AutomationCertificate -Name 'MyCertificate' $certPwd = Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
-ΓÇôAutomationAccountName "MyAutomationAccount" ΓÇôName 'MyCertPassword'
+-AutomationAccountName "MyAutomationAccount" -Name 'MyCertPassword'
Add-AzureCertificate -ServiceName $serviceName -CertToDeploy $cert ```
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
Include a synopsis, description, and help URI for every cmdlet in your module. I
switch ($PSCmdlet.ParameterSetName) { "UserAccount" {
- $cred = New-Object ΓÇôTypeName System.Management.Automation.PSCredential ΓÇôArgumentList $UserName, $Password
+ $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $UserName, $Password
Connect-Contoso -Credential $cred } "ConnectionObject" {
The following runbook example uses a Contoso connection asset called `ContosoCon
```powershell $contosoConnection = Get-AutomationConnection -Name 'ContosoConnection'
- $cred = New-Object ΓÇôTypeName System.Management.Automation.PSCredential ΓÇôArgumentList $contosoConnection.UserName, $contosoConnection.Password
+ $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $contosoConnection.UserName, $contosoConnection.Password
Connect-Contoso -Credential $cred } ```
automation Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/variables.md
$rgName = "ResourceGroup01"
$accountName = "MyAutomationAccount" $variableValue = "My String"
-New-AzAutomationVariable -ResourceGroupName $rgName ΓÇôAutomationAccountName $accountName ΓÇôName "MyStringVariable" ΓÇôEncrypted $false ΓÇôValue $variableValue
-$string = (Get-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName ΓÇôName "MyStringVariable").Value
+New-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
+-AutomationAccountName "MyAutomationAccount" -Name 'MyStringVariable' `
+-Encrypted $false -Value 'My String'
+$string = (Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
+-AutomationAccountName "MyAutomationAccount" -Name 'MyStringVariable').Value
``` The following example shows how to create a variable with a complex type and then retrieve its properties. In this case, a virtual machine object from [Get-AzVM](/powershell/module/Az.Compute/Get-AzVM) is used specifying a subset of its properties.
The following example shows how to create a variable with a complex type and the
$rgName = "ResourceGroup01" $accountName = "MyAutomationAccount"
-$vm = Get-AzVM -ResourceGroupName $rgName ΓÇôName "VM01" | Select Name, Location, Tags
-New-AzAutomationVariable -ResourceGroupName $rgName ΓÇôAutomationAccountName $accountName ΓÇôName "MyComplexVariable" ΓÇôEncrypted $false ΓÇôValue $vm
+$vm = Get-AzVM -ResourceGroupName "ResourceGroup01" -Name "VM01" | Select Name, Location, Extensions
+New-AzAutomationVariable -ResourceGroupName "ResourceGroup01" -AutomationAccountName "MyAutomationAccount" -Name "MyComplexVariable" -Encrypted $false -Value $vm
-$vmValue = Get-AzAutomationVariable -ResourceGroupName $rgName ΓÇôAutomationAccountName $accountName ΓÇôName "MyComplexVariable"
+$vmValue = Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
+-AutomationAccountName "MyAutomationAccount" -Name "MyComplexVariable"
$vmName = $vmValue.Value.Name $vmExtensions = $vmValue.Value.Extensions
Write-Output "Runbook has been run $numberOfRunnings times."
for ($i = 1; $i -le $numberOfIterations; $i++) { Write-Output "$i`: $sampleMessage" }
-Set-AutomationVariable ΓÇôName numberOfRunnings ΓÇôValue ($numberOfRunnings += 1)
+Set-AutomationVariable -Name numberOfRunnings -Value ($numberOfRunnings += 1)
``` # [Python 2](#tab/python2)
automation Start Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/start-runbooks.md
$runbookName = "Test-Runbook"
$ResourceGroup = "ResourceGroup01" $AutomationAcct = "MyAutomationAccount"
-$job = Start-AzAutomationRunbook ΓÇôAutomationAccountName $AutomationAcct -Name $runbookName -ResourceGroupName $ResourceGroup
+$job = Start-AzAutomationRunbook -AutomationAccountName $AutomationAcct -Name $runbookName -ResourceGroupName $ResourceGroup
$doLoop = $true While ($doLoop) {
- $job = Get-AzAutomationJob ΓÇôAutomationAccountName $AutomationAcct -Id $job.JobId -ResourceGroupName $ResourceGroup
+ $job = Get-AzAutomationJob -AutomationAccountName $AutomationAcct -Id $job.JobId -ResourceGroupName $ResourceGroup
$status = $job.Status $doLoop = (($status -ne "Completed") -and ($status -ne "Failed") -and ($status -ne "Suspended") -and ($status -ne "Stopped")) }
-Get-AzAutomationJobOutput ΓÇôAutomationAccountName $AutomationAcct -Id $job.JobId -ResourceGroupName $ResourceGroup ΓÇôStream Output
+Get-AzAutomationJobOutput -AutomationAccountName $AutomationAcct -Id $job.JobId -ResourceGroupName $ResourceGroup -Stream Output
``` If the runbook requires parameters, then you must provide them as a [hashtable](/powershell/module/microsoft.powershell.core/about/about_hash_tables). The key of the hashtable must match the parameter name and the value is the parameter value. The following example shows how to start a runbook with two string parameters named FirstName and LastName, an integer named RepeatCount, and a boolean parameter named Show. For more information on parameters, see [Runbook Parameters](#work-with-runbook-parameters). ```azurepowershell-interactive $params = @{"FirstName"="Joe";"LastName"="Smith";"RepeatCount"=2;"Show"=$true}
-Start-AzAutomationRunbook ΓÇôAutomationAccountName "MyAutomationAccount" ΓÇôName "Test-Runbook" -ResourceGroupName "ResourceGroup01" ΓÇôParameters $params
+Start-AzAutomationRunbook -AutomationAccountName "MyAutomationAccount" -Name "Test-Runbook" -ResourceGroupName "ResourceGroup01" -Parameters $params
``` ## Next steps
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/hybrid-runbook-worker.md
The following issues are possible causes:
#### Resolution ##### Mistyped workspace ID or key
-To verify if the agent's workspace ID or workspace key was mistyped, see [Adding or removing a workspace ΓÇô Windows agent](../../azure-monitor/agents/agent-manage.md#windows-agent) for the Windows agent or [Adding or removing a workspace ΓÇô Linux agent](../../azure-monitor/agents/agent-manage.md#linux-agent) for the Linux agent. Make sure to select the full string from the Azure portal, and copy and paste it carefully.
+To verify if the agent's workspace ID or workspace key was mistyped, see [Adding or removing a workspace - Windows agent](../../azure-monitor/platform/agent-manage.md#windows-agent) for the Windows agent or [Adding or removing a workspace - Linux agent](../../azure-monitor/platform/agent-manage.md#linux-agent) for the Linux agent. Make sure to select the full string from the Azure portal, and copy and paste it carefully.
##### Configuration not downloaded
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/runbooks.md
To determine what's wrong, follow these steps:
```powershell $Cred = Get-Credential #Using Azure Service Management
- Add-AzureAccount ΓÇôCredential $Cred
+ Add-AzureAccount -Credential $Cred
#Using Azure Resource Manager
- Connect-AzAccount ΓÇôCredential $Cred
+ Connect-AzAccount -Credential $Cred
``` 1. If your authentication fails locally, you haven't set up your Azure Active Directory (Azure AD) credentials properly. To get the Azure AD account set up correctly, see the article [Authenticate to Azure using Azure Active Directory](../automation-use-azure-ad.md).
Follow these steps to determine if you've authenticated to Azure and have access
1. To make sure that your script works standalone, test it outside of Azure Automation. 1. Make sure that your script runs the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet before running the `Select-*` cmdlet.
-1. Add `Disable-AzContextAutosave ΓÇôScope Process` to the beginning of your runbook. This cmdlet ensures that any credentials apply only to the execution of the current runbook.
+1. Add `Disable-AzContextAutosave -Scope Process` to the beginning of your runbook. This cmdlet ensures that any credentials apply only to the execution of the current runbook.
1. If you still see the error message, modify your code by adding the `AzContext` parameter for `Connect-AzAccount`, and then execute the code. ```powershell
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
$Conn = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
The subscription context might be lost when a runbook invokes multiple runbooks.
* To avoid referencing the wrong subscription, disable context saving in your Automation runbooks by using the following code at the start of each runbook. ```azurepowershell-interactive
- Disable-AzContextAutosave ΓÇôScope Process
+ Disable-AzContextAutosave -Scope Process
``` * The Azure PowerShell cmdlets support the `-DefaultProfile` parameter. This was added to all Az and AzureRm cmdlets to support running multiple PowerShell scripts in the same process, allowing you to specify the context and which subscription to use for each cmdlet. With your runbooks, you should save the context object in your runbook when the runbook is created (that is, when an account signs in) and every time it's changed, and reference the context when you specify an Az cmdlet.
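The `-DefaultProfile` pattern described above is ordinary explicit context passing: each cmdlet call names the context it runs under instead of relying on ambient process state. As a language-neutral sketch of that idea (the `AzContext` class and `get_vm` function here are invented for illustration and are not Azure SDK types):

```python
# Sketch of explicit context passing, the idea behind -DefaultProfile:
# rather than depending on ambient, process-wide state, each call states
# which subscription context it should run under. AzContext and get_vm
# are hypothetical names used only for this illustration.
class AzContext:
    def __init__(self, subscription):
        self.subscription = subscription

def get_vm(name, default_profile):
    # The call is unambiguous even when several contexts coexist
    # in the same process.
    return f"{default_profile.subscription}/{name}"

ctx_a = AzContext("subscription-A")
ctx_b = AzContext("subscription-B")
print(get_vm("vm1", default_profile=ctx_a))  # subscription-A/vm1
print(get_vm("vm1", default_profile=ctx_b))  # subscription-B/vm1
```

Saving the context object once at sign-in and passing it to every subsequent call is what keeps child runbooks from silently inheriting the wrong subscription.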
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 03/19/2021 Last updated : 04/01/2021 + # Update Management overview You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers.
+As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Azure Lighthouse allows you to perform operations at scale across several Azure Active Directory (Azure AD) tenants at once, making management tasks like Update Management more efficient across those tenants you're responsible for.
+ > [!NOTE] > You can't use a machine configured with Update Management to run custom scripts from Azure Automation. This machine can only run the Microsoft-signed update script.
You can use Update Management in Azure Automation to manage operating system upd
To download and install available *Critical* and *Security* patches automatically on your Azure VM, review [Automatic VM guest patching](../../virtual-machines/automatic-vm-guest-patching.md) for Windows VMs.
-Before deploying Update Management and enabling your machines for management, make sure that you understand the information in the following sections.
+Before deploying Update Management and enabling your machines for management, make sure that you understand the information in the following sections.
## About Update Management
The following diagram illustrates how Update Management assesses and applies sec
![Update Management workflow](./media/overview/update-mgmt-updateworkflow.png)
-Update Management can be used to natively deploy to machines in multiple subscriptions in the same tenant.
+Update Management can be used to natively deploy to machines in multiple subscriptions in the same tenant, or across tenants using [Azure delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md).
After a package is released, it takes 2 to 3 hours for the patch to show up for Linux machines for assessment. For Windows machines, it takes 12 to 15 hours for the patch to show up for assessment after it's been released. When a machine completes a scan for update compliance, the agent forwards the information in bulk to Azure Monitor logs. On a Windows machine, the compliance scan is run every 12 hours by default. For a Linux machine, the compliance scan is performed every hour by default. If the Log Analytics agent is restarted, a compliance scan is started within 15 minutes.
VMs created from the on-demand Red Hat Enterprise Linux (RHEL) images that are a
## Permissions
-To create and manage update deployments, you need specific permissions. To learn about these permissions, see [Role-based access ΓÇô Update Management](../automation-role-based-access-control.md#update-management-permissions).
+To create and manage update deployments, you need specific permissions. To learn about these permissions, see [Role-based access - Update Management](../automation-role-based-access-control.md#update-management-permissions).
## Update Management components
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/pre-post-scripts.md
Pre-tasks and post-tasks run as runbooks and don't natively run on your Azure VM
* A Run As account * A runbook you want to run
-To interact with Azure machines, you should use the [Invoke-AzVMRunCommand](/powershell/module/az.compute/invoke-azvmruncommand) cmdlet to interact with your Azure VMs. For an example of how to do this, see the runbook example [Update Management ΓÇô run script with Run command](https://github.com/azureautomation/update-management-run-script-with-run-command).
+To interact with Azure machines, use the [Invoke-AzVMRunCommand](/powershell/module/az.compute/invoke-azvmruncommand) cmdlet. For an example of how to do this, see the runbook example [Update Management - run script with Run command](https://github.com/azureautomation/update-management-run-script-with-run-command).
### Interact with non-Azure machines
Pre-tasks and post-tasks run in the Azure context and don't have access to non-A
* A runbook you want to run locally * A parent runbook
-To interact with non-Azure machines, a parent runbook is run in the Azure context. This runbook calls a child runbook with the [Start-AzAutomationRunbook](/powershell/module/Az.Automation/Start-AzAutomationRunbook) cmdlet. You must specify the `RunOn` parameter and provide the name of the Hybrid Runbook Worker for the script to run on. See the runbook example [Update Management ΓÇô run script locally](https://github.com/azureautomation/update-management-run-script-locally).
+To interact with non-Azure machines, a parent runbook is run in the Azure context. This runbook calls a child runbook with the [Start-AzAutomationRunbook](/powershell/module/Az.Automation/Start-AzAutomationRunbook) cmdlet. You must specify the `RunOn` parameter and provide the name of the Hybrid Runbook Worker for the script to run on. See the runbook example [Update Management - run script locally](https://github.com/azureautomation/update-management-run-script-locally).
## Abort patch deployment
Write-Output $context
#Example: How to create and write to a variable using the pre-script: <# #Create variable named after this run so it can be retrieved
-New-AzAutomationVariable -ResourceGroupName $ResourceGroup ΓÇôAutomationAccountName $AutomationAccount ΓÇôName $runId -Value "" ΓÇôEncrypted $false
+New-AzAutomationVariable -ResourceGroupName $ResourceGroup -AutomationAccountName $AutomationAccount -Name $runId -Value "" -Encrypted $false
#Set value of variable
-Set-AutomationVariable ΓÇôName $runId -Value $vmIds
+Set-AutomationVariable -Name $runId -Value $vmIds
#> #Example: How to retrieve information from a variable set during the pre-script
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-private-link.md
Title: Azure Cache for Redis with Azure Private Link (Preview)
+ Title: Azure Cache for Redis with Azure Private Link
description: Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link. In this article, you will learn how to create an Azure Cache, an Azure Virtual Network, and a Private Endpoint using the Azure portal. Previously updated : 10/14/2020 Last updated : 3/31/2021
-# Azure Cache for Redis with Azure Private Link (Public Preview)
+# Azure Cache for Redis with Azure Private Link
In this article, you'll learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You'll also learn how to add a private endpoint to an existing Azure Cache for Redis instance. Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link.
Azure Private Endpoint is a network interface that connects you privately and se
* Azure subscription - [create one for free](https://azure.microsoft.com/free/) > [!IMPORTANT]
-> To use private endpoints, your Azure Cache for Redis instance needs to have been created after July 28th, 2020.
-> Currently, geo-replication, firewall rules, portal console support, multiple endpoints per clustered cache,
-> persistence to firewall and VNet injected caches is not supported.
+> Currently, zone redundancy, portal console support, and persistence to storage accounts behind a firewall are not supported.
> >
It takes a while for the cache to create. You can monitor progress on the Azure
> [!IMPORTANT] > > There is a `publicNetworkAccess` flag which is `Disabled` by default.
-> This flag is meant to allow you to optionally allow both public and private endpoint access to the cache if it is set to `Enabled`. If set to `Disabled`, it will only allow private endpoint access. You can set the value to `Disabled` or `Enabled` with the following PATCH request. Edit the value to reflect which flag you want for your cache.
-> ```http
-> PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
-> { "properties": {
-> "publicNetworkAccess":"Disabled"
-> }
-> }
-> ```
+> When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. For details on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).
>-
-> [!IMPORTANT]
->
-> To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled` and there can only be one private endpoint connection.
> ## Create a private endpoint with an existing Azure Cache for Redis instance
To create a private endpoint, follow these steps.
2. Select the cache instance you want to add a private endpoint to.
-3. On the left side of the screen, select **(PREVIEW) Private Endpoint**.
+3. On the left side of the screen, select **Private Endpoint**.
4. Click the **Private Endpoint** button to create your private endpoint.
To create a private endpoint, follow these steps.
13. After the green **Validation passed** message appears, select **Create**.
+> [!IMPORTANT]
+>
+> There is a `publicNetworkAccess` flag which is `Disabled` by default.
+> When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. For details on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).
+>
+>
++ ## FAQ ### Why can't I connect to a private endpoint?
-If your cache is already a VNet injected cache, private endpoints cannot be used with your cache instance. If your cache instance is using an unsupported feature (listed below), you won't be able to connect to your private endpoint instance. In addition, cache instances need to be created after July 27th to use private endpoints.
+If your cache is already a VNet injected cache, private endpoints cannot be used with your cache instance. If your cache instance is using an unsupported feature (listed below), you won't be able to connect to your private endpoint instance.
### What features are not supported with private endpoints?
-Geo-replication, firewall rules, portal console support, multiple endpoints per clustered cache, persistence to firewall rules and zone redundancy.
+Currently, zone redundancy, portal console support, and persistence to firewall storage accounts are not supported.
### How can I change my private endpoint to be disabled or enabled from public network access?
There is a `publicNetworkAccess` flag which is `Disabled` by default.
-This flag is meant to allow you to optionally allow both public and private endpoint access to the cache if it is set to `Enabled`. If set to `Disabled`, it will only allow private endpoint access. You can set the value to `Disabled` or `Enabled` with the following PATCH request. Edit the value to reflect which flag you want for your cache.
+When this flag is set to `Enabled`, the cache allows both public and private endpoint access. When set to `Disabled`, it allows only private endpoint access. You can set the value to `Disabled` or `Enabled` in the Azure portal or with a REST API PATCH request.
+
+To change the value in the Azure portal, follow these steps.
+
+1. In the Azure portal, search for **Azure Cache for Redis** and press enter or select it from the search suggestions.
+
+2. Select the cache instance for which you want to change the public network access value.
+
+3. On the left side of the screen, select **Private Endpoint**.
+
+4. Click the **Enable public network access** button.
+
+To change the value with a REST API PATCH request, use the following request, editing the value to reflect the flag you want for your cache.
```http
PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01

{
    "properties": {
        "publicNetworkAccess": "Disabled"
    }
}
```
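If you prefer to script the same call, the request above can be built with nothing but the Python standard library. This is a minimal sketch: the subscription ID, resource group, cache name, and bearer token passed in below are placeholders you must supply (for example from `az account get-access-token`), and sending the request is left to the caller.

```python
import json
import urllib.request

def build_public_network_access_patch(subscription, resource_group, cache,
                                      token, value="Disabled"):
    """Build the management REST API PATCH request that toggles the
    publicNetworkAccess flag. All identifiers and the bearer token are
    caller-supplied placeholders."""
    url = (
        "https://management.azure.com/subscriptions/" + subscription +
        "/resourceGroups/" + resource_group +
        "/providers/Microsoft.Cache/Redis/" + cache +
        "?api-version=2020-06-01"
    )
    body = json.dumps({"properties": {"publicNetworkAccess": value}}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )

# The caller would send the request with urllib.request.urlopen(request).
request = build_public_network_access_patch(
    "00000000-0000-0000-0000-000000000000", "my-rg", "my-cache", "my-token")
```

Building the request separately from sending it makes the payload easy to inspect before it touches a live cache.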
+### How can I have multiple endpoints in different virtual networks?
+To have multiple private endpoints in different virtual networks, the private DNS zone needs to be manually configured to the multiple virtual networks _before_ creating the private endpoint. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
+
+### What happens if I delete all the private endpoints on my cache?
+Once you delete the private endpoints on your cache, your cache instance may become unreachable until either you explicitly enable public network access or you add another private endpoint. You can change the `publicNetworkAccess` flag in the Azure portal or through a REST API PATCH request. For details on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).
+### Are network security groups (NSG) enabled for private endpoints?
+No, they are disabled for private endpoints. While subnets containing the private endpoint can have NSGs associated with them, the rules are not effective on traffic processed by the private endpoint. You must have [network policies enforcement disabled](../private-link/disable-private-endpoint-network-policy.md) to deploy private endpoints in a subnet. NSGs are still enforced on other workloads hosted on the same subnet. Routes on any client subnet use a /32 prefix, so changing the default routing behavior requires a similar UDR. Control the traffic by using NSG rules for outbound traffic on source clients. Deploy individual routes with a /32 prefix to override private endpoint routes. NSG flow logs and monitoring information for outbound connections are still supported and can be used.
-### Can I use firewall rules with private endpoints?
-No, this is a current limitation of private endpoints. The private endpoint will not work properly if firewall rules are configured on the cache.
-
-### How can I connect to a clustered cache?
-`publicNetworkAccess` needs to be set to `Disabled` and there can only be one private endpoint connection.
### Since my private endpoint instance is not in my VNet, how is it associated with my VNet?
It is only linked to your VNet. Since it is not in your VNet, NSG rules do not need to be modified for dependent endpoints.

### How can I migrate my VNet injected cache to a private endpoint cache?
-You will need to delete your VNet injected cache and create a new cache instance with a private endpoint.
+You will need to delete your VNet injected cache and create a new cache instance with a private endpoint. For more information, see [migrate to Azure Cache for Redis](cache-migration-guide.md).
## Next steps

* To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md).
-* To compare various network isolation options for your cache instance, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
+* To compare various network isolation options for your cache instance, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
azure-cache-for-redis Cache Web App Aspnet Core Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-aspnet-core-howto.md
+
+ Title: Create an ASP.NET Core web app with Azure Cache for Redis
+description: In this quickstart, you learn how to create an ASP.NET Core web app with Azure Cache for Redis.
+++
+ms.devlang: dotnet
++ Last updated : 03/31/2021
+#Customer intent: As an ASP.NET Core developer, new to Azure Cache for Redis, I want to create a new ASP.NET Core web app that uses Azure Cache for Redis.
+
+# Quickstart: Use Azure Cache for Redis with an ASP.NET Core web app
+
+In this quickstart, you incorporate Azure Cache for Redis into an ASP.NET Core web application that stores and retrieves data from the cache.
+
+## Skip to the code on GitHub
+
+If you want to skip straight to the code, see the [ASP.NET Core quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/aspnet-core) on GitHub.
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [.NET Core SDK](https://dotnet.microsoft.com/download)
+
+## Create a cache
+++
+Make a note of the **HOST NAME** and the **Primary** access key. You will use these values later to construct the *CacheConnection* secret.
+
+## Create an ASP.NET Core web app
+
+Open a new command window and execute the following command to create a new ASP.NET Core Web App (Model-View-Controller):
+
+```dotnetcli
+dotnet new mvc -o ContosoTeamStats
+```
+
+In your command window, change to the new *ContosoTeamStats* project directory.
++
+Execute the following command to add the *Microsoft.Extensions.Configuration.UserSecrets* package to the project:
+
+```dotnetcli
+dotnet add package Microsoft.Extensions.Configuration.UserSecrets
+```
+
+Execute the following command to restore your packages:
+
+```dotnetcli
+dotnet restore
+```
+
+In your command window, execute the following command to store a new secret named *CacheConnection*, after replacing the placeholders (including angle brackets) for your cache name and primary access key:
+
+```dotnetcli
+dotnet user-secrets set CacheConnection "<cache name>.redis.cache.windows.net,abortConnect=false,ssl=true,allowAdmin=true,password=<primary-access-key>"
+```
+
+## Configure the cache client
+
+In this section, you will configure the application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
+
+In your command window, execute the following command in the *ContosoTeamStats* project directory:
+
+```dotnetcli
+dotnet add package StackExchange.Redis
+```
+
+Once the installation is complete, the *StackExchange.Redis* cache client is available to use with your project.
+
+## Update the HomeController and Layout
+
+Add the following `using` statements to *Controllers\HomeController.cs*:
+
+```csharp
+using System.Net.Sockets;
+using System.Text;
+using System.Threading;
+
+using Microsoft.Extensions.Configuration;
+
+using StackExchange.Redis;
+```
+
+Replace:
+
+```csharp
+private readonly ILogger<HomeController> _logger;
+
+public HomeController(ILogger<HomeController> logger)
+{
+ _logger = logger;
+}
+```
+
+with:
+
+```csharp
+private readonly ILogger<HomeController> _logger;
+private static IConfiguration Configuration { get; set; }
+
+public HomeController(ILogger<HomeController> logger, IConfiguration configuration)
+{
+ _logger = logger;
+ if (Configuration == null)
+ Configuration = configuration;
+}
+```
+
+Add the following members to the `HomeController` class to support a new `RedisCache` action that runs some commands against the new cache.
+
+```csharp
+public ActionResult RedisCache()
+{
+ ViewBag.Message = "A simple example with Azure Cache for Redis on ASP.NET Core.";
+
+ IDatabase cache = GetDatabase();
+
+ // Perform cache operations using the cache object...
+
+ // Simple PING command
+ ViewBag.command1 = "PING";
+ ViewBag.command1Result = cache.Execute(ViewBag.command1).ToString();
+
+ // Simple get and put of integral data types into the cache
+ ViewBag.command2 = "GET Message";
+ ViewBag.command2Result = cache.StringGet("Message").ToString();
+
+ ViewBag.command3 = "SET Message \"Hello! The cache is working from ASP.NET Core!\"";
+ ViewBag.command3Result = cache.StringSet("Message", "Hello! The cache is working from ASP.NET Core!").ToString();
+
+ // Demonstrate "SET Message" executed as expected...
+ ViewBag.command4 = "GET Message";
+ ViewBag.command4Result = cache.StringGet("Message").ToString();
+
+ // Get the client list, useful to see if connection list is growing...
+ // Note that this requires allowAdmin=true in the connection string
+ ViewBag.command5 = "CLIENT LIST";
+ StringBuilder sb = new StringBuilder();
+ var endpoint = (System.Net.DnsEndPoint)GetEndPoints()[0];
+ IServer server = GetServer(endpoint.Host, endpoint.Port);
+ ClientInfo[] clients = server.ClientList();
+
+ sb.AppendLine("Cache response :");
+ foreach (ClientInfo client in clients)
+ {
+ sb.AppendLine(client.Raw);
+ }
+
+ ViewBag.command5Result = sb.ToString();
+
+ return View();
+}
+
+private const string SecretName = "CacheConnection";
+
+private static long lastReconnectTicks = DateTimeOffset.MinValue.UtcTicks;
+private static DateTimeOffset firstErrorTime = DateTimeOffset.MinValue;
+private static DateTimeOffset previousErrorTime = DateTimeOffset.MinValue;
+
+private static readonly object reconnectLock = new object();
+
+// In general, let StackExchange.Redis handle most reconnects,
+// so limit the frequency of how often ForceReconnect() will
+// actually reconnect.
+public static TimeSpan ReconnectMinFrequency => TimeSpan.FromSeconds(60);
+
+// If errors continue for longer than the below threshold, then the
+// multiplexer seems to not be reconnecting, so ForceReconnect() will
+// re-create the multiplexer.
+public static TimeSpan ReconnectErrorThreshold => TimeSpan.FromSeconds(30);
+
+public static int RetryMaxAttempts => 5;
+
+private static Lazy<ConnectionMultiplexer> lazyConnection = CreateConnection();
+
+public static ConnectionMultiplexer Connection
+{
+ get
+ {
+ return lazyConnection.Value;
+ }
+}
+
+private static Lazy<ConnectionMultiplexer> CreateConnection()
+{
+ return new Lazy<ConnectionMultiplexer>(() =>
+ {
+ string cacheConnection = Configuration[SecretName];
+ return ConnectionMultiplexer.Connect(cacheConnection);
+ });
+}
+
+private static void CloseConnection(Lazy<ConnectionMultiplexer> oldConnection)
+{
+ if (oldConnection == null)
+ return;
+
+ try
+ {
+ oldConnection.Value.Close();
+ }
+ catch (Exception)
+ {
+ // Example error condition: if accessing oldConnection.Value causes a connection attempt and that fails.
+ }
+}
+
+/// <summary>
+/// Force a new ConnectionMultiplexer to be created.
+/// NOTES:
+/// 1. Users of the ConnectionMultiplexer MUST handle ObjectDisposedExceptions, which can now happen as a result of calling ForceReconnect().
+/// 2. Don't call ForceReconnect for Timeouts, just for RedisConnectionExceptions or SocketExceptions.
+/// 3. Call this method every time you see a connection exception. The code will:
+/// a. wait to reconnect for at least the "ReconnectErrorThreshold" time of repeated errors before actually reconnecting
+/// b. not reconnect more frequently than configured in "ReconnectMinFrequency"
+/// </summary>
+public static void ForceReconnect()
+{
+ var utcNow = DateTimeOffset.UtcNow;
+ long previousTicks = Interlocked.Read(ref lastReconnectTicks);
+ var previousReconnectTime = new DateTimeOffset(previousTicks, TimeSpan.Zero);
+ TimeSpan elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ // If multiple threads call ForceReconnect at the same time, we only want to honor one of them.
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return;
+
+ lock (reconnectLock)
+ {
+ utcNow = DateTimeOffset.UtcNow;
+ elapsedSinceLastReconnect = utcNow - previousReconnectTime;
+
+ if (firstErrorTime == DateTimeOffset.MinValue)
+ {
+ // We haven't seen an error since last reconnect, so set initial values.
+ firstErrorTime = utcNow;
+ previousErrorTime = utcNow;
+ return;
+ }
+
+ if (elapsedSinceLastReconnect < ReconnectMinFrequency)
+ return; // Some other thread made it through the check and the lock, so nothing to do.
+
+ TimeSpan elapsedSinceFirstError = utcNow - firstErrorTime;
+ TimeSpan elapsedSinceMostRecentError = utcNow - previousErrorTime;
+
+ bool shouldReconnect =
+ elapsedSinceFirstError >= ReconnectErrorThreshold // Make sure we gave the multiplexer enough time to reconnect on its own if it could.
+ && elapsedSinceMostRecentError <= ReconnectErrorThreshold; // Make sure we aren't working on stale data (e.g. if there was a gap in errors, don't reconnect yet).
+
+ // Update the previousErrorTime timestamp to be now (e.g. this reconnect request).
+ previousErrorTime = utcNow;
+
+ if (!shouldReconnect)
+ return;
+
+ firstErrorTime = DateTimeOffset.MinValue;
+ previousErrorTime = DateTimeOffset.MinValue;
+
+ Lazy<ConnectionMultiplexer> oldConnection = lazyConnection;
+ CloseConnection(oldConnection);
+ lazyConnection = CreateConnection();
+ Interlocked.Exchange(ref lastReconnectTicks, utcNow.UtcTicks);
+ }
+}
+
+// In real applications, consider using a framework such as
+// Polly to make it easier to customize the retry approach.
+private static T BasicRetry<T>(Func<T> func)
+{
+ int reconnectRetry = 0;
+ int disposedRetry = 0;
+
+ while (true)
+ {
+ try
+ {
+ return func();
+ }
+ catch (Exception ex) when (ex is RedisConnectionException || ex is SocketException)
+ {
+ reconnectRetry++;
+ if (reconnectRetry > RetryMaxAttempts)
+ throw;
+ ForceReconnect();
+ }
+ catch (ObjectDisposedException)
+ {
+ disposedRetry++;
+ if (disposedRetry > RetryMaxAttempts)
+ throw;
+ }
+ }
+}
+
+public static IDatabase GetDatabase()
+{
+ return BasicRetry(() => Connection.GetDatabase());
+}
+
+public static System.Net.EndPoint[] GetEndPoints()
+{
+ return BasicRetry(() => Connection.GetEndPoints());
+}
+
+public static IServer GetServer(string host, int port)
+{
+ return BasicRetry(() => Connection.GetServer(host, port));
+}
+```
+
+Open *Views\Shared\\_Layout.cshtml*.
+
+Replace:
+
+```cshtml
+<a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="Index">ContosoTeamStats</a>
+```
+
+with:
+
+```cshtml
+<a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="RedisCache">Azure Cache for Redis Test</a>
+```
+
+## Add a new RedisCache view and update the styles
+
+Create a new file *Views\Home\RedisCache.cshtml* with the following content:
+
+```cshtml
+@{
+ ViewBag.Title = "Azure Cache for Redis Test";
+}
+
+<h2>@ViewBag.Title.</h2>
+<h3>@ViewBag.Message</h3>
+<br /><br />
+<table border="1" cellpadding="10" class="redis-results">
+ <tr>
+ <th>Command</th>
+ <th>Result</th>
+ </tr>
+ <tr>
+ <td>@ViewBag.command1</td>
+ <td><pre>@ViewBag.command1Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command2</td>
+ <td><pre>@ViewBag.command2Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command3</td>
+ <td><pre>@ViewBag.command3Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command4</td>
+ <td><pre>@ViewBag.command4Result</pre></td>
+ </tr>
+ <tr>
+ <td>@ViewBag.command5</td>
+ <td><pre>@ViewBag.command5Result</pre></td>
+ </tr>
+</table>
+```
+
+Add the following lines to *wwwroot\css\site.css*:
+
+```css
+.redis-results pre {
+ white-space: pre-wrap;
+}
+```
+
+## Run the app locally
+
+Execute the following command in your command window to build the app:
+
+```dotnetcli
+dotnet build
+```
+
+Then run the app with the following command:
+
+```dotnetcli
+dotnet run
+```
+
+Browse to `https://localhost:5001` in your web browser.
+
+Select **Azure Cache for Redis Test** in the navigation bar of the web page to test cache access.
+
+In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
+
+![Simple test completed local](./media/cache-web-app-aspnet-core-howto/cache-simple-test-complete-local.png)
+
+## Clean up resources
+
+If you're continuing to the next tutorial, you can keep the resources that you created in this quickstart and reuse them.
+
+Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources that you created in this quickstart to avoid charges.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+
+### To delete a resource group
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource groups**.
+
+2. In the **Filter by name...** box, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group, in the results list, select **...**, and then select **Delete resource group**.
+
+ ![Delete](./media/cache-web-app-howto/cache-delete-resource-group.png)
+
+You're asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and then select **Delete**.
+
+After a few moments, the resource group and all of its resources are deleted.
+
+## Next steps
+
+For information on deploying to Azure, see:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](/azure/app-service/tutorial-dotnetcore-sqldb-app)
+
+For information about storing the cache connection secret in Azure Key Vault, see:
+
+> [!div class="nextstepaction"]
+> [Azure Key Vault configuration provider in ASP.NET Core](/aspnet/core/security/key-vault-configuration)
+
+Want to scale your cache from a lower tier to a higher tier?
+
+> [!div class="nextstepaction"]
+> [How to Scale Azure Cache for Redis](./cache-how-to-scale.md)
+
+Want to optimize and save on your cloud spending?
+
+> [!div class="nextstepaction"]
+> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
azure-functions Functions Runtime Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-runtime-install.md
- Title: Azure Functions Runtime Installation
-description: How to Install the Azure Functions Runtime preview 2
--- Previously updated : 11/28/2017---
-# Install the Azure Functions Runtime preview 2
--
-If you would like to install the Azure Functions Runtime preview 2, follow these steps:
-
-1. Ensure your machine passes the minimum requirements.
-1. Download the [Azure Functions Runtime Preview Installer](https://aka.ms/azafrv2).
-1. Uninstall the Azure Functions Runtime preview 1.
-1. Install the Azure Functions Runtime preview 2.
-1. Complete the configuration of the Azure Functions Runtime preview 2.
-1. Create your first function in Azure Functions Runtime Preview
-
-## Prerequisites
-
-Before you install the Azure Functions Runtime preview, you must have the following resources available:
-
-1. A machine running Microsoft Windows Server 2016 or Microsoft Windows 10 Creators Update (Professional or Enterprise Edition).
-1. A SQL Server instance running within your network. Minimum edition required is SQL Server Express.
-
-## Uninstall Previous Version
-
-If you have previously installed the Azure Functions Runtime preview, you must uninstall before installing the latest release. Uninstall the Azure Functions Runtime preview by removing the program in Add/Remove Programs in Windows.
-
-## Install the Azure Functions Runtime Preview
-
-The Azure Functions Runtime Preview Installer guides you through the installation of the Azure Functions Runtime preview Management and Worker Roles. It is possible to install the Management and Worker role on the same machine. However, as you add more function apps, you must deploy more worker roles on additional machines to be able to scale your functions onto multiple workers.
-
-## Install the Management and Worker Role on the same machine
-
-1. Run the Azure Functions Runtime Preview Installer.
-
- ![Azure Functions Runtime preview installer][1]
-
-1. Click **Next**.
-1. Once you have read the terms of the **EULA**, **check the box** to accept the terms and click **Next** to advance.
-1. Select the roles you want to install on this machine **Functions Management Role** and/or **Functions Worker Role** and click **Next**.
-
- ![Azure Functions Runtime preview installer - role selection][3]
-
- > [!NOTE]
- > You can install the **Functions Worker Role** on many other machines. To do so, follow these instructions, and only select **Functions Worker Role** in the installer.
-
-1. Click **Next** to have the **Azure Functions Runtime Setup Wizard** begin the installation process on your machine.
-1. Once complete, the setup wizard launches the **Azure Functions Runtime** configuration tool.
-
- ![Azure Functions Runtime preview installer complete][6]
-
- > [!NOTE]
- > If you are installing on **Windows 10** and the **Container** feature has not been previously enabled, the **Azure Functions Runtime Setup** prompts you to reboot your machine to complete the install.
-
-## Configure the Azure Functions Runtime
-
-To complete the Azure Functions Runtime installation, you must complete the configuration.
-
-1. The **Azure Functions Runtime** configuration tool shows which roles are installed on your machine.
-
- ![Azure Functions Runtime preview configuration tool][7]
-
-1. Click the **Database** tab, enter the connection details for your SQL Server instance, including specifying a [Database master key](/sql/relational-databases/security/encryption/sql-server-and-database-encryption-keys-database-engine), and click **Apply**. Connectivity to a SQL Server instance is required in order for the Azure Functions Runtime to create a database to support the Runtime.
-
- ![Azure Functions Runtime preview database configuration][8]
-
-1. Click the **Credentials** tab. Here, you must create two new credentials for use with a file share for hosting all your function apps. Specify **User name** and **Password** combinations for the **file share owner** and for the **file share user**, then click **Apply**.
-
- ![Azure Functions Runtime preview credentials][9]
-
-1. Click the **File Share** tab. Here you must specify the details of the file share location. The file share can be created for you or you can use an existing File Share and click **Apply**. If you select a new File Share location, you must specify a directory for use by the Azure Functions Runtime.
-
- ![Azure Functions Runtime preview file share][10]
-
-1. Click the **IIS** tab. This tab shows the details of the websites in IIS that the Azure Functions Runtime configuration tool creates. You may specify a custom DNS name here for the Azure Functions Runtime preview portal. Click **Apply** to complete.
-
- ![Azure Functions Runtime preview IIS][11]
-
-1. Click the **Services** tab. This tab shows the status of the services in your Azure Functions Runtime configuration tool. If the **Azure Functions Host Activation Service** is not running after initial configuration, click **Start Service**.
-
- ![Azure Functions Runtime preview configuration complete][12]
-
-1. Browse to the **Azure Functions Runtime Portal** as `https://<machinename>.<domain>/`.
-
- ![Azure Functions Runtime preview portal][13]
-
-## Create your first function in Azure Functions Runtime preview
-
-To create your first function in Azure Functions Runtime preview
-
-1. Browse to the **Azure Functions Runtime Portal** as `https://<machinename>.<domain>` for example `https://mycomputer.mydomain.com`.
-
-1. You are prompted to **Log in**, if deployed in a domain use your domain account username and password, otherwise use your local account username and password to log in to the portal.
-
- ![Azure Functions Runtime preview portal login][14]
-
-1. To create function apps, you must create a Subscription. In the top left-hand corner of the portal, click the **+** option next to the subscriptions.
-
- ![Azure Functions Runtime preview portal subscriptions][15]
-
-1. Choose **DefaultPlan**, enter a name for your Subscription, and click **Create**.
-
- ![Azure Functions Runtime preview portal subscription plan and name][16]
-
-1. All of your function apps are listed in the left-hand pane of the portal. To create a new Function App, select the heading **Function Apps** and click the **+** option.
-
-1. Enter a name for your function app, select the correct Subscription, choose which version of the Azure Functions runtime you wish to program against and click **Create**
-
- ![Azure Functions Runtime preview portal new function app][17]
-
-1. Your new function app is listed in the left-hand pane of the portal. Select Functions and then click **New Function** at the top of the center pane in the portal.
-
- ![Azure Functions Runtime preview templates][18]
-
-1. Select the Timer Trigger function, in the right-hand flyout name your function and change the Schedule to `*/5 * * * * *` (this cron expression causes your timer function to execute every five seconds), and click **Create**
-
- ![Azure Functions Runtime preview new timer function configuration][19]
-
-1. Your function has now been created. You can view the execution log of your Function app by expanding the **log** pane at the bottom of the portal.
-
- ![Azure Functions Runtime preview function executing][20]
-
-<!--Image references-->
-[1]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer1.png
-[2]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer2-EULA.png
-[3]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer3-ChooseRoles.png
-[4]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer4-Install.png
-[5]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer5-Progress.png
-[6]: ./media/functions-runtime-install/AzureFunctionsRuntime_Installer6-InstallComplete.png
-[7]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration1.png
-[8]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration2_SQL.png
-[9]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration3_Credentials.png
-[10]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration4_Fileshare.png
-[11]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration5_IIS.png
-[12]: ./media/functions-runtime-install/AzureFunctionsRuntime_Configuration6_Services.png
-[13]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal.png
-[14]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_Login.png
-[15]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_Subscriptions.png
-[16]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_Subscriptions1.png
-[17]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_NewFunctionApp.png
-[18]: ./media/functions-runtime-install/AzureFunctionsRuntime_v1FunctionsTemplates.png
-[19]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_NewTimerFunction.png
-[20]: ./media/functions-runtime-install/AzureFunctionsRuntime_Portal_RunningV2Function.png
azure-functions Functions Runtime Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-runtime-overview.md
- Title: Azure Functions Runtime Overview
-description: Overview of the Azure Functions Runtime Preview
--- Previously updated : 11/28/2017--
-# Azure Functions Runtime Overview (preview)
--
-The Azure Functions Runtime (preview) provides a new way for you to take advantage of the simplicity and flexibility of the Azure Functions programming model on-premises. Built on the same open source roots as Azure Functions, Azure Functions Runtime is deployed on-premises to provide a nearly identical development experience as the cloud service.
-
-![Azure Functions Runtime Preview Portal][1]
-
-The Azure Functions Runtime provides a way for you to experience Azure Functions before committing to the cloud. In this way, the code assets you build can then be taken with you to the cloud when you migrate. The runtime also opens up new options for you, such as using the spare compute power of your on-premises computers to run batch processes overnight. You can also use devices within your organization to conditionally send data to other systems, both on-premises and in the cloud.
-
-The Azure Functions Runtime consists of two pieces:
-
-* Azure Functions Runtime Management Role
-* Azure Functions Runtime Worker Role
-
-## Azure Functions Management Role
-
-The Azure Functions Management Role provides a host for the management of your Functions on-premises. This role performs the following tasks:
-
-* Hosting of the Azure Functions Management Portal, which is the same one you see in the [Azure portal](https://portal.azure.com). The portal provides a consistent experience that lets you develop your functions in the same way as you would in the Azure portal.
-* Distributing functions across multiple Functions workers.
-* Providing a publishing endpoint so that you can publish your functions direct from Microsoft Visual Studio by downloading and importing the publishing profile.
-
-## Azure Functions Worker Role
-
-The Azure Functions Worker Roles are deployed in Windows Containers and are where your function code executes. You can deploy multiple Worker Roles throughout your organization and this option is a key way in which customers can make use of spare compute power. One example of where spare compute exists in many organizations is machines powered on constantly but not being used for large periods of time.
-
-## Minimum Requirements
-
-To get started with the Azure Functions Runtime, you must have a machine with Windows Server 2016 or Windows 10 Creators Update with access to a SQL Server instance.
-
-## Next Steps
-
-Install the [Azure Functions Runtime preview](./functions-runtime-install.md)
-
-<!--Image references-->
-[1]: ./media/functions-runtime-overview/AzureFunctionsRuntime_Portal.png
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/recover-python-functions.md
In your function app's requirements.txt, an unpinned package will be upgraded to
If your function app is using the Python pickle library to load Python objects from a .pkl file, it is possible that the .pkl file contains a malformed byte string or an invalid address reference. To recover from this issue, try commenting out the pickle.load() function.
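Before commenting out the load entirely, you can isolate which files are corrupt with a defensive wrapper. This is an illustrative sketch, not part of the Functions runtime; the helper name and the choice of caught exceptions are assumptions.

```python
import pickle
import tempfile

def try_load(path):
    """Attempt to unpickle a file; return None instead of crashing the
    function app when the bytes are malformed (illustrative helper)."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (pickle.UnpicklingError, EOFError, AttributeError) as exc:
        # Malformed byte strings and unresolvable references surface here.
        print(f"Skipping corrupt pickle {path}: {exc}")
        return None

# Quick round-trip check with a known-good payload.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as tmp:
    pickle.dump({"ok": True}, tmp)
    good_path = tmp.name

result = try_load(good_path)
```

Logging the failing path tells you whether one specific .pkl file is damaged or the load pattern itself is the problem.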
+### Pyodbc connection collision
+
+If your function app is using the popular ODBC database driver [pyodbc](https://github.com/mkleehammer/pyodbc), it is possible that multiple connections are opened within a single function app. To avoid this issue, use the singleton pattern and ensure that only one pyodbc connection is used across the function app.
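A minimal sketch of the singleton pattern described above: one shared connection per function app process. The connection factory is passed in as a parameter (an assumption made so the pattern is shown without a live database); in a real function app you would call `get_connection(pyodbc.connect, CONN_STR)` with your own connection string.

```python
_connection = None

def get_connection(factory, conn_str):
    """Return the process-wide connection, creating it only on first use."""
    global _connection
    if _connection is None:
        _connection = factory(conn_str)
    return _connection
```

Because Azure Functions reuses the worker process across invocations, the module-level `_connection` survives between calls and no new connections pile up.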
+ ## Next steps
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
cloud: gov Previously updated : 01/05/2021 Last updated : 03/31/2021 # Azure Government authorized reseller list Since the launch of the [Azure Government in the Cloud Solution Provider Program (CSP)](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), work has been done with the Partner Community to bring them the benefits of this channel, enable them to resell Azure Government, and help them grow their business while providing the cloud services their customers need.
-Below you can find a list of all the authorized Cloud Solution Providers, AOS-G (Agreement for Online Services for Government), and Licensing Solution Providers (LSP) which can transact Azure Government. This list includes all approved Partners as of **January 5, 2021**. Updates to this list will be made as new partners are onboarded.
+Below you can find a list of all the authorized Cloud Solution Providers, AOS-G (Agreement for Online Services for Government), and Licensing Solution Providers (LSP) which can transact Azure Government. This list includes all approved Partners as of **March 31, 2021**. Updates to this list will be made as new partners are onboarded.
## Approved direct CSPs
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Accelera Solutions Inc](http://www.accelerasolutions.com/)| |[Accenture Federal Services LLC](https://www.accenture.com/us-en/afs-industry-index)| |[Access Interactive Inc.](https://www.access-interactive.com/)|
+|[AccountabilIT](https://accountabilit.com)|
|[ACP Technologies](https://acp.us.com)| |[ActioNet](https://www.actionet.com/)| |[ADNET Technologies](https://thinkadnet.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Avtex Solutions](https://www.avtex.com)| |[BAE Systems Inc. and Affiliates](https://www.baesystems.com)| |[BEMO Corp](https://www.bemopro.com/)|
+|[Bitscape](https://www.bitscape.com)|
|[Bio Automation Support](https://www.stacsdna.com/)| |[Blackwood Associates, Inc. (dba BAI Federal)](https://www.blackwoodassociates.com/)| |[Blue Source Group, Inc.](https://www.blackwoodassociates.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com )| |[CNSS - Cherokee Nation System Solutions LLC](http://cherokee-cnt.com/Pages/home.aspx)| |[CodeLynx, LLC](http://www.codelynx.com/)|
+|[Columbus US, Inc.](https://www.columbusglobal.com)|
|[Competitive Innovations, LLC](https://www.cillc.com)| |[Computer Professionals International](http://www.comproinc.com/)| |[Computer Solutions Inc.](http://cs-inc.co/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Dell Federal Services](https://www.dellemc.com/en-us/industry/federal/federal-government-it.htm#)| |[Dell Marketing LP](https://www.dell.com/learn/us/en/rc1009777/fed)| |[Developing Today LLC](https://www.developingtoday.net/)|
+|[DevHawk, LLC](https://www.devhawk.io)|
|[Diffeo, Inc.](https://diffeo.com)| |[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)| |[DominionTech Inc.](https://www.dominiontech.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|eFibernet Inc.| |[eMazzanti Technologies](https://www.emazzanti.net/)| |[Enabling Technologies Corp.](https://www.enablingtechcorp.com/)|
+|[Enlighten IT Consulting](https://www.eitccorp.com)|
|[Ensono](https://www.ensono.com)| |[Enterprise Infrastructure Partners, LLC](http://www.entisp.com/)| |[Enterprise Technology International](https://enterpriseti.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Evertec](http://www.evertecinc.com)| |[eWay Corp](https://www.ewaycorp.com)| |[Exbabylon IT Solutions](https://www.exbabylon.com)|
+|[Executive Information Systems, LLC](https://www.execinfosys.com)|
|[FI Consulting](https://www.ficonsulting.com/)| |[FCN, Inc.](https://fcnit.com)| |[Federal Resources Corporation FRC](https://fedresources.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Gov4Miles](https://www.milestechnologies.com)| |Gravity Pro Consulting| |[Green House Data](https://www.greenhousedata.com/)|
+|[GreenPages Technology Solutions](https://www.greenpages.com)|
+|[GRS Technology Solutions](https://www.grstechnologysolutions.com)|
|[Hanu Software Solutions Inc.](https://www.hanusoftware.com/hanu/#contact)| |[Harmonia Holdings Group LLC](https://www.harmonia.com)|
+|[Harborgrid Inc.](https://www.harborgrid.com)|
|[HCL Technologies](https://www.hcltech.com/aerospace-and-defense)| |[HD Dynamics](https://www.hddynamics.com/)| |[Heartland Business Systems LLC](https://www.hbs.net/home)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Hitachi Vantara](https://www.hitachivantarafederal.com/rean-cloud/)| |[HTS Voice & Data Systems, Inc.](https://www.hts-tx.com/)| |[HumanTouch LLC](https://www.humantouchllc.com/)|
+|[Hyertek Inc.](https://www.hyertek.com)|
|[I10 Inc](http://i10agile.com/)| |I2, Inc| |[i3 Business Solutions, LLC](https://www.i3businesssolutions.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[IBM Corporation](https://www.ibm.com/industries/federal)| |[ImageSource](https://imagesourceinc.com/)| |[iMedia IT Solutions inc.](https://www.imediait.net/)|
+|[Impact Networking](https://www.impactmybiz.com)|
|[Imperitive Solutions LLC](https://www.imperitiv.com/)| |[Indicium Technologies Inc](https://www.istech-corp.com/)| |[Info Gain Consulting LLC](http://infogainconsulting.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Invoke, LLC](https://invokellc.com)| |[It1 Source LLC](https://www.it1.com)| |[ITInfra](https://itinfra.biz/)|
+|[ITsavvy](https://www.itsavvy.com)|
|[IV4, Inc](https://www.iv4.com)| |[Jackpine Technologies](https://www.jackpinetech.com)| |[Jacobs Technology Inc.](https://www.jacobs.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[ManCom Inc](https://www.mancominc.com/)| |[ManTech](https://www.mantech.com/Pages/Home.aspx)| |[Marco Technologies LLC](https://www.marconet.com/)|
+|[Mazteck IT](https://www.mazteck.com)|
+|[Media3 Technologies, LLC](https://www.media3.net)|
+|[Medsphere](https://www.medsphere.com)|
|[Menlo Technologies](https://www.menlo-technologies.com)| |[MetroStar Systems Inc.](https://www.metrostarsystems.com)| |Mibura Inc.|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Navisite LLC](https://www.navisite.com/)| |[NCI](https://www.nciinc.com/)| |[NeoTech Solutions Inc.](https://neotechreps.com)|
+|[Neovera Inc.](https://www.neovera.com)|
|[Netwize](https://www.netwize.com)| |[NewWave Telecom & Technologies, Inc](https://www.newwave.io)| |[NexusTek](https://www.nexustek.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Planet Technologies](https://go-planet.com)| |[Plexhosted LLC](https://plexhosted.com/)| |[Prescriptive Data Solutions LLC.](https://www.prescriptive.solutions)|
+|[PrenticeWorx](https://www.prenticeworx.com/)|
|[Presidio](https://www.presidio.com)| |[Principle Information Technology Company](https://www.principleinfotech.com/)| |[Practical Solutions](https://www.ps4b.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[ProArch IT Solutions](https://www.proarch.com/)| |[Project Hosts Inc.](https://www.projecthosts.com)| |[Protected Trust](https://www.proarch.com/)|
+|[Protera Technologies](https://www.protera.com)|
|[Pueo Business Solutions, LLC](https://www.pueo.com/)| |[Quality Technology Services LLC](https://www.qtsdatacenters.com/)| |[Quisitive](https://quisitive.com)| |[Quiet Professionals](https://www.quietprofessionalsllc.com)|
+|[R3 LLC](https://www.r3.com)|
|[Ravnur Inc.](https://www.ravnur.com)| |[Razor Technology, LLC](https://www.razor-tech.com)| |[Re:discovery Software, Inc.](https://rediscoverysoftware.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Revenue Solutions, Inc](https://www.revenuesolutionsinc.com)| |[RMON Networks Inc.](https://rmonnetworks.com/)| |[rmsource, Inc.](https://www.rmsource.com)|
+|[RoboTech Science, Inc.](https://robotechscience.com)|
+|[Rollout Systems LLC](https://www.rolloutsys.com)|
|[RV Global Solutions](https://rvglobalsolutions.com/)| |[Saiph Technologies Corporation](http://www.saiphtech.com/)| |[SAP NS2](https://sapns2.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Secure-24](https://www.secure-24.com)| |[Selex Galileo Inc](http://www.selexgalileo.com/)| |[Sev1Tech](https://www.sev1tech.com/)|
+|[SEV Technologies](https://sevtechnologies.com/)|
|[Sevatec Inc.](https://www.sevatec.com/)| |[Shadow-Soft, LLC.](https://shadow-soft.com)| |[SHI International Corp](https://www.shi.com)| |[SHR Consulting Group LLC](https://www.shrgroupllc.com)| |[Shoshin Technologies Inc.](https://www.shoshintech.com)| |[Sieena, Inc.](https://siennatech.com/)|
+|[Simeon Networks](https://simeonnetworks.com)|
|[Simons Advisors, LLC](https://simonsadvisors.com/)| |[Sirius Computer Solutions, Inc.](https://www.siriuscom.com/)| |[SKY SOLUTIONS LLC](https://www.skysolutions.com/)| |[SKY Terra Technologies LLC](https://www.skyterratech.com)| |[Smartronix](https://www.smartronix.com)|
+|[Smoothlogics](https://www.smoothlogics.com)|
|[Socius 1 LLC](http://www.socius1.com)| |[Softchoice Corporation](https://www.softchoice.com)| |[Software Services Group (dba Secant Technologies)](https://www.secantcorp.com/)| |[SoftwareONE Inc.](https://www.softwareone.com/en-us)| |[Solution Systems Inc.](https://www.solsyst.com/)|
+|[South River Technologies](https://southrivertech.com)|
|[Stabilify](http://www.stabilify.net/)| |[Stafford Associates](https://www.staffordnet.com/)| |[Static Networks, LLC](https://staticnetworks.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Strategic Communications](https://stratcomminc.com)| |[Stratus Solutions](https://stratussolutions.com)| |[Strongbridge LLC](https://www.sb-llc.com)|
-|[Summit 7 Systems, Inc.](https://summit7systems.com/)|
+|[Summit 7 Systems, Inc.](https://www.summit7.us/)|
|[Sumo Logic](https://www.sumologic.com/)| |[SWC Technology Partners](https://www.swc.com)| |[Sybatech, Inc](https://www.sybatech.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[TechnoMile](https://technomile.com/)| |[TechTrend](https://techtrend.us)| |[TekSynap](https://www.teksynap.com)|
+|[TestPros Inc.](https://www.testpros.com)|
|[The Cram Group LLC](https://aeccloud.com/)| |[The Informatics Application Group Inc.](https://tiag.net)| |[The Porter Group, LLC](https://www.thepottergroupllc.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Tribridge Holdings, LLC](https://www.dxc.technology/public_sector)| |[Trigent Solutions Inc.](http://trigentsolutions.com/)| |[Triple Point Security Incorporated](https://www.triplepointsecurity.com)|
+|[Trusted Tech Team](https://www.trustedtechteam.com)|
|[U2Cloud LLC](https://www.u2cloud.com)| |[UDRI - SSG](https://udayton.edu/udri/_resources/docs/ssg_v8.pdf)| |[Unisys Corp / Blue Bell](https://www.unisys.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|CDW Corp.|cdwgsales@cdwg.com|800-808-4239| |Dell Corp.|Get_Azure@Dell.com|888-375-9857| |Insight Public Sector|federal@insight.com|800-467-4448|
-|PC Connection|govccollections@govconnection.com|800-998-0009|
+|PC Connection|govtssms@connection.com|800-998-0009|
|SHI, Inc.|msftgov@shi.com|888-764-8888| |Minburn Technology Group|microsoft@minburntech.com |571-699-0705 Opt. 1|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Dox Electronics Inc.](https://www.doxnet.com)| |[F1 Solutions Inc](https://www.f1networks.com)| |[Four Points Technology, LLC](https://www.4points.com)|
+|[General Dynamics Information Technology](https://www.gdit.com)|
|[Jackpine Technologies](https://www.jackpinetech.com)| |Jasper Solutions|
+|[Johnson Technology Systems Inc](https://www.jtsusa.com/)|
|[KTL Solutions, Inc.](https://www.ktlsolutions.com)| |[LiftOff LLC](https://www.liftoffllc.com)| |[Northrop Grumman](https://www.northropgrumman.com/)| |[Novetta](https://www.novetta.com)| |[Permuta Technologies, Inc.](http://www.permuta.com/)| |[Planet Technologies, Inc.](https://go-planet.com)|
+|[Perspecta](https://perspecta.com)|
|[Quiet Professionals, LLC](https://quietprofessionalsllc.com)| |[Red River](https://www.redriver.com)| |[SAIC](https://www.saic.com)| |[Smartronix](https://www.smartronix.com)|
-|[Summit 7 Services, Inc.](https://summit7systems.com)|
+|[Summit 7 Systems, Inc.](https://www.summit7.us/)|
|[TechTrend, Inc](https://techtrend.us)| |[VLCM](https://www.vlcmtech.com)| |[VC3](https://www.vc3.com)|
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
For a complete list of Azure Maps REST APIs, see:
To get a list of devices that are Azure certified for IoT, visit:
-* [Azure certified devices](https://catalog.azureiotsolutions.com/)
+* [Azure certified devices](https://devicecatalog.azure.com/)
## Clean up resources
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/gateway.md
Last updated 12/24/2019
# Connect computers without internet access by using the Log Analytics gateway in Azure Monitor
->[!NOTE]
->As Microsoft Operations Management Suite (OMS) transitions to Microsoft Azure Monitor, terminology is changing. This article refers to OMS Gateway as the Azure Log Analytics gateway.
->
- This article describes how to configure communication with Azure Automation and Azure Monitor by using the Log Analytics gateway when computers that are directly connected or that are monitored by Operations Manager have no internet access. The Log Analytics gateway is an HTTP forward proxy that supports HTTP tunneling using the HTTP CONNECT command. This gateway sends data to Azure Automation and a Log Analytics workspace in Azure Monitor on behalf of the computers that cannot directly connect to the internet.
The Log Analytics gateway supports only Transport Layer Security (TLS) 1.0, 1.1,
For additional information, review [Sending data securely using TLS 1.2](../logs/data-security.md#sending-data-securely-using-tls-12).
+>[!NOTE]
+>The gateway is a forwarding proxy that doesn't store any data. Once the agent establishes a connection with Azure Monitor, it follows the same encryption flow with or without the gateway. The data is encrypted between the client and the endpoint. Because the gateway is just a tunnel, it doesn't have the ability to inspect what is being sent.
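The tunneling behavior described above can be illustrated with the shape of the HTTP CONNECT request an agent sends to any forward proxy (the endpoint host name below is a placeholder, not a documented Azure Monitor endpoint): after the proxy answers `200`, it relays raw TLS bytes and never sees the decrypted payload.

```python
def build_connect_request(target_host: str, target_port: int = 443) -> str:
    """Build the HTTP CONNECT request an agent sends to a forward proxy."""
    return (
        f"CONNECT {target_host}:{target_port} HTTP/1.1\r\n"
        f"Host: {target_host}:{target_port}\r\n"
        "\r\n"
    )

request = build_connect_request("example.ods.opinsights.azure.com")
print(request.splitlines()[0])
# CONNECT example.ods.opinsights.azure.com:443 HTTP/1.1
```

Everything after this exchange is the agent's normal TLS handshake with the endpoint, which is why the encryption flow is identical with or without the gateway.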
+ ### Supported number of agent connections The following table shows approximately how many agents can communicate with a gateway server. Support is based on agents that upload about 200 KB of data every 6 seconds. For each agent tested, data volume is about 2.7 GB per day.
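The per-agent volume above can be sanity-checked with quick arithmetic (interpreting KB and GB as KiB and GiB, an assumption):

```python
# One upload of 200 KiB every 6 seconds, sustained for a full day
uploads_per_day = 86_400 // 6
bytes_per_day = 200 * 1024 * uploads_per_day
gib_per_day = bytes_per_day / 2**30
print(round(gib_per_day, 1))  # 2.7
```

which matches the stated ~2.7 GB per agent per day.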
To get help, select the question mark icon in the upper-right corner of the port
## Next steps
-[Add data sources](./../agents/agent-data-sources.md) to collect data from connected sources, and store the data in your Log Analytics workspace.
+[Add data sources](./../agents/agent-data-sources.md) to collect data from connected sources, and store the data in your Log Analytics workspace.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
In [Metrics Explorer](../essentials/metrics-charts.md), you can create a chart t
You can also [Search](./diagnostic-search.md) for client data points with specific user names and accounts.
+> [!NOTE]
+> The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the user name as the Auth Id for each trace sent by the Application Insights JavaScript SDK. When this property is set to true, the user name from ASP.NET Core is sent along with [client-side telemetry](asp-net-core.md#enable-client-side-telemetry-for-web-applications), so manually adding `appInsights.setAuthenticatedUserContext` is no longer needed; the SDK for ASP.NET Core injects it for you. The Auth Id is also sent to the server, where the .NET Core SDK identifies it and uses it for any server-side telemetry, as described in the [JavaScript API reference](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). However, for JavaScript applications that don't work the same way as ASP.NET Core MVC (such as SPA web apps), you still need to add `appInsights.setAuthenticatedUserContext` manually.
+ ## <a name="properties"></a>Filtering, searching, and segmenting your data by using properties You can attach properties and measurements to your events (and also to metrics, page views, exceptions, and other telemetry data).
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
Review the [API reference](status-monitor-v2-api-reference.md) for a detailed de
4. Try to browse to your app. 5. After your app is loaded, return to PerfView and select **Stop Collection**.
-### How to capture full SQL command text
-
-To capture full SQL command text you need to modify the applicationinsights.config file with the following:
-
-```xml
-<Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">,
-<EnableSqlCommandTextInstrumentation>true</EnableSqlCommandTextInstrumentation>
-</Add>
-```
- ## Next steps - Review the [API reference](status-monitor-v2-overview.md#powershell-api-reference) to learn about parameters you might have missed.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 02/06/2021 Last updated : 04/01/2021 # Supported metrics with Azure Monitor
For important additional information, see [Monitoring Agents Overview](../agents
> [!IMPORTANT] > This latest update adds a new column and reordered the metrics to be alphabetic. The addition information means that the tables below may have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you believe you are missing information, use the scroll bar to see the entirety of the table.- ## microsoft.aadiam/azureADMetrics |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|HttpIncomingRequestDuration|Yes|HttpIncomingRequestDuration|Count|Average|Latency on an http request.|StatusCode, Authentication| |ThrottledHttpRequestCount|Yes|ThrottledHttpRequestCount|Count|Count|Throttled http requests.|No Dimensions| - ## Microsoft.AppPlatform/Spring |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod| |working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod| - ## Microsoft.Automation/automationAccounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|RequestLatency|Yes|Request Latency|Milliseconds|Total|Time taken by the server to process the request|Operation, Authentication, Protocol|
-|RequestsTraffic|Yes|Requests Traffic|Percent|Count|Number of Requests Made|Operation, Authentication, Protocol, StatusCode, StatusCodeClass|
+|RequestLatency|Yes|Request Latency|Milliseconds|Total|Time taken by the server to process the request|Operation, Authentication, Protocol, DataCenter|
+|RequestsTraffic|Yes|Requests Traffic|Percent|Count|Number of Requests Made|Operation, Authentication, Protocol, StatusCode, StatusCodeClass, DataCenter|
## Microsoft.Cache/redis
For important additional information, see [Monitoring Agents Overview](../agents
|totalkeys|Yes|Total Keys|Count|Maximum||No Dimensions| |usedmemory|Yes|Used Memory|Bytes|Maximum||No Dimensions| |usedmemorypercentage|Yes|Used Memory Percentage|Percent|Maximum||InstanceId|
-|usedmemoryRss|Yes|Used Memory RSS|Bytes|Maximum||InstanceId|
## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |ByteHitRatio|Yes|Byte Hit Ratio|Percent|Average|This is the ratio of the total bytes served from the cache compared to the total response bytes|Endpoint|
-|OriginHealthPercentage|Yes|Origin Health Percentage|Percent|Average|The percentage of successful health probes from AFDX to backends.|Origin, OriginPool|
+|OriginHealthPercentage|Yes|Origin Health Percentage|Percent|Average|The percentage of successful health probes from AFDX to backends.|Origin, OriginGroup|
|OriginLatency|Yes|Origin Latency|MilliSeconds|Average|The time calculated from when the request was sent by AFDX edge to the backend until AFDX received the last response byte from the backend.|Origin, Endpoint| |OriginRequestCount|Yes|Origin Request Count|Count|Total|The number of requests sent from AFDX to origin.|HttpStatus, HttpStatusGroup, Origin, Endpoint| |Percentage4XX|Yes|Percentage of 4XX|Percent|Average|The percentage of all the client requests for which the response status code is 4XX|Endpoint, ClientRegion, ClientCountry|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine|No Dimensions|
-|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst|No Dimensions|
+|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions|
+|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions|
|Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN| |Data Disk IOPS Consumed Percentage|Yes|Data Disk IOPS Consumed Percentage|Percent|Average|Percentage of data disk I/Os consumed per minute|LUN| |Data Disk Max Burst Bandwidth|Yes|Data Disk Max Burst Bandwidth|Count|Average|Maximum bytes per second throughput Data Disk can achieve with bursting|LUN|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine|No Dimensions|
-|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst|No Dimensions|
+|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions|
+|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions|
|Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN, VMName| |Data Disk IOPS Consumed Percentage|Yes|Data Disk IOPS Consumed Percentage|Percent|Average|Percentage of data disk I/Os consumed per minute|LUN, VMName| |Data Disk Max Burst Bandwidth|Yes|Data Disk Max Burst Bandwidth|Count|Average|Maximum bytes per second throughput Data Disk can achieve with bursting|LUN, VMName|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine|No Dimensions|
-|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst|No Dimensions|
+|CPU Credits Consumed|Yes|CPU Credits Consumed|Count|Average|Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs|No Dimensions|
+|CPU Credits Remaining|Yes|CPU Credits Remaining|Count|Average|Total number of credits available to burst. Only available on B-series burstable VMs|No Dimensions|
|Data Disk Bandwidth Consumed Percentage|Yes|Data Disk Bandwidth Consumed Percentage|Percent|Average|Percentage of data disk bandwidth consumed per minute|LUN| |Data Disk IOPS Consumed Percentage|Yes|Data Disk IOPS Consumed Percentage|Percent|Average|Percentage of data disk I/Os consumed per minute|LUN| |Data Disk Max Burst Bandwidth|Yes|Data Disk Max Burst Bandwidth|Count|Average|Maximum bytes per second throughput Data Disk can achieve with bursting|LUN|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |apiserver_current_inflight_requests|No|Inflight Requests|Count|Average|Maximum number of currently used inflight requests on the apiserver per request kind in the last second|requestKind|
+|cluster_autoscaler_cluster_safe_to_autoscale|No|Cluster Health|Count|Average|Determines whether or not cluster autoscaler will take action on the cluster||
+|cluster_autoscaler_scale_down_in_cooldown|No|Scale Down Cooldown|Count|Average|Determines if the scale down is in cooldown - No nodes will be removed during this timeframe||
+|cluster_autoscaler_unneeded_nodes_count|No|Unneeded Nodes|Count|Average|Cluster autoscaler marks those nodes as candidates for deletion and they are eventually deleted||
+|cluster_autoscaler_unschedulable_pods_count|No|Unschedulable Pods|Count|Average|Number of pods that are currently unschedulable in the cluster||
|kube_node_status_allocatable_cpu_cores|No|Total number of available cpu cores in a managed cluster|Count|Average|Total number of available cpu cores in a managed cluster|| |kube_node_status_allocatable_memory_bytes|No|Total amount of available memory in a managed cluster|Bytes|Average|Total amount of available memory in a managed cluster|| |kube_node_status_condition|No|Statuses for various node conditions|Count|Average|Statuses for various node conditions|condition, status, status2, node| |kube_pod_status_phase|No|Number of pods by phase|Count|Average|Number of pods by phase|phase, namespace, pod| |kube_pod_status_ready|No|Number of pods in Ready state|Count|Average|Number of pods in Ready state|namespace, pod, condition|
+|node_cpu_usage_millicores|Yes|CPU Usage Millicores|MilliCores|Average|Aggregated measurement of CPU utilization in millicores across the cluster|node, nodepool|
+|node_cpu_usage_percentage|Yes|CPU Usage Percentage|Percent|Average|Aggregated average CPU utilization measured in percentage across the cluster|node, nodepool|
+|node_disk_usage_bytes|Yes|Disk Used Bytes|Bytes|Average|Disk space used in bytes by device|node, nodepool, device|
+|node_disk_usage_percentage|Yes|Disk Used Percentage|Percent|Average|Disk space used in percent by device|node, nodepool, device|
+|node_memory_rss_bytes|Yes|Memory RSS Bytes|Bytes|Average|Container RSS memory used in bytes|node, nodepool|
+|node_memory_rss_percentage|Yes|Memory RSS Percentage|Percent|Average|Container RSS memory used in percent|node, nodepool|
+|node_memory_working_set_bytes|Yes|Memory Working Set Bytes|Bytes|Average|Container working set memory used in bytes|node, nodepool|
+|node_memory_working_set_percentage|Yes|Memory Working Set Percentage|Percent|Average|Container working set memory used in percent|node, nodepool|
+|node_network_in_bytes|Yes|Network In Bytes|Bytes|Average|Network received bytes|node, nodepool|
+|node_network_out_bytes|Yes|Network Out Bytes|Bytes|Average|Network transmitted bytes|node, nodepool|
## Microsoft.CustomProviders/resourceproviders
For important additional information, see [Monitoring Agents Overview](../agents
|write_throughput|Yes|Write Throughput Bytes/Sec|Count|Average|Bytes written per second to the data disk during monitoring period|No Dimensions|
+## Microsoft.DBForPostgreSQL/serverGroupsv2
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|active_connections|Yes|Active Connections|Count|Average|Active Connections|ServerName|
+|cpu_percent|Yes|CPU percent|Percent|Average|CPU percent|ServerName|
+|iops|Yes|IOPS|Count|Average|IO operations per second|ServerName|
+|memory_percent|Yes|Memory percent|Percent|Average|Memory percent|ServerName|
+|network_bytes_egress|Yes|Network Out|Bytes|Total|Network Out across active connections|ServerName|
+|network_bytes_ingress|Yes|Network In|Bytes|Total|Network In across active connections|ServerName|
+|storage_percent|Yes|Storage percent|Percent|Average|Storage percent|ServerName|
+|storage_used|Yes|Storage used|Bytes|Average|Storage used|ServerName|
++ ## Microsoft.DBforPostgreSQL/servers |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|CassandraTableUpdate|No|Cassandra Table Updated|Count|Count|Cassandra Table Updated|ResourceName, ChildResourceName, | |CreateAccount|Yes|Account Created|Count|Count|Account Created|No Dimensions| |DataUsage|No|Data Usage|Bytes|Total|Total data usage reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
+|DedicatedGatewayRequests|Yes|DedicatedGatewayRequests|Count|Count|Requests at the dedicated gateway|DatabaseName, CollectionName, CacheExercised, OperationName, Region|
|DeleteAccount|Yes|Account Deleted|Count|Count|Account Deleted|No Dimensions| |DocumentCount|No|Document Count|Count|Total|Total document count reported at 5 minutes granularity|CollectionName, DatabaseName, Region| |DocumentQuota|No|Document Quota|Bytes|Total|Total storage quota reported at 5 minutes granularity|CollectionName, DatabaseName, Region|
For important additional information, see [Monitoring Agents Overview](../agents
|CategorizedGatewayRequests|Yes|Categorized Gateway Requests|Count|Total|Number of gateway requests by categories (1xx/2xx/3xx/4xx/5xx)|HttpStatus|
|GatewayRequests|Yes|Gateway Requests|Count|Total|Number of gateway requests|HttpStatus|
|KafkaRestProxy.ConsumerRequest.m1_delta|Yes|REST proxy Consumer RequestThroughput|CountPerSecond|Total|Number of consumer requests to Kafka REST proxy|Machine, Topic|
-|KafkaRestProxy.ConsumerRequestTime.p95|Yes|REST proxy Consumer RequestLatency|Milliseconds|Average|Message Latency in a consumer request through Kafka REST proxy|Machine, Topic|
+|KafkaRestProxy.ConsumerRequestFail.m1_delta|Yes|REST proxy Consumer Unsuccessful Requests|CountPerSecond|Total|Consumer request exceptions|Machine, Topic|
+|KafkaRestProxy.ConsumerRequestTime.p95|Yes|REST proxy Consumer RequestLatency|Milliseconds|Average|Message latency in a consumer request through Kafka REST proxy|Machine, Topic|
+|KafkaRestProxy.ConsumerRequestWaitingInQueueTime.p95|Yes|REST proxy Consumer Request Backlog|Milliseconds|Average|Consumer REST proxy queue length|Machine, Topic|
|KafkaRestProxy.MessagesIn.m1_delta|Yes|REST proxy Producer MessageThroughput|CountPerSecond|Total|Number of producer messages through Kafka REST proxy|Machine, Topic|
|KafkaRestProxy.MessagesOut.m1_delta|Yes|REST proxy Consumer MessageThroughput|CountPerSecond|Total|Number of consumer messages through Kafka REST proxy|Machine, Topic|
|KafkaRestProxy.OpenConnections|Yes|REST proxy ConcurrentConnections|Count|Total|Number of concurrent connections through Kafka REST proxy|Machine, Topic|
|KafkaRestProxy.ProducerRequest.m1_delta|Yes|REST proxy Producer RequestThroughput|CountPerSecond|Total|Number of producer requests to Kafka REST proxy|Machine, Topic|
-|KafkaRestProxy.ProducerRequestTime.p95|Yes|REST proxy Producer RequestLatency|Milliseconds|Average|Message Latency in a producer request through Kafka REST proxy|Machine, Topic|
+|KafkaRestProxy.ProducerRequestFail.m1_delta|Yes|REST proxy Producer Unsuccessful Requests|CountPerSecond|Total|Producer request exceptions|Machine, Topic|
+|KafkaRestProxy.ProducerRequestTime.p95|Yes|REST proxy Producer RequestLatency|Milliseconds|Average|Message latency in a producer request through Kafka REST proxy|Machine, Topic|
+|KafkaRestProxy.ProducerRequestWaitingInQueueTime.p95|Yes|REST proxy Producer Request Backlog|Milliseconds|Average|Producer REST proxy queue length|Machine, Topic|
|NumActiveWorkers|Yes|Number of Active Workers|Count|Maximum|Number of Active Workers|MetricName|
For important additional information, see [Monitoring Agents Overview](../agents
|dataExport.messages.written|Yes|Data Export Messages Written|Count|Total|Number of messages written to a destination|exportId, exportDisplayName, destinationId, destinationDisplayName|
-## Microsoft.IoTSpaces/Graph
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ApiLatency|No|ApiLatency|6|0|Measures latency of API requests made to Microsoft.IoTSpaces in Milliseconds|No Dimensions|
-|FunctionExecutionLatency|No|FunctionExecutionLatency|6|0|Measures latency of user-defined function execution in Milliseconds for Microsoft.IoTSpaces|No Dimensions|
-|MessageEgressFailure|No|MessageEgressFailure|2|3|Looks up a localized string similar to Measures Failed count event in Count for Microsoft.IoTSpaces|No Dimensions|
-|MessageEgressLatency|No|MessageEgressLatency|6|0|Measures the latency from dispatcher to other endpoints in Milliseconds for Microsoft.IoTSpaces|No Dimensions|
-|MessageEgressSuccess|No|MessageEgressSuccess|2|3|Looks up a localized string similar to Measures completed count event in Count for Microsoft.IoTSpaces|No Dimensions|
-|ProcessingLatency|No|ProcessingLatency|6|0|Measures latency from message ingested to dispatched event in Milliseconds for Microsoft.IoTSpaces|No Dimensions|
-
-
## microsoft.keyvault/managedhsms

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|ContentKeyPolicyCount|Yes|Content Key Policy count|Count|Average|How many content key policies are already created in current media service account|No Dimensions|
|ContentKeyPolicyQuota|Yes|Content Key Policy quota|Count|Average|How many content key policies are allowed for current media service account|No Dimensions|
|ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
+|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Maximum|The maximum number of live events allowed in the current media services account|No Dimensions|
+|MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Maximum|The maximum number of running live events allowed in the current media services account|No Dimensions|
|RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions|
|StreamingPolicyCount|Yes|Streaming Policy count|Count|Average|How many streaming policies are already created in current media service account|No Dimensions|
|StreamingPolicyQuota|Yes|Streaming Policy quota|Count|Average|How many streaming policies are allowed for current media service account|No Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|CPU|Yes|CPU usage|Percent|Average|CPU usage for premium streaming endpoints. This data is not available for standard streaming endpoints.|VmId|
+|CPU|Yes|CPU usage|Percent|Average|CPU usage for premium streaming endpoints. This data is not available for standard streaming endpoints.|No Dimensions|
|Egress|Yes|Egress|Bytes|Total|The amount of Egress data, in bytes.|OutputFormat|
-|EgressBandwidth|No|Egress bandwidth|BitsPerSecond|Average|Egress bandwidth in bits per second.|VmId|
+|EgressBandwidth|No|Egress bandwidth|BitsPerSecond|Average|Egress bandwidth in bits per second.|No Dimensions|
|Requests|Yes|Requests|Count|Total|Requests to a Streaming Endpoint.|OutputFormat, HttpStatusCode, ErrorCode|
|SuccessE2ELatency|Yes|Success end to end Latency|Milliseconds|Average|The average latency for successful requests in milliseconds.|OutputFormat|
For important additional information, see [Monitoring Agents Overview](../agents
||||||||
|AverageReadLatency|Yes|Average read latency|MilliSeconds|Average|Average read latency in milliseconds per operation|No Dimensions|
|AverageWriteLatency|Yes|Average write latency|MilliSeconds|Average|Average write latency in milliseconds per operation|No Dimensions|
-|CbsVolumeBackupActive|Yes|Is Volume Backup suspended|Count|Average|Is the backup policy suspended for the volume? 1 if yes, 0 if no.|No Dimensions|
+|CbsVolumeBackupActive|Yes|Is Volume Backup suspended|Count|Average|Is the backup policy suspended for the volume? 0 if yes, 1 if no.|No Dimensions|
|CbsVolumeLogicalBackupBytes|Yes|Volume Backup Bytes|Bytes|Average|Total bytes backed up for this Volume.|No Dimensions|
|CbsVolumeOperationComplete|Yes|Is Volume Backup Operation Complete|Count|Average|Did the last volume backup or restore operation complete successfully? 1 if yes, 0 if no.|No Dimensions|
|CbsVolumeOperationTransferredBytes|Yes|Volume Backup Last Transferred Bytes|Bytes|Total|Total bytes transferred for last backup or restore operation.|No Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|Throughput|No|Throughput|BitsPerSecond|Average|Throughput processed by this firewall|No Dimensions|
+## microsoft.network/bastionHosts
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|pingmesh|No|Bastion Communication Status|Count|Average|Communication status shows 1 if all communication is good and 0 if it's bad.||
+|sessions|No|Session Count|Count|Total|Sessions Count for the Bastion. View in sum and per instance.|host|
+|total|Yes|Total Memory|Count|Average|Total memory stats.|host|
+|usage_user|No|Used CPU|Count|Average|CPU Usage stats.|cpu, host|
+|used|Yes|Used Memory|Count|Average|Memory Usage stats.|host|
+
+
## Microsoft.Network/connections

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|ErGatewayConnectionBitsOutPerSecond|No|BitsOutPerSecond|BitsPerSecond|Average|Bits egressing Azure per second|ConnectionName|
|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization (Preview)|Count|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Count|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network(Preview)|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
-|ExpressRouteGatewayPacketsPerSecond|No|Packets per second (Preview)|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
## Microsoft.Network/expressRoutePorts
For important additional information, see [Monitoring Agents Overview](../agents
|AverageBandwidth|Yes|Gateway S2S Bandwidth|BytesPerSecond|Average|Average site-to-site bandwidth of a gateway in bytes per second|No Dimensions|
|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization (Preview)|Count|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Count|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network(Preview)|Count|Maximum|Number of VMs in the Virtual Network|No Dimensions|
-|ExpressRouteGatewayPacketsPerSecond|No|Packets per second (Preview)|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Average point-to-site bandwidth of a gateway in bytes per second|No Dimensions|
|P2SConnectionCount|Yes|P2S Connection Count|Count|Maximum|Point-to-site connection count of a gateway|Protocol|
|TunnelAverageBandwidth|Yes|Tunnel Bandwidth|BytesPerSecond|Average|Average bandwidth of a tunnel in bytes per second|ConnectionName, RemoteIP|
For important additional information, see [Monitoring Agents Overview](../agents
|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP|
+|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP|
+|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType|
+|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP|
## Microsoft.Network/virtualNetworks
For important additional information, see [Monitoring Agents Overview](../agents
|TunnelIngressBytes|Yes|Tunnel Ingress Bytes|Bytes|Total|Incoming bytes of a tunnel|ConnectionName, RemoteIP|
|TunnelIngressPacketDropTSMismatch|Yes|Tunnel Ingress TS Mismatch Packet Drop|Count|Total|Incoming packet drop count from traffic selector mismatch of a tunnel|ConnectionName, RemoteIP|
|TunnelIngressPackets|Yes|Tunnel Ingress Packets|Count|Total|Incoming packet count of a tunnel|ConnectionName, RemoteIP|
+|TunnelNatAllocations|No|Tunnel NAT Allocations|Count|Total|Count of allocations for a NAT rule on a tunnel|NatRule, ConnectionName, RemoteIP|
+|TunnelNatedBytes|No|Tunnel NATed Bytes|Bytes|Total|Number of bytes that were NATed on a tunnel by a NAT rule |NatRule, ConnectionName, RemoteIP|
+|TunnelNatedPackets|No|Tunnel NATed Packets|Count|Total|Number of packets that were NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|TunnelNatFlowCount|No|Tunnel NAT Flows|Count|Total|Number of NAT flows on a tunnel by flow type and NAT rule|NatRule, ConnectionName, RemoteIP, FlowType|
+|TunnelNatPacketDrop|No|Tunnel NAT Packet Drops|Count|Total|Number of NATed packets on a tunnel that dropped by drop type and NAT rule|NatRule, ConnectionName, RemoteIP, DropType|
+|TunnelReverseNatedBytes|No|Tunnel Reverse NATed Bytes|Bytes|Total|Number of bytes that were reverse NATed on a tunnel by a NAT rule|NatRule, ConnectionName, RemoteIP|
+|TunnelReverseNatedPackets|No|Tunnel Reverse NATed Packets|Count|Total|Number of packets on a tunnel that were reverse NATed by a NAT rule|NatRule, ConnectionName, RemoteIP|
## Microsoft.NotificationHubs/Namespaces/NotificationHubs
For important additional information, see [Monitoring Agents Overview](../agents
|QueryPoolJobQueueLength|Yes|Query Pool Job Queue Length (Datasets) (Gen1)|Count|Average|Number of jobs in the queue of the query thread pool. Supported only for Power BI Embedded Generation 1 resources.|No Dimensions|
-## Microsoft.ProjectBabylon/accounts
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ScanCancelled|Yes|Scan Cancelled|Count|Total|Indicates the number of scans cancelled.|ResourceId|
-|ScanCompleted|Yes|Scan Completed|Count|Total|Indicates the number of scans completed successfully.|ResourceId|
-|ScanFailed|Yes|Scan Failed|Count|Total|Indicates the number of scans failed.|ResourceId|
-|ScanTimeTaken|Yes|Scan time taken|Seconds|Total|Indicates the total scan time in seconds.|ResourceId|
-
-
## microsoft.purview/accounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
|UserErrors|Yes|User Errors|Percent|Maximum|The percentage of user errors|No Dimensions|
+## Microsoft.SignalRService/WebPubSub
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|ConnectionCount|Yes|Connection Count|Count|Maximum|The number of user connections.|No Dimensions|
+|InboundTraffic|Yes|Inbound Traffic|Bytes|Total|The inbound traffic of the service|No Dimensions|
+|OutboundTraffic|Yes|Outbound Traffic|Bytes|Total|The outbound traffic of the service|No Dimensions|
+
+
## Microsoft.Sql/managedInstances

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
For important additional information, see [Monitoring Agents Overview](../agents
||||||||
|ServerSyncSessionResult|Yes|Sync Session Result|Count|Average|Metric that logs a value of 1 each time the Server Endpoint successfully completes a Sync Session with the Cloud Endpoint|SyncGroupName, ServerEndpointName, SyncDirection|
|StorageSyncBatchTransferredFileBytes|Yes|Bytes synced|Bytes|Total|Total file size transferred for Sync Sessions|SyncGroupName, ServerEndpointName, SyncDirection|
+|StorageSyncRecallComputedSuccessRate|Yes|Cloud tiering recall success rate|Percent|Average|Percentage of all recalls that were successful|SyncGroupName, ServerName|
|StorageSyncRecalledNetworkBytesByApplication|Yes|Cloud tiering recall size by application|Bytes|Total|Size of data recalled by application|SyncGroupName, ServerName, ApplicationName|
|StorageSyncRecalledTotalNetworkBytes|Yes|Cloud tiering recall size|Bytes|Total|Size of data recalled|SyncGroupName, ServerName|
|StorageSyncRecallIOTotalSizeBytes|Yes|Cloud tiering recall|Bytes|Total|Total size of data recalled by the server|ServerName|
For important additional information, see [Monitoring Agents Overview](../agents
|LateInputEvents|Yes|Late Input Events|Count|Total|Late Input Events|LogicalName, PartitionId|
|OutputEvents|Yes|Output Events|Count|Total|Output Events|LogicalName, PartitionId|
|OutputWatermarkDelaySeconds|Yes|Watermark Delay|Seconds|Maximum|Watermark Delay|LogicalName, PartitionId|
+|ProcessCPUUsagePercentage|Yes|CPU % Utilization (Preview)|Percent|Maximum|CPU % Utilization (Preview)|LogicalName, PartitionId|
|ResourceUtilization|Yes|SU % Utilization|Percent|Maximum|SU % Utilization|LogicalName, PartitionId|
For important additional information, see [Monitoring Agents Overview](../agents
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://docs.microsoft.com/azure/app-service/web-sites-monitor#cpu-time-vs-cpu-percentage (CPU time vs CPU percentage). Not applicable to Azure Functions.|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count. Only present for Azure Functions.|Instance|
For important additional information, see [Monitoring Agents Overview](../agents
|AverageResponseTime|Yes|Average Response Time (deprecated)|Seconds|Average|The average time taken for the app to serve requests, in seconds.|Instance|
|BytesReceived|Yes|Data In|Bytes|Total|The amount of incoming bandwidth consumed by the app, in MiB.|Instance|
|BytesSent|Yes|Data Out|Bytes|Total|The amount of outgoing bandwidth consumed by the app, in MiB.|Instance|
-|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. For more information about this metric. Please see https://docs.microsoft.com/azure/app-service/web-sites-monitor#cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
+|CpuTime|Yes|CPU Time|Seconds|Total|The amount of CPU consumed by the app, in seconds. Not applicable to Azure Functions. For more information about this metric, see https://aka.ms/website-monitor-cpu-time-vs-cpu-percentage (CPU time vs CPU percentage).|Instance|
|CurrentAssemblies|Yes|Current Assemblies|Count|Average|The current number of Assemblies loaded across all AppDomains in this application.|Instance|
|FileSystemUsage|Yes|File System Usage|Bytes|Average|Percentage of filesystem quota consumed by the app.|No Dimensions|
|FunctionExecutionCount|Yes|Function Execution Count|Count|Total|Function Execution Count|Instance|
For important additional information, see [Monitoring Agents Overview](../agents
|SiteErrors|Yes|SiteErrors|Count|Total|SiteErrors|Instance|
|SiteHits|Yes|SiteHits|Count|Total|SiteHits|Instance|
-
## Next steps
-
[Read about metrics in Azure Monitor](../data-platform.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Azure Monitor Resource Logs supported services and categories
description: Reference of Azure Monitor. Understand the supported services and event schema for Azure resource logs.
Previously updated : 01/29/2021
Last updated : 03/30/2021

# Supported categories for Azure Resource Logs
Following is a list of the types of logs available for each resource type.
Some categories may only be supported for specific types of resources. See the resource-specific documentation if a resource you expect appears to be missing. For example, Microsoft.Sql/servers/databases categories aren't available for all types of databases. For more information, see [information on SQL Database diagnostic logging](../../azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md). If you think something is missing, you can open a GitHub comment at the bottom of this article.
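The category names listed in the tables below are what you select when you create a diagnostic setting for a resource. A hedged Azure CLI sketch (the resource and workspace IDs are placeholders, and the category shown is illustrative; use one from the relevant table):

```shell
# Route one log category to a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
  --name "send-logs-to-workspace" \
  --resource "<resource-id>" \
  --workspace "<log-analytics-workspace-id>" \
  --logs '[{"category": "<category-from-table>", "enabled": true}]'
```

Categories marked **Costs To Export: Yes** incur export charges when routed this way.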
-## Microsoft.AAD/domainServices
+
+## Microsoft.AAD/DomainServices
|Category|Category Display Name|Costs To Export|
||||
If you think there is something is missing, you can open a GitHub comment at the
|SystemSecurity|SystemSecurity|No|
+## microsoft.aadiam/tenants
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Signin|Signin|Yes|
+
+
## Microsoft.AnalysisServices/servers

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|JobStreams|Job Streams|No|
+## Microsoft.AutonomousDevelopmentPlatform/accounts
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+|Operational|Operational|Yes|
+
+
## Microsoft.Batch/batchAccounts

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|Category|Category Display Name|Costs To Export|
||||
|BotRequest|Requests from the channels to the bot|No|
-|DependencyRequest|Requests to dependencies|No|
## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
If you think there is something is missing, you can open a GitHub comment at the
|Category|Category Display Name|Costs To Export|
||||
+|AuthOperational|Operational Authentication Logs|Yes|
|ChatOperational|Operational Chat Logs|No|
|SMSOperational|Operational SMS Logs|No|
|Usage|Usage Records|No|
If you think there is something is missing, you can open a GitHub comment at the
||||
|ActivityRuns|Pipeline activity runs log|No|
|PipelineRuns|Pipeline runs log|No|
+|SandboxActivityRuns|Sandbox Activity runs log|Yes|
+|SandboxPipelineRuns|Sandbox Pipeline runs log|Yes|
|SSISIntegrationRuntimeLogs|SSIS integration runtime logs|No|
|SSISPackageEventMessageContext|SSIS package event message context|No|
|SSISPackageEventMessages|SSIS package event messages|No|
If you think there is something is missing, you can open a GitHub comment at the
|PostgreSQLLogs|PostgreSQL Server Logs|No|
+## Microsoft.DBForPostgreSQL/serverGroupsv2
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs|PostgreSQL Server Logs|Yes|
+
+
## Microsoft.DBforPostgreSQL/servers

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|AppTraces|Traces|No|
-## Microsoft.IoTSpaces/Graph
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit|Audit|No|
-|Egress|Egress|No|
-|Ingress|Ingress|No|
-|Operational|Operational|No|
-|Trace|Trace|No|
-|UserDefinedFunction|UserDefinedFunction|No|
-
-
## microsoft.keyvault/managedhsms

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|Engine|Engine|No|
-## Microsoft.ProjectBabylon/accounts
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScanStatusLogEvent|ScanStatus|No|
-
-
## microsoft.purview/accounts

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
|AllLogs|Azure SignalR Service Logs.|No|
+## Microsoft.SignalRService/WebPubSub
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AllLogs|Azure Web PubSub Service Logs.|Yes|
+
+
## Microsoft.Sql/managedInstances

|Category|Category Display Name|Costs To Export|
If you think there is something is missing, you can open a GitHub comment at the
||||
|BuiltinSqlReqsEnded|Built-in Sql Pool Requests Ended|No|
|GatewayApiRequests|Synapse Gateway Api Requests|No|
+|IntegrationActivityRuns|Integration Activity Runs|Yes|
+|IntegrationPipelineRuns|Integration Pipeline Runs|Yes|
+|IntegrationTriggerRuns|Integration Trigger Runs|Yes|
|SQLSecurityAuditEvents|SQL Security Audit Event|No|
|SynapseRbacOperations|Synapse RBAC Operations|No|
If you think there is something is missing, you can open a GitHub comment at the
|FunctionAppLogs|Function Application Logs|No|
-
## Next Steps
* [Learn more about resource logs](../essentials/platform-logs-overview.md)
azure-netapp-files Solutions Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/solutions-windows-virtual-desktop.md
Azure NetApp Files is a highly performant file storage service from Azure. It ca
## Sample blueprints
-The following sample blueprints show the integration of Windows Virtual Desktop with Azure NetApp Files. In a pooled desktop scenario, users are directed to the best available session (the [breadth-first mode](../virtual-desktop/host-pool-load-balancing.md#breadth-first-load-balancing-method)) host in the pool, using [multi-session virtual machines](../virtual-desktop/windows-10-multisession-faq.md#what-is-windows-10-enterprise-multi-session). On the other hand, personal desktops are reserved for scenarios in which each user has their own virtual machine.
+The following sample blueprints show the integration of Windows Virtual Desktop with Azure NetApp Files. In a pooled desktop scenario, users are directed to the best available session (the [breadth-first mode](../virtual-desktop/host-pool-load-balancing.md#breadth-first-load-balancing-method)) host in the pool, using [multi-session virtual machines](../virtual-desktop/windows-10-multisession-faq.yml#what-is-windows-10-enterprise-multi-session). On the other hand, personal desktops are reserved for scenarios in which each user has their own virtual machine.
### Pooled desktop scenario
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) |
| Microsoft.HybridNetwork | [Private Edge Zones](../../networking/edge-zones-overview.md) |
| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) |
-| microsoft.insights | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.IoTCentral | [Azure IoT Central](../../iot-central/index.yml) |
| Microsoft.IoTSpaces | [Azure Digital Twins](../../digital-twins/index.yml) |
| Microsoft.Intune | [Azure Monitor](../../azure-monitor/index.yml) |
ResourceType : Microsoft.KeyVault/vaults
## Next steps
-For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
+For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
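Registering a provider can be sketched with the Azure CLI (assumes an authenticated subscription; `Microsoft.Insights` is just an example namespace from the table above):

```shell
# Check the current registration state, then register the provider.
az provider show --namespace Microsoft.Insights --query registrationState --output tsv
az provider register --namespace Microsoft.Insights
```

Registration is asynchronous; re-run the `show` command until the state reads `Registered`.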
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
This article describes how to move virtual networks and other networking resourc
## Dependent resources
-When moving a virtual network, you must also move its dependent resources. For VPN Gateways, you must move IP addresses, virtual network gateways, and all associated connection resources. Local network gateways can be in a different resource group.
+> [!NOTE]
+> VPN gateways associated with public IP addresses can't currently be moved between resource groups or subscriptions.
+
+When moving a resource, you must also move its dependent resources (for example, public IP addresses, virtual network gateways, and all associated connection resources). Local network gateways can be in a different resource group.
To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways.
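The move described above can be sketched with the Azure CLI; pass the virtual network and every dependent resource in a single operation (the target group name and all resource IDs are placeholders):

```shell
# Move a virtual network together with its dependent resources.
az resource move \
  --destination-group "TargetResourceGroup" \
  --ids "<vnet-id>" "<vnet-gateway-id>" "<public-ip-id>" "<connection-id>"
```

Because the operation validates all IDs together, omitting a dependent resource causes the whole move to fail rather than leaving a partial state.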
azure-resource-manager Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-file.md
param <parameter-name> <parameter-data-type> = <default-value>
var <variable-name> = <variable-value>
+resource <resource-symbolic-name> '<resource-type>@<api-version>' = {
+ <resource-properties>
+}
+
+// conditional deployment
+resource <resource-symbolic-name> '<resource-type>@<api-version>' = if (<condition-to-deploy>) {
+ <resource-properties>
+}
+
+// iterative deployment
+@<decorator>(<argument>)
+resource <resource-symbolic-name> '<resource-type>@<api-version>' = [for <item> in <collection>: {
+ <resource-properties>
+}]
+
module <module-symbolic-name> '<path-to-file>' = {
  name: '<linked-deployment-name>'
  params: {
    <parameter-names-and-values>
  }
}
-resource <resource-symbolic-name> '<resource-type>@<api-version>' = {
- <resource-properties>
+// conditional deployment
+module <module-symbolic-name> '<path-to-file>' = if (<condition-to-deploy>) {
+ name: '<linked-deployment-name>'
+ params: {
+ <parameter-names-and-values>
+ }
}
-resource <resource-symbolic-name> '<resource-type>@<api-version>' = if (<condition-to-deploy>) {
- <resource-properties>
-}
+// iterative deployment
+module <module-symbolic-name> '<path-to-file>' = [for <item> in <collection>: {
+ name: '<linked-deployment-name>'
+ params: {
+ <parameter-names-and-values>
+ }
+}]
output <output-name> <output-data-type> = <output-value>
```
You don't specify a [data type](data-types.md) for a variable. Instead, the data
For more information, see [Variables in templates](template-variables.md).
-## Modules
-
-Use modules to link to other Bicep files that contain code you want to reuse. The module contains one or more resources to deploy. Those resources are deployed along with any other resources in your Bicep file.
-
-```bicep
-module webModule './webApp.bicep' = {
- name: 'webDeploy'
- params: {
- skuName: 'S1'
- location: location
- }
-}
-```
-
-The symbolic name enables you to reference the module from somewhere else in the file. For example, you can get an output value from a module by using the symbolic name and the name of the output value.
-
-For more information, see [Use Bicep modules](bicep-modules.md).
-
## Resource

Use the `resource` keyword to define a resource to deploy. Your resource declaration includes a symbolic name for the resource. You'll use this symbolic name in other parts of the Bicep file if you need to get a value from the resource.
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
In your resource declaration, you include properties for the resource type. These properties are unique to each resource type.
-To [conditionally deploy a resource](conditional-resource-deployment.md), add an `if` statement.
+For more information, see [Resource declaration in templates](resource-declaration.md).
+
+To [conditionally deploy a resource](conditional-resource-deployment.md), add an `if` expression.
```bicep
resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') {
-  name: uniqueStorageName
+  name: uniqueStorageName
  location: location
  sku: {
    name: storageSKU
  }
}
```
-For more information, see [Resource declaration in templates](resource-declaration.md).
+To [deploy more than one instance](https://github.com/Azure/bicep/blob/main/docs/spec/loops.md) of a resource type, add a `for` expression. The expression can iterate over members of an array.
+
+```bicep
+resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = [for storageName in storageAccounts: {
+ name: storageName
+ location: location
+ sku: {
+ name: storageSKU
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}]
+```
+
+## Modules
+
+Use modules to link to other Bicep files that contain code you want to reuse. The module contains one or more resources to deploy. Those resources are deployed along with any other resources in your Bicep file.
+
+```bicep
+module webModule './webApp.bicep' = {
+ name: 'webDeploy'
+ params: {
+ skuName: 'S1'
+ location: location
+ }
+}
+```
+
+The symbolic name enables you to reference the module from somewhere else in the file. For example, you can get an output value from a module by using the symbolic name and the name of the output value.
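For instance, a minimal sketch (assuming, hypothetically, that `webApp.bicep` declares an output named `siteUrl`):

```bicep
// hypothetical: assumes webApp.bicep contains `output siteUrl string = ...`
output deployedSiteUrl string = webModule.outputs.siteUrl
```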
+
+Like resources, you can conditionally or iteratively deploy a module. The syntax is the same for modules as for resources.
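As a sketch of an iterative module deployment (the module path, deployment name, and `location` parameter here are hypothetical):

```bicep
// hypothetical module path and parameter; deploys one instance per region
module stgModule './storageAccount.bicep' = [for region in ['westus', 'eastus2']: {
  name: 'storageDeploy-${region}'
  params: {
    location: region
  }
}]
```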
+
+For more information, see [Use Bicep modules](bicep-modules.md).
+
+## Resource and module decorators
+
+You can add a decorator to a resource or module definition. The only supported decorator is `batchSize(int)`. You can only apply it to a resource or module definition that uses a `for` expression.
+
+By default, resources are deployed in parallel. You don't know the order in which they finish. When you add the `batchSize` decorator, you deploy instances serially. Use the integer argument to specify the number of instances to deploy in parallel.
+
+```bicep
+@batchSize(3)
+resource storageAccountResources 'Microsoft.Storage/storageAccounts@2019-06-01' = [for storageName in storageAccounts: {
+ ...
+}]
+```
+
+For more information, see [Serial or Parallel](copy-resources.md#serial-or-parallel).
## Outputs
azure-signalr Signalr Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-resource-faq.md
For new applications, only default and serverless mode should be used. The main
Classic mode is designed for backward compatibility with existing applications, so it should not be used for new applications.
-For more information about service mode in [this doc](concept-service-mode.md).
+For more information about service mode, see [Service mode in Azure SignalR Service](concept-service-mode.md).
## Can I send message from client in serverless mode?

You can send messages from clients if you configure upstream endpoints in your SignalR instance. Upstream is a set of endpoints that can receive messages and connection events from SignalR Service. If no upstream is configured, messages from clients are ignored.
-For more information about upstream see [this doc](concept-upstream.md).
+For more information about upstream, see [Upstream settings](concept-upstream.md).
Upstream is currently in public preview.
You can configure Azure SignalR Service for different service modes: `Classic`,
## Where does my data reside?
-Azure SignalR Service works as a data processor service. It won't store any customer content, and data residency is included by design. If you use Azure SignalR Service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
+Azure SignalR Service works as a data processor service. It won't store any customer content, and data residency is included by design. If you use Azure SignalR Service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions.
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-back-up-vms.md
You can find more information on these backup solutions here:
- [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html)
- [Veritas](https://vrt.as/nb4avs)
- [Veeam](https://www.veeam.com/kb4012)
-- [Cohesity](https://www.cohesity.com/resource-assets/solution-brief/Cohesity-Azure-Solution-Brief.pdf)
+- [Cohesity](https://www.cohesity.com/blogs/expanding-cohesitys-support-for-microsofts-ecosystem-azure-stack-and-azure-vmware-solution/)
- [Dell Technologies](https://www.delltechnologies.com/resources/en-us/asset/briefs-handouts/solutions/dell-emc-data-protection-for-avs.pdf)
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 07/08/2020 Last updated : 04/01/2021 # Encryption of backup data using customer-managed keys
This article discusses the following:
>[!NOTE]
>Use Az module 5.3.0 or greater to use customer managed keys for backups in the Recovery Services vault.
+
+ >[!Warning]
+ >If you use PowerShell to manage encryption keys for Backup, we don't recommend updating the keys from the portal.<br></br>If you update the key from the portal, you can't use PowerShell to update the encryption key until a PowerShell update that supports the new model is available. However, you can continue updating the key from the Azure portal.
If you haven't created and configured your Recovery Services vault, you can [read how to do so here](backup-create-rs-vault.md).
Using the **Select from Key Vault** option helps to enable auto-rotation for the
- Key version update may take up to an hour to take effect.
- When a new version of the key takes effect, the old version should also be available (in enabled state) for at least one subsequent backup job after the key update has taken effect.
-### Using Azure Policies for auditing and enforcing encryption utilizing customer-managed keys (in preview)
-
-Azure Backup allows you to use Azure Polices to audit and enforce encryption, using customer-managed keys, of data in the Recovery Services vault. Using the Azure Policies:
-- The audit policy can be used for auditing vaults with encryption using customer-managed keys, enabled after 3/31/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as non-compliant, despite having the CMK encryption enabled).
-- To use the audit policy for auditing vaults with the CMK encryption enabled before 3/31/2021, use the Azure portal to update an encryption key. This helps to upgrade to the new model. If you do not want to change the encryption key, provide the same key again through the key URI or the key selection option.
-
- >[!Warning]
 - >For users using PowerShell for managing encryption keys for Backup, it is not recommended to upgrade to the new model.<br></br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
- ## Frequently asked questions ### Can I encrypt an existing Backup vault with customer-managed keys?
batch Quick Run Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/quick-run-dotnet.md
See the file `Program.cs` and the following sections for details.
### Preliminaries
-To interact with a storage account, the app uses the Azure Storage Client Library for .NET. It creates a reference to the account with [CloudStorageAccount](/dotnet/api/microsoft.azure.cosmos.table.cloudstorageaccount), and from that creates a [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient).
+To interact with a storage account, the app uses the Azure Storage Client Library for .NET. It creates a reference to the account with [CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount), and from that creates a [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient).
```csharp
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
|[D](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#d-series) | 160 |
|[Dv2](../virtual-machines/dv2-dsv2-series.md) | 160 - 190* |
|[Dv3](../virtual-machines/dv3-dsv3-series.md) | 160 - 190* |
-|[Ev3](../virtual-machines/ev3-esv3-series.md) | 160 - 190*
+|[Dav4](../virtual-machines/dav4-dasv4-series.md) | 230 - 260 |
+|[Eav4](../virtual-machines/eav4-easv4-series.md) | 230 - 260 |
+|[Ev3](../virtual-machines/ev3-esv3-series.md) | 160 - 190* |
|[G](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#g-series) | 180-240* |
|[H](../virtual-machines/h-series.md) | 290 - 300* |
To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compu
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-portal.md
# Deploy Azure Cloud Services (extended support) using the Azure portal

This article explains how to use the Azure portal to create a Cloud Service (extended support) deployment.
-> [!IMPORTANT]
-> Cloud Services (extended support) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Before you begin Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
## Next steps - Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
This article shows how to use the `Az.CloudService` PowerShell module to deploy Cloud Services (extended support) in Azure that has multiple roles (WebRole and WorkerRole) and remote desktop extension.
-> [!IMPORTANT]
-> Cloud Services (extended support) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Before you begin Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
$virtualNetwork = New-AzVirtualNetwork -Name "ContosoVNet" -Location "East US" -ResourceGroupName "ContosOrg" -AddressPrefix "10.0.0.0/24" -Subnet $subnet
```
-7. Create a public IP address and (optionally) set the DNS label property of the public IP address. If you are using a static IP, it needs to referenced as a Reserved IP in Service Configuration file.
+7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU public IP addresses. Standard SKU public IPs don't work with Cloud Services.
+If you are using a static IP, you need to reference it as a Reserved IP in the Service Configuration (.cscfg) file.
```powershell
$publicIp = New-AzPublicIpAddress -Name "ContosIp" -ResourceGroupName "ContosOrg" -Location "East US" -AllocationMethod Dynamic -IpAddressVersion IPv4 -DomainNameLabel "contosoappdns" -Sku Basic
```
-8. Create Network Profile Object and associate public IP address to the frontend of the platform created load balancer.
+8. Create a Network Profile Object and associate the public IP address with the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg and .csdef).
```powershell $publicIP = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-sdk.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
m_NrpClient.VirtualNetworks.CreateOrUpdate(resourceGroupName, "ContosoVNet", vnet);
```
-7. Create a public IP address and (optionally) set the DNS label property of the public IP address. If you're using a static IP, it needs to be referenced as a reserved IP in the service configuration file.
+7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU public IP addresses. Standard SKU public IPs don't work with Cloud Services.
+If you are using a static IP, you need to reference it as a Reserved IP in the Service Configuration (.cscfg) file.
```csharp
PublicIPAddress publicIPAddressParams = new PublicIPAddress(name: "ContosIp")
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
PublicIPAddress publicIpAddress = m_NrpClient.PublicIPAddresses.CreateOrUpdate(resourceGroupName, publicIPAddressName, publicIPAddressParams); ```
-8. Create a network profile object and associate a public IP address with the front end of the platform-created load balancer.
+8. Create a Network Profile Object and associate the public IP address with the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg and .csdef).
```csharp LoadBalancerFrontendIPConfiguration feipConfiguration = new LoadBalancerFrontendIPConfiguration()
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
## Next steps - Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).-- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployment using [ARM templates](../azure-resource-manager/templates/overview.md).
-> [!IMPORTANT]
-> Cloud Services (extended support) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Before you begin 1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources. 2. Create a new resource group using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you are using an existing resource group.+
+3. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU public IP addresses. Standard SKU public IPs don't work with Cloud Services.
+If you are using a static IP, it needs to be referenced as a Reserved IP in the Service Configuration (.cscfg) file. If using an existing IP address, skip this step and add the IP address information directly into the load balancer configuration settings of your ARM template.
+
+4. Create a Network Profile Object and associate the public IP address with the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg and .csdef).
-3. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
+5. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
-4. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
+6. Upload your Service Definition (.csdef) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), [AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
-5. (Optional) Create a key vault and upload the certificates.
+7. (Optional) Create a key vault and upload the certificates.
 - Certificates can be attached to cloud services to enable secure communication to and from the service. In order to use certificates, their thumbprints must be specified in your Service Configuration (.cscfg) file and uploaded to a key vault. A key vault can be created through the [Azure portal](../key-vault/general/quick-create-portal.md) or [PowerShell](../key-vault/general/quick-create-powershell.md).
 - The associated key vault must be located in the same region and subscription as the cloud service.
This tutorial explains how to create a Cloud Service (extended support) deployme
- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
The major differences between Cloud Services (classic) and Cloud Services (exten
- All resources deployed through the [Azure Resource Manager](../azure-resource-manager/templates/overview.md) must be inside a virtual network. Virtual networks and subnets are created in Azure Resource Manager using existing Azure Resource Manager APIs and will need to be referenced within the NetworkConfiguration section of the .cscfg when deploying Cloud Services (extended support).
- Each cloud service (extended support) is a single independent deployment. Cloud services (extended support) does not support multiple slots within a single cloud service.
- - VIP Swap<sup>*</sup> capability may be used to swap between two cloud services (extended support). To test and stage a new release of a cloud service, deploy a cloud service (extended support) and tag it as VIP swappable with another cloud service (extended support)
+ - VIP Swap capability may be used to swap between two cloud services (extended support). To test and stage a new release of a cloud service, deploy a cloud service (extended support) and tag it as VIP swappable with another cloud service (extended support)
- Domain Name Service (DNS) label is optional for a cloud service (extended support). In Azure Resource Manager, the DNS label is a property of the Public IP resource associated with the cloud service. -
-<sup>*</sup> VIP swap for Cloud Services (extended support) is not available during Public Preview.
- ## Migration to Azure Resource Manager Cloud Services (extended support) provides two paths for you to migrate from [Azure Service Manager](/powershell/azure/servicemanagement/overview) to [Azure Resource Manager](../azure-resource-manager/management/overview.md).
Depending on the application, Cloud Services (extended support) may require subs
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
With pronunciation assessment, language learners can practice, get instant feedb
In this article, you'll learn how to set up `PronunciationAssessmentConfig` and retrieve the `PronunciationAssessmentResult` using the speech SDK.

> [!NOTE]
-> The pronunciation assessment feature only supports language `en-US` currently.
+> The pronunciation assessment feature currently supports the `en-US` language, which is available in all [speech-to-text regions](regions.md#speech-to-text-text-to-speech-and-translation). Support for the `en-GB` and `zh-CN` languages is in preview and is available in the `westus`, `eastasia`, and `centralindia` regions.
## Pronunciation assessment with the Speech SDK
This table lists the result parameters of pronunciation assessment.
| `PronunciationScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from `AccuracyScore`, `FluencyScore` and `CompletenessScore` with weight. |
| `ErrorType` | This value indicates whether a word is omitted, inserted or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion` and `Mispronunciation`. |
+### Sample responses
+
+A typical pronunciation assessment result in JSON:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "Offset": "400000",
+ "Duration": "11000000",
+ "NBest": [
+ {
+ "Confidence" : "0.87",
+ "Lexical" : "good morning",
+ "ITN" : "good morning",
+ "MaskedITN" : "good morning",
+ "Display" : "Good morning.",
+ "PronunciationAssessment":
+ {
+ "PronScore" : 84.4,
+ "AccuracyScore" : 100.0,
+ "FluencyScore" : 74.0,
+ "CompletenessScore" : 100.0
+ },
+ "Words": [
+ {
+ "Word" : "Good",
+ "Offset" : 500000,
+ "Duration" : 2700000,
+ "PronunciationAssessment":
+ {
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None"
+ }
+ },
+ {
+ "Word" : "morning",
+ "Offset" : 5300000,
+ "Duration" : 900000,
+ "PronunciationAssessment":
+ {
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+ ## Next steps <!-- TODO: update JavaScript sample links after release -->
+* Watch the [video introduction](https://www.youtube.com/watch?v=cBE8CUHOFHQ) and [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment
+
+* Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS)
::: zone pivot="programming-language-csharp"
* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L949) on GitHub for pronunciation assessment.
::: zone-end
This table lists the result parameters of pronunciation assessment.
::: zone-end

* [Speech SDK reference documentation](speech-sdk.md)
+
+* [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text | Yes |
| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes |
| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes |
-| English (Ghana) | `en-GH` | Text | |
| English (Hong Kong) | `en-HK` | Text | | | English (India) | `en-IN` | Audio (20200923)<br>Text | Yes | | English (Ireland) | `en-IE` | Text | |
-| English (Kenya) | `en-KE` | Text | |
| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | Yes |
| English (Nigeria) | `en-NG` | Text | |
| English (Philippines) | `en-PH` | Text | |
| English (Singapore) | `en-SG` | Text | |
| English (South Africa) | `en-ZA` | Text | |
-| English (Tanzania) | `en-TZ` | Text | |
| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
-| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Text<br>Pronunciation| Yes |
+| English (United States) | `en-US` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
| Estonian(Estonia) | `et-EE` | Text | |
-| Filipino (Philippines) | `fil-PH`| Text | |
| Finnish (Finland) | `fi-FI` | Text | Yes |
| French (Canada) | `fr-CA` | Audio (20201015)<br>Text | Yes |
| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| French (Switzerland) | `fr-CH` | Text | |
-| German (Austria) | `de-AT` | Text | |
| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes |
| Greek (Greece) | `el-GR` | Text | |
| Gujarati (Indian) | `gu-IN` | Text | |
| Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes |
| Hungarian (Hungary) | `hu-HU` | Text | |
-| Indonesian (Indonesia) | `id-ID` | Text | |
| Irish (Ireland) | `ga-IE` | Text | |
| Italian (Italy) | `it-IT` | Audio (20201016)<br>Text<br>Pronunciation| Yes |
| Japanese (Japan) | `ja-JP` | Text | Yes |
| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Text | Yes |
| Latvian (Latvia) | `lv-LV` | Text | |
| Lithuanian (Lithuania) | `lt-LT` | Text | |
-| Malay(Malaysia) | `ms-MY` | Text | |
-| Maltese(Malta) | `mt-MT` | Text | |
+| Maltese (Malta) | `mt-MT` | Text | |
| Marathi (India) | `mr-IN` | Text | |
| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes |
| Polish (Poland) | `pl-PL` | Text | Yes |
| Telugu (India) | `te-IN` | Text | |
| Thai (Thailand) | `th-TH` | Text | Yes |
| Turkish (Turkey) | `tr-TR` | Text | |
-| Vietnamese (Vietnam) | `vi-VN` | Text | |
## Text-to-speech
Neural voices can be used to make interactions with chatbots and voice assistant
| Language | Locale | Gender | Voice name | Style support |
|---|---|---|---|---|
| Arabic (Egypt) | `ar-EG` | Female | `ar-EG-SalmaNeural` | General |
-| Arabic (Egypt) | `ar-EG` | Male | `ar-EG-ShakirNeural` <sup>New</sup> | General |
+| Arabic (Egypt) | `ar-EG` | Male | `ar-EG-ShakirNeural` | General |
| Arabic (Saudi Arabia) | `ar-SA` | Female | `ar-SA-ZariyahNeural` | General |
-| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-HamedNeural` <sup>New</sup> | General |
+| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-HamedNeural` | General |
| Bulgarian (Bulgaria) | `bg-BG` | Female | `bg-BG-KalinaNeural` | General |
-| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-BorislavNeural` <sup>New</sup> | General |
+| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-BorislavNeural` | General |
| Catalan (Spain) | `ca-ES` | Female | `ca-ES-AlbaNeural` | General |
-| Catalan (Spain) | `ca-ES` | Female | `ca-ES-JoanaNeural` <sup>New</sup> | General |
-| Catalan (Spain) | `ca-ES` | Male | `ca-ES-EnricNeural` <sup>New</sup> | General |
+| Catalan (Spain) | `ca-ES` | Female | `ca-ES-JoanaNeural` | General |
+| Catalan (Spain) | `ca-ES` | Male | `ca-ES-EnricNeural` | General |
| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuGaaiNeural` | General |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuMaanNeural` <sup>New</sup> | General |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-WanLungNeural` <sup>New</sup> | General |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxiaoNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Kid voice, optimized for story narrating |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-HiuMaanNeural` | General |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-WanLungNeural` | General |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxiaoNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyouNeural` | Child voice, optimized for story narrating |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyangNeural` | Optimized for news reading,<br /> multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoChenNeural` <sup>New</sup> | General |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunyeNeural` | Optimized for story narrating |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoChenNeural` | General |
| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HsiaoYuNeural` | General |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-YunJheNeural` <sup>New</sup> | General |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-YunJheNeural` | General |
| Croatian (Croatia) | `hr-HR` | Female | `hr-HR-GabrijelaNeural` | General |
-| Croatian (Croatia) | `hr-HR` | Male | `hr-HR-SreckoNeural` <sup>New</sup> | General |
+| Croatian (Croatia) | `hr-HR` | Male | `hr-HR-SreckoNeural` | General |
| Czech (Czech) | `cs-CZ` | Female | `cs-CZ-VlastaNeural` | General |
-| Czech (Czech) | `cs-CZ` | Male | `cs-CZ-AntoninNeural` <sup>New</sup> | General |
+| Czech (Czech) | `cs-CZ` | Male | `cs-CZ-AntoninNeural` | General |
| Danish (Denmark) | `da-DK` | Female | `da-DK-ChristelNeural` | General |
-| Danish (Denmark) | `da-DK` | Male | `da-DK-JeppeNeural` <sup>New</sup> | General |
+| Danish (Denmark) | `da-DK` | Male | `da-DK-JeppeNeural` | General |
+| Dutch (Belgium) | `nl-BE` | Female | `nl-BE-DenaNeural` <sup>New</sup> | General |
+| Dutch (Belgium) | `nl-BE` | Male | `nl-BE-ArnaudNeural` <sup>New</sup> | General |
| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-ColetteNeural` | General |
-| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-FennaNeural` <sup>New</sup> | General |
-| Dutch (Netherlands) | `nl-NL` | Male | `nl-NL-MaartenNeural` <sup>New</sup> | General |
+| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-FennaNeural` | General |
+| Dutch (Netherlands) | `nl-NL` | Male | `nl-NL-MaartenNeural` | General |
| English (Australia) | `en-AU` | Female | `en-AU-NatashaNeural` | General |
| English (Australia) | `en-AU` | Male | `en-AU-WilliamNeural` | General |
| English (Canada) | `en-CA` | Female | `en-CA-ClaraNeural` | General |
-| English (Canada) | `en-CA` | Male | `en-CA-LiamNeural` <sup>New</sup> | General |
+| English (Canada) | `en-CA` | Male | `en-CA-LiamNeural` | General |
| English (India) | `en-IN` | Female | `en-IN-NeerjaNeural` | General |
-| English (India) | `en-IN` | Male | `en-IN-PrabhatNeural` <sup>New</sup> | General |
+| English (India) | `en-IN` | Male | `en-IN-PrabhatNeural` | General |
| English (Ireland) | `en-IE` | Female | `en-IE-EmilyNeural` | General |
-| English (Ireland) | `en-IE` | Male | `en-IE-ConnorNeural` <sup>New</sup> | General |
+| English (Ireland) | `en-IE` | Male | `en-IE-ConnorNeural` | General |
+| English (Philippines) | `en-PH` | Female | `en-PH-RosaNeural` <sup>New</sup> | General |
+| English (Philippines) | `en-PH` | Male | `en-PH-JamesNeural` <sup>New</sup> | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-LibbyNeural` | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
-| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General |
+| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General |
+| Estonian (Estonia) | `et-EE` | Female | `et-EE-AnuNeural` | General |
+| Estonian (Estonia) | `et-EE` | Male | `et-EE-KertNeural` | General |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-NooraNeural` | General |
-| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` <sup>New</sup> | General |
-| Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` <sup>New</sup> | General |
+| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` | General |
+| Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` | General |
+| French (Belgium) | `fr-BE` | Female | `fr-BE-CharlineNeural` <sup>New</sup> | General |
+| French (Belgium) | `fr-BE` | Male | `fr-BE-GerardNeural` <sup>New</sup> | General |
| French (Canada) | `fr-CA` | Female | `fr-CA-SylvieNeural` | General |
-| French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` <sup>New</sup> | General |
+| French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` | General |
| French (Canada) | `fr-CA` | Male | `fr-CA-JeanNeural` | General |
| French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General |
| French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General |
| French (Switzerland) | `fr-CH` | Female | `fr-CH-ArianeNeural` | General |
-| French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` <sup>New</sup> | General |
+| French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` | General |
| German (Austria) | `de-AT` | Female | `de-AT-IngridNeural` | General |
-| German (Austria) | `de-AT` | Male | `de-AT-JonasNeural` <sup>New</sup> | General |
+| German (Austria) | `de-AT` | Male | `de-AT-JonasNeural` | General |
| German (Germany) | `de-DE` | Female | `de-DE-KatjaNeural` | General |
| German (Germany) | `de-DE` | Male | `de-DE-ConradNeural` | General |
| German (Switzerland) | `de-CH` | Female | `de-CH-LeniNeural` | General |
-| German (Switzerland) | `de-CH` | Male | `de-CH-JanNeural` <sup>New</sup> | General |
+| German (Switzerland) | `de-CH` | Male | `de-CH-JanNeural` | General |
| Greek (Greece) | `el-GR` | Female | `el-GR-AthinaNeural` | General |
-| Greek (Greece) | `el-GR` | Male | `el-GR-NestorasNeural` <sup>New</sup> | General |
+| Greek (Greece) | `el-GR` | Male | `el-GR-NestorasNeural` | General |
| Hebrew (Israel) | `he-IL` | Female | `he-IL-HilaNeural` | General |
-| Hebrew (Israel) | `he-IL` | Male | `he-IL-AvriNeural` <sup>New</sup> | General |
+| Hebrew (Israel) | `he-IL` | Male | `he-IL-AvriNeural` | General |
| Hindi (India) | `hi-IN` | Female | `hi-IN-SwaraNeural` | General |
-| Hindi (India) | `hi-IN` | Male | `hi-IN-MadhurNeural` <sup>New</sup> | General |
+| Hindi (India) | `hi-IN` | Male | `hi-IN-MadhurNeural` | General |
| Hungarian (Hungary) | `hu-HU` | Female | `hu-HU-NoemiNeural` | General |
-| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-TamasNeural` <sup>New</sup> | General |
-| Indonesian (Indonesia) | `id-ID` | Female | `id-ID-GadisNeural` <sup>New</sup> | General |
+| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-TamasNeural` | General |
+| Indonesian (Indonesia) | `id-ID` | Female | `id-ID-GadisNeural` | General |
| Indonesian (Indonesia) | `id-ID` | Male | `id-ID-ArdiNeural` | General |
+| Irish (Ireland) | `ga-IE` | Female | `ga-IE-OrlaNeural` | General |
+| Irish (Ireland) | `ga-IE` | Male | `ga-IE-ColmNeural` | General |
| Italian (Italy) | `it-IT` | Female | `it-IT-ElsaNeural` | General |
| Italian (Italy) | `it-IT` | Female | `it-IT-IsabellaNeural` | General |
| Italian (Italy) | `it-IT` | Male | `it-IT-DiegoNeural` | General |
| Japanese (Japan) | `ja-JP` | Male | `ja-JP-KeitaNeural` | General |
| Korean (Korea) | `ko-KR` | Female | `ko-KR-SunHiNeural` | General |
| Korean (Korea) | `ko-KR` | Male | `ko-KR-InJoonNeural` | General |
+| Latvian (Latvia) | `lv-LV` | Female | `lv-LV-EveritaNeural` | General |
+| Latvian (Latvia) | `lv-LV` | Male | `lv-LV-NilsNeural` | General |
+| Lithuanian (Lithuania) | `lt-LT` | Female | `lt-LT-OnaNeural` | General |
+| Lithuanian (Lithuania) | `lt-LT` | Male | `lt-LT-LeonasNeural` | General |
| Malay (Malaysia) | `ms-MY` | Female | `ms-MY-YasminNeural` | General |
-| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-OsmanNeural` <sup>New</sup> | General |
+| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-OsmanNeural` | General |
+| Maltese (Malta) | `mt-MT` | Female | `mt-MT-GraceNeural` | General |
+| Maltese (Malta) | `mt-MT` | Male | `mt-MT-JosephNeural` | General |
| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-IselinNeural` | General |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-PernilleNeural` <sup>New</sup> | General |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Male | `nb-NO-FinnNeural` <sup>New</sup> | General |
-| Polish (Poland) | `pl-PL` | Female | `pl-PL-AgnieszkaNeural` <sup>New</sup> | General |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-PernilleNeural` | General |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Male | `nb-NO-FinnNeural` | General |
+| Polish (Poland) | `pl-PL` | Female | `pl-PL-AgnieszkaNeural` | General |
| Polish (Poland) | `pl-PL` | Female | `pl-PL-ZofiaNeural` | General |
-| Polish (Poland) | `pl-PL` | Male | `pl-PL-MarekNeural` <sup>New</sup> | General |
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-FranciscaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Polish (Poland) | `pl-PL` | Male | `pl-PL-MarekNeural` | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-FranciscaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-AntonioNeural` | General |
| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-FernandaNeural` | General |
-| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-RaquelNeural` <sup>New</sup> | General |
-| Portuguese (Portugal) | `pt-PT` | Male | `pt-PT-DuarteNeural` <sup>New</sup> | General |
+| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-RaquelNeural` | General |
+| Portuguese (Portugal) | `pt-PT` | Male | `pt-PT-DuarteNeural` | General |
| Romanian (Romania) | `ro-RO` | Female | `ro-RO-AlinaNeural` | General |
-| Romanian (Romania) | `ro-RO` | Male | `ro-RO-EmilNeural` <sup>New</sup> | General |
+| Romanian (Romania) | `ro-RO` | Male | `ro-RO-EmilNeural` | General |
| Russian (Russia) | `ru-RU` | Female | `ru-RU-DariyaNeural` | General |
-| Russian (Russia) | `ru-RU` | Female | `ru-RU-SvetlanaNeural` <sup>New</sup> | General |
-| Russian (Russia) | `ru-RU` | Male | `ru-RU-DmitryNeural` <sup>New</sup> | General |
+| Russian (Russia) | `ru-RU` | Female | `ru-RU-SvetlanaNeural` | General |
+| Russian (Russia) | `ru-RU` | Male | `ru-RU-DmitryNeural` | General |
| Slovak (Slovakia) | `sk-SK` | Female | `sk-SK-ViktoriaNeural` | General |
-| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-LukasNeural` <sup>New</sup> | General |
+| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-LukasNeural` | General |
| Slovenian (Slovenia) | `sl-SI` | Female | `sl-SI-PetraNeural` | General |
-| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-RokNeural` <sup>New</sup> | General |
+| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-RokNeural` | General |
| Spanish (Mexico) | `es-MX` | Female | `es-MX-DaliaNeural` | General |
| Spanish (Mexico) | `es-MX` | Male | `es-MX-JorgeNeural` | General |
| Spanish (Spain) | `es-ES` | Female | `es-ES-ElviraNeural` | General |
| Spanish (Spain) | `es-ES` | Male | `es-ES-AlvaroNeural` | General |
| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-HilleviNeural` | General |
-| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-SofieNeural` <sup>New</sup> | General |
-| Swedish (Sweden) | `sv-SE` | Male | `sv-SE-MattiasNeural` <sup>New</sup> | General |
+| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-SofieNeural` | General |
+| Swedish (Sweden) | `sv-SE` | Male | `sv-SE-MattiasNeural` | General |
| Tamil (India) | `ta-IN` | Female | `ta-IN-PallaviNeural` | General |
-| Tamil (India) | `ta-IN` | Male | `ta-IN-ValluvarNeural` <sup>New</sup> | General |
+| Tamil (India) | `ta-IN` | Male | `ta-IN-ValluvarNeural` | General |
| Telugu (India) | `te-IN` | Female | `te-IN-ShrutiNeural` | General |
-| Telugu (India) | `te-IN` | Male | `te-IN-MohanNeural` <sup>New</sup> | General |
+| Telugu (India) | `te-IN` | Male | `te-IN-MohanNeural` | General |
| Thai (Thailand) | `th-TH` | Female | `th-TH-AcharaNeural` | General |
| Thai (Thailand) | `th-TH` | Female | `th-TH-PremwadeeNeural` | General |
-| Thai (Thailand) | `th-TH` | Male | `th-TH-NiwatNeural` <sup>New</sup> | General |
+| Thai (Thailand) | `th-TH` | Male | `th-TH-NiwatNeural` | General |
| Turkish (Turkey) | `tr-TR` | Female | `tr-TR-EmelNeural` | General |
-| Turkish (Turkey) | `tr-TR` | Male | `tr-TR-AhmetNeural` <sup>New</sup> | General |
+| Turkish (Turkey) | `tr-TR` | Male | `tr-TR-AhmetNeural` | General |
+| Ukrainian (Ukraine) | `uk-UA` | Female | `uk-UA-PolinaNeural` <sup>New</sup> | General |
+| Ukrainian (Ukraine) | `uk-UA` | Male | `uk-UA-OstapNeural` <sup>New</sup> | General |
+| Urdu (Pakistan) | `ur-PK` | Female | `ur-PK-UzmaNeural` <sup>New</sup> | General |
+| Urdu (Pakistan) | `ur-PK` | Male | `ur-PK-AsadNeural` <sup>New</sup> | General |
| Vietnamese (Vietnam) | `vi-VN` | Female | `vi-VN-HoaiMyNeural` | General |
-| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-NamMinhNeural` <sup>New</sup> | General |
+| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-NamMinhNeural` | General |
+| Welsh (UK) | `cy-GB` | Female | `cy-GB-NiaNeural` <sup>New</sup> | General |
+| Welsh (UK) | `cy-GB` | Male | `cy-GB-AledNeural` <sup>New</sup> | General |
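As the table above suggests, neural voice names follow a `<locale>-<name>Neural` pattern (for example, `cy-GB-NiaNeural` for the `cy-GB` locale). A minimal sketch, in plain Python with no Speech SDK required, of a check that a voice name matches its locale; the helper name is illustrative, not part of any API:

```python
def matches_locale(voice_name: str, locale: str) -> bool:
    """Check that a neural voice name's locale prefix matches the table's locale column."""
    return voice_name.startswith(locale + "-") and voice_name.endswith("Neural")

# A correct pairing from the table above:
print(matches_locale("vi-VN-NamMinhNeural", "vi-VN"))  # True
# A mismatched pairing is easy to catch this way:
print(matches_locale("en-GB-RyanNeural", "en-US"))     # False
```

A check like this is handy when maintaining voice tables by hand, since a copy-paste slip between rows silently produces an invalid name.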
#### Neural voices in preview
The neural voices below are in public preview.
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoruiNeural` | Senior voice, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxuanNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiNeural` | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Estonian (Estonia) | `et-EE` | Female | `et-EE-AnuNeural` | General |
-| Estonian (Estonia) | `et-EE` | Male | `et-EE-KertNeural` <sup>New</sup> | General |
-| Irish (Ireland) | `ga-IE` | Female | `ga-IE-OrlaNeural` | General |
-| Irish (Ireland) | `ga-IE` | Male | `ga-IE-ColmNeural` <sup>New</sup> | General |
-| Latvian (Latvia) | `lv-LV` | Female | `lv-LV-EveritaNeural` | General |
-| Latvian (Latvia) | `lv-LV` | Male | `lv-LV-NilsNeural` <sup>New</sup> | General |
-| Lithuanian (Lithuania) | `lt-LT` | Female | `lt-LT-OnaNeural` | General |
-| Lithuanian (Lithuania) | `lt-LT` | Male | `lt-LT-LeonasNeural` <sup>New</sup> | General |
-| Maltese (Malta) | `mt-MT` | Female | `mt-MT-GraceNeural` | General |
-| Maltese (Malta) | `mt-MT` | Male | `mt-MT-JosephNeural` <sup>New</sup> | General |
> [!IMPORTANT]
> Voices in public preview are only available in 3 service regions: East US, West Europe and Southeast Asia.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Title: Release Notes - Speech Service
+ Title: Release notes - Speech Service
description: A running log of Speech Service feature releases, improvements, bug fixes, and known issues.
Previously updated : 03/18/2021
Last updated : 01/27/2021

# Speech Service release notes
-## Speech SDK 1.16.0: 2021-March release
+## Text-to-speech 2021-March release
-**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+**New languages and voices added for neural TTS**
-**Known issues**
+- **Six new languages introduced** - 12 new voices in 6 new locales are added to the neural TTS language list: Nia and Aled in `cy-GB` Welsh (United Kingdom), Rosa and James in `en-PH` English (Philippines), Charline and Gerard in `fr-BE` French (Belgium), Dena and Arnaud in `nl-BE` Dutch (Belgium), Polina and Ostap in `uk-UA` Ukrainian (Ukraine), and Uzma and Asad in `ur-PK` Urdu (Pakistan).
-**C++/C#/Java**: `DialogServiceConnector` cannot use a `CustomCommandsConfig` to access a Custom Commands application and will instead encounter a connection error. This can be worked around by manually adding your application ID to the request with `config.SetServiceProperty("X-CommandsAppId", "your-application-id", ServicePropertyChannel.UriQueryParameter)`. The expected behavior of `CustomCommandsConfig` will be restored in the next release.
+- **Five languages moved from preview to GA** - 10 voices in 5 locales introduced in 2020-November are now GA: Anu and Kert in `et-EE` Estonian (Estonia), Orla and Colm in `ga-IE` Irish (Ireland), Everita and Nils in `lv-LV` Latvian (Latvia), Ona and Leonas in `lt-LT` Lithuanian (Lithuania), and Grace and Joseph in `mt-MT` Maltese (Malta).
-**Highlights summary**
-- Smaller memory and disk footprint making the SDK more efficient - this time the focus was on Android.
-- Improved support for compressed audio for both speech-to-text and text-to-speech, creating more efficient client/server communication.
-- Animated characters that speak with text-to-speech voices can now move their lips and faces naturally, following what they are saying.
-- New features and improvements to make the Speech SDK useful for more use cases and in more configurations.
-- Several bug fixes to address issues YOU, our valued customers, have flagged on GitHub! THANK YOU! Keep the feedback coming!
+- **New male voice added for French (Canada)** - A new voice, Antoine, is available for `fr-CA` French (Canada).
+
+- **Quality improvement** - Pronunciation error rate reduction on `hu-HU` Hungarian - 48.17%, `nb-NO` Norwegian - 52.76%, `nl-NL` Dutch (Netherlands) - 22.11%.
+
+With this release, we now support a total of 142 neural voices across 60 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit [Language support](language-support.md#text-to-speech) for the full list.
+
+**Get facial pose events to animate characters**
+
+The [Viseme event](how-to-speech-synthesis-viseme.md) has been added to Neural TTS, allowing users to get the facial pose sequence and its duration from synthesized speech. Visemes can be used to control the movement of 2D and 3D avatar models, matching mouth movements to synthesized speech. Currently, viseme events are available only for the `en-US-AriaNeural` voice.
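To illustrate how a viseme sequence might drive an avatar, here is a minimal sketch in plain Python. It does not call the Speech SDK; the event list and field names below are made up for illustration, loosely mirroring the viseme ID and audio offset that the real event carries:

```python
from dataclasses import dataclass

@dataclass
class VisemeEvent:
    audio_offset_ms: float  # when this mouth shape starts in the audio
    viseme_id: int          # index into the avatar's set of mouth shapes

# Hypothetical events, as they might arrive from a viseme handler.
events = [VisemeEvent(0.0, 0), VisemeEvent(50.0, 19), VisemeEvent(210.0, 6)]

def to_keyframes(evts, tail_ms=100.0):
    """Turn (offset, viseme) events into (start, duration, shape) keyframes.

    Each frame lasts until the next event; the final frame gets a default hold.
    """
    frames = []
    for cur, nxt in zip(evts, evts[1:] + [None]):
        end = nxt.audio_offset_ms if nxt else cur.audio_offset_ms + tail_ms
        frames.append((cur.audio_offset_ms, end - cur.audio_offset_ms, cur.viseme_id))
    return frames

print(to_keyframes(events))  # [(0.0, 50.0, 0), (50.0, 160.0, 19), (210.0, 100.0, 6)]
```

The same start/duration computation applies whatever animation system consumes the frames; only the mapping from viseme ID to mouth shape is avatar-specific.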
+
+**Add the bookmark element in Speech Synthesis Markup Language (SSML)**
+
+The [bookmark element](speech-synthesis-markup.md#bookmark-element) allows you to insert custom markers in SSML to get the offset of each marker in the audio stream. It can be used to reference a specific location in the text or tag sequence.
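As a sketch of what such SSML looks like, and of the markers a bookmark handler would later see, here is a small stdlib-only Python example; the voice and marker names are illustrative, not taken from any product sample:

```python
import xml.etree.ElementTree as ET

# Hypothetical SSML using the bookmark element to tag two positions in the text.
ssml = (
    '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">'
    '<voice name="en-US-AriaNeural">'
    'We are selling <bookmark mark="item_1"/>roses and <bookmark mark="item_2"/>daisies.'
    "</voice></speak>"
)

# At synthesis time each marker raises an event carrying its name and audio
# offset; here we just list the marker names in document order.
ns = "{http://www.w3.org/2001/10/synthesis}"
marks = [b.attrib["mark"] for b in ET.fromstring(ssml).iter(ns + "bookmark")]
print(marks)  # ['item_1', 'item_2']
```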
+
+## Speech SDK 1.16.0: 2021-March release
+
+> [!NOTE]
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
#### New features

-- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing _any_ media format on Windows, Linux and Android. See documentation [here](/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams). Previously, the SDK only supported a subset of GStreamer supported formats. This gives you the flexibility to use the audio format that is right for your use case.
-- **C++/C#/Java/Objective-C/Python**: Added support to decode compressed TTS/synthesized audio with the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. This can lower the bandwidth needed for your use case. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.propertyid), [Java](/java/api/com.microsoft.cognitiveservices.speech.propertyid), [Objective-C](/objectivec/cognitive-services/speech/spxpropertyid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid).
-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig#fromWavFileInput_File_), allowing customers to send the path on disk to a wav file to the SDK which the SDK will then recognize. This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
-- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices programmatically. This allows you to list available voices in your application, or programmatically choose from different voices. Details for [C++](/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoices), and [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#methods).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. People with hearing impairment can also pick up sounds visually and "lip-read" any speech content. See documentation [here](/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. You might use this in your application to take an action when certain words are spoken by text-to-speech. See documentation [here](/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
-<!--
-- **Java**: Added support for speaker recognition APIs, allowing you to use speaker recognition from Java. Details [here](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
--->
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat).
-- **C++/C#/Java/Python**: Added support on Linux to allow connections to succeed in environments where network access to Certificate Revocation Lists has been blocked. This enables scenarios where you choose to let the client machine only connect to the Azure Speech service. See documentation [here](/azure/cognitive-services/speech-service/how-to-configure-openssl-linux).
-- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario so that an app can compare speaker data to an existing voice profile. Details for [C++](/cpp/cognitive-services/speech/speakerrecognizer), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer), and Java. This addresses [GitHub issue #808](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/808).
+- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing any media format on Windows, Linux and Android. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams).
+- **C++/C#/Java/Objective-C/Python**: Added support for decoding compressed TTS/synthesized audio to the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).
+- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-JavaScript/issues/252).
+- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoiceasync), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
+- **Java**: Added support for speaker recognition APIs. Details [here](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-java-stable), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat?view=azure-node-latest), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-python).
+- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speakerrecognizer), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-dotnet), and [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
+- **C++/C#/Java/Objective-C/Python**: Added support for a separate shared library for audio microphone and speaker control. This allows you to use the SDK in environments that do not have the required audio library dependencies.
- **Objective-C/Swift**: Added support for a module framework with umbrella header. This allows you to import the Speech SDK as a module in iOS/Mac Objective-C/Swift apps. This addresses [GitHub issue #452](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/452).
-- **Python**: Added support for [Python 3.9](/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
+- **Python**: Added support for [Python 3.9](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
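The `BookmarkReached` item above works off bookmarks placed directly in the SSML input; a minimal sketch (the voice name and mark values are illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    We are selling <bookmark mark='flower_1'/>roses and <bookmark mark='flower_2'/>daisies.
  </voice>
</speak>
```

Each `BookmarkReached` event then reports the mark text together with the audio offset at which it occurs.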
#### Improvements
-- **Java**: As part of our multi release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
-- **C#**: Improved accuracy, readability and see-also sections of our C# reference documentation [here](/dotnet/api/microsoft.cognitiveservices.speech) to improve usability of the SDK in C#.
-- **C++/C#/Java/Objective-C/Python**: Moved microphone and speaker control into separate shared library. This allows use of the SDK in use cases that do not require audio hardware, for example if you don't need a microphone or speaker for your use case on Linux, you don't need to install libasound.
+- As part of our multi release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
+- Improved accuracy, readability and see-also sections of our C# reference documentation [here](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech?view=azure-dotnet).
#### Bug fixes
## Speech CLI (also known as SPX): 2021-March release
-**Note**: Get started with the Azure Speech service command line interface (CLI) [here](/azure/cognitive-services/speech-service/spx-basics). The CLI enables you to use the Azure Speech service without writing any code.
+> [!NOTE]
+> Get started with the Azure Speech service command line interface (CLI) [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/spx-basics). The CLI enables you to use the Azure Speech service without writing any code.
#### New features
As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br> Stay healthy!
+## Text-to-speech 2021-February release
+
+**Custom Neural Voice GA**
+Custom Neural Voice is GA in February in 13 languages: Chinese (Mandarin, Simplified), English (Australia), English (India), English (United Kingdom), English (United States), French (Canada), French (France), German (Germany), Italian (Italy), Japanese (Japan), Korean (Korea), Portuguese (Brazil), Spanish (Mexico), and Spanish (Spain). Learn more about [what is Custom Neural Voice](custom-neural-voice.md) and [how to use it responsibly](concepts-guidelines-responsible-deployment-synthetic.md).
+The Custom Neural Voice feature requires registration, and Microsoft may limit access based on Microsoft's eligibility criteria. Learn more about the [limited access](https://docs.microsoft.com/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context).
## Speech SDK 1.15.0: 2021-January release
-**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> [!NOTE]
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
**Highlights summary**
- Smaller memory and disk footprint making the SDK more efficient.
Stay healthy!
**New features**
- **All**: New 48KHz output formats available for the private preview of custom neural voice through the TTS speech synthesis API: Audio48Khz192KBitRateMonoMp3, audio-48khz-192kbitrate-mono-mp3, Audio48Khz96KBitRateMonoMp3, audio-48khz-96kbitrate-mono-mp3, Raw48Khz16BitMonoPcm, raw-48khz-16bit-mono-pcm, Riff48Khz16BitMonoPcm, riff-48khz-16bit-mono-pcm.
-- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment id by setting `EndpointId`. This simplifies setting up custom voices.
+- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment ID by setting `EndpointId`. This simplifies setting up custom voices.
- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents).
-- **C++/C#/Java**: Make your voice assistant or bot stop listening immediatedly. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
+- **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
-- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK nuget package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
+- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
- **Java**: [`DialogServiceConnector`](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector) now has a `setSpeechActivityTemplate()` method that was unintentionally excluded from the language previously. This is equivalent to setting the `Conversation_Speech_Activity_Template` property and will request that all future Bot Framework activities originated by the Direct Line Speech service merge the provided content into their JSON payloads.
- **Java**: Improved low level debugging. The [`Connection`](/java/api/com.microsoft.cognitiveservices.speech.connection) class now has a `MessageReceived` event, similar to other programming languages (C++, C#). This event provides low-level access to incoming data from the service and can be useful for diagnostics and debugging.
- **JavaScript**: Easier setup for Voice Assistants and bots through [`BotFrameworkConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/botframeworkconfig), which now has `fromHost()` and `fromEndpoint()` factory methods that simplify the use of custom service locations versus manually setting properties. We also standardized optional specification of `botId` to use a non-default bot across the configuration factories.
Stay healthy!
- **JavaScript**: Simplified error handling on microphone authorization, allowing more descriptive message to bubble up when user has not allowed microphone input on their browser.
- **JavaScript**: Fixed [GitHub issue #249](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/249) where type errors in `ConversationTranslator` and `ConversationTranscriber` caused a compilation error for TypeScript users.
- **Objective-C**: Fixed an issue where GStreamer build failed for iOS on Xcode 11.4, addressing [GitHub issue #911](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/911).
-- **Python**: Fixed [GitHub issue #870](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/870), removing "DeprecationWarning: the imp module is deprecated in favour of importlib".
+- **Python**: Fixed [GitHub issue #870](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/870), removing "DeprecationWarning: the imp module is deprecated in favor of importlib".
**Samples**
- [From-file sample for JavaScript browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/browser/from-file/index.html) now uses files for speech recognition. This addresses [GitHub issue #884](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/884).
Stay healthy!
## Speech CLI (also known as SPX): 2021-January release

**New features**
-- Speech CLI is now available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.CLI/) and can be installed via .Net CLI as a .Net global tool you can call from the shell/command line.
+- Speech CLI is now available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.CLI/) and can be installed via .NET CLI as a .NET global tool you can call from the shell/command line.
- The [Custom Speech DevOps Template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) has been updated to use Speech CLI for its Custom Speech workflows.

**COVID-19 abridged testing**:
With this release, we now support a total of 129 neural voices across 54 languag
**Updates for Audio Content Creation**
- Improved voice selection UI with voice categories and detailed voice descriptions.
- Enabled intonation tuning for all neural voices across different languages.
-- Automated the UI localizaiton based on the language of the browser.
+- Automated the UI localization based on the language of the browser.
- Enabled `StyleDegree` controls for all `zh-CN` Neural voices.

Visit the [Audio Content Creation tool](https://speech.microsoft.com/audiocontentcreation) to check out the new features.
Visit the [Audio Content Creation tool](https://speech.microsoft.com/audioconten
- With Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements. Check [how to install Speech Containers](speech-container-howto.md).

**New features**
-- **Custom Voice**: enabed users to copy a voice model from one region to another; supported endpoint suspension and resuming. Go to the [portal](https://speech.microsoft.com/customvoice) here.
+- **Custom Voice**: enabled users to copy a voice model from one region to another; supported endpoint suspension and resuming. Go to the [portal](https://speech.microsoft.com/customvoice) here.
- [SSML silence tag](speech-synthesis-markup.md#add-silence) support.
- General TTS voice quality improvements: Improved word-level pronunciation accuracy in nb-NO, reducing pronunciation errors by 53%.
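The silence tag noted above is placed inside a `voice` element in the SSML input; a minimal sketch (the voice name and timing value are illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:silence type="Sentenceboundary" value="200ms"/>
    If we're home schooling, the best we can do is roll with what each day brings.
  </voice>
</speak>
```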
Visit the [Audio Content Creation tool](https://speech.microsoft.com/audioconten
## Speech SDK 1.14.0: 2020-October release
-**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> [!NOTE]
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
**New features**
- **Linux**: Added support for Debian 10 and Ubuntu 20.04 LTS.
Download the latest version [here](./spx-basics.md). <br>
### New features * **Neural TTS**
- * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
- * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#neural-voices).
- * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
+ * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
+ * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#neural-voices).
+ * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
* **Containers: Neural TTS Container released in public preview with 16 voices available in 14 languages.** Learn more on [how to deploy Speech Containers for Neural TTS](speech-container-howto.md)
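The `StyleDegree` tuning called out for the XiaoXiao voice above is expressed in SSML through the `express-as` element; a minimal sketch (the style and degree values are illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="zh-CN">
  <voice name="zh-CN-XiaoxiaoNeural">
    <mstts:express-as style="cheerful" styledegree="2">
      那真是太棒了!
    </mstts:express-as>
  </voice>
</speak>
```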
Speech-to-text released 26 new locales in August: 2 European languages `cs-CZ` a
## Speech SDK 1.13.0: 2020-July release
-**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download and install it from [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+> [!NOTE]
+> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download and install it from [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
**New features**
- **C#**: Added support for asynchronous conversation transcription. See documentation [here](./how-to-async-conversation-transcription.md).
Stay healthy!
| `es-MX` | $1.58 | un peso cincuenta y ocho centavos |
| `es-ES` | $1.58 | un dólar cincuenta y ocho centavos |
- * Support for negative currency (like "-325 €" ) in following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
+ * Support for negative currency (like “-325 €” ) in following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
* Improved address reading in `pt-PT`.
* Fixed Natasha (`en-AU`) and Libby (`en-UK`) pronunciation issues on the words "for" and "four".
Stay healthy!
- Windows: Added compressed audio input format support on Windows platform for all the win32 console applications. Details [here](./how-to-use-codec-compressed-audio-input-streams.md).
- JavaScript: Support speech synthesis (text-to-speech) in NodeJS. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/text-to-speech).
- JavaScript: Added new APIs to enable inspection of all sent and received messages. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript).
-
+
**Bug fixes**
- C#, C++: Fixed an issue so `SendMessageAsync` now sends binary message as binary type. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.sendmessageasync#Microsoft_CognitiveServices_Speech_Connection_SendMessageAsync_System_String_System_Byte___System_UInt32_), [C++](/cpp/cognitive-services/speech/connection).
- C#, C++: Fixed an issue where using `Connection MessageReceived` event may cause crash if `Recognizer` is disposed before `Connection` object. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.messagereceived), [C++](/cpp/cognitive-services/speech/connection#messagereceived).
Stay healthy!
- Android: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/563) with x86 Android emulator in Android Studio.
- JavaScript: Added support for Regions in China with the `fromSubscription` API. Details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#fromsubscription-string--string-).
- JavaScript: Add more error information for connection failures from NodeJS.
-
+
**Samples**
- Unity: Intent recognition public sample is fixed, where LUIS json import was failing. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/369).
- Python: Sample added for `Language ID`. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
-
+
**Covid19 abridged testing:** Due to working remotely over the last few weeks, we couldn't do as much manual device verification testing as we normally do. For example, we couldn't test microphone input and speaker output on Linux, iOS, and macOS. We haven't made any changes we think could have broken anything on these platforms, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br> Thank you for your continued support. As always, please post questions or feedback on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen) or [Stack Overflow](https://stackoverflow.microsoft.com/questions/tagged/731).<br>
This is a bug fix release and only affecting the native/managed SDK. It is not a
**Bug fixes**
- Fix FromSubscription when used with Conversation Transcription.
-- Fix bug in keyword recognition for voice assistants.
+- Fix bug in keyword spotting for voice assistants.
## Speech SDK 1.5.0: 2019-May release

**New features**
-- Keyword recognition is now available for Windows and Linux. This functionality might work with any microphone type, but official support is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.
+- Keyword spotting (KWS) is now available for Windows and Linux. KWS functionality might work with any microphone type; official KWS support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.
- Phrase hint functionality is available through the SDK. For more information, see [here](./get-started-speech-to-text.md).
- Conversation transcription functionality is available through the SDK. See [here](./conversation-transcription.md).
- Add support for voice assistants using the Direct Line Speech channel.
This is a bug fix release and only affecting the native/managed SDK. It is not a
**New Features**
- The Speech SDK supports selection of the input microphone through the `AudioConfig` class. This allows you to stream audio data to the Speech service from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript.
-- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).
+- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://aka.ms/csspeech/samples). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).
- The file `Microsoft.CognitiveServices.Speech.csharp.bindings.dll` (shipped in previous releases) isn't needed anymore. The functionality is now integrated into the core SDK.

**Samples**
-The following new content is available in our [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk):
+The following new content is available in our [sample repository](https://aka.ms/csspeech/samples):
- Additional samples for `AudioConfig.FromMicrophoneInput`.
- Additional Python samples for intent recognition and translation.
This is a JavaScript-only release. No features have been added. The following fi
**Samples**
- Updated and fixed several samples (for example output voices for translation, etc.).
-- Added Node.js samples in the [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
+- Added Node.js samples in the [sample repository](https://aka.ms/csspeech/samples).
## Speech SDK 1.1.0
This is a JavaScript-only release. No features have been added. The following fi
**Samples**

-- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
+- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://aka.ms/csspeech/samples).
## Speech SDK 1.0.1
Reliability improvements and bug fixes:
- JavaScript: Fixes regarding events and their payloads.
- Documentation improvements.
-In our [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk), a new sample for JavaScript was added.
+In our [sample repository](https://aka.ms/csspeech/samples), a new sample for JavaScript was added.
## Cognitive Services Speech SDK 1.0.0: 2018-September release
In our [sample repository](https://github.com/Azure-Samples/cognitive-services-s
- Support .NET Standard 2.0 on Windows. Check out the [.NET Core quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore).
- Experimental: Support UWP on Windows (version 1709 or later).
  - Check out the [UWP quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp).
- - Note: UWP apps built with the Speech SDK do not yet pass the Windows App Certification Kit (WACK).
+ - Note that UWP apps built with the Speech SDK do not yet pass the Windows App Certification Kit (WACK).
- Support long-running recognition with automatic reconnection.

**Functional changes**
In our [sample repository](https://github.com/Azure-Samples/cognitive-services-s
- On Windows, C# .NET assemblies are now strong-named.
- Documentation fix: `Region` is required information to create a recognizer.
-More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
+More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://aka.ms/csspeech/samples).
## Cognitive Services Speech SDK 0.2.12733: 2018-May release
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
We strongly recommend streaming (chunked) uploading while posting the audio data, which can significantly reduce the latency. See [sample code in different programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment) for how to enable streaming.

>[!NOTE]
->The pronunciation assessment feature is currently only available on `en-US` language.
+> The pronunciation assessment feature currently supports the `en-US` language, which is available in all [speech-to-text regions](regions.md#speech-to-text). Support for the `en-GB` and `zh-CN` languages is in preview and available in the `westus`, `eastasia`, and `centralindia` regions.
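The `Pronunciation-Assessment` header shown in the C# line earlier in this section is a Base64-encoded JSON object; a minimal stdlib-only Python sketch of that encoding (the parameter values are illustrative, not a complete parameter list):

```python
import base64
import json

# Illustrative assessment parameters; see the service documentation
# for the full parameter set.
pron_assessment_params = {
    "ReferenceText": "Good morning.",
    "GradingSystem": "HundredMark",
    "Granularity": "FullText",
    "Dimension": "Comprehensive",
}

# Serialize to JSON, then Base64-encode the UTF-8 bytes to produce the
# value sent in the Pronunciation-Assessment request header.
params_json = json.dumps(pron_assessment_params)
pron_assessment_header = base64.b64encode(params_json.encode("utf-8")).decode("ascii")
print(pron_assessment_header)
```

Decoding the header on the receiving side simply reverses the two steps.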
### Sample request
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/what-is-personalizer.md
Azure Personalizer is a cloud-based service that helps your applications choose
> [!TIP]
> Content is any unit of information, such as text, images, URL, emails, or anything else that you want to select from and show to your users.
-Before you get started, feel free to try out [Personalizer with this interactive demo](https://personalizationdemo.azurewebsites.net/).
+This documentation contains the following article types:
-<!--
-![What is personalizer animation](./media/what-is-personalizer.gif)
>
+* [**Quickstarts**](quickstart-personalizer-sdk.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to-settings.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](how-personalizer-works.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+Before you get started, try out [Personalizer with this interactive demo](https://personalizationdemo.azurewebsites.net/).
## How does Personalizer select the best content item?
Since Personalizer uses collective information in near real-time to return the s
* Or sometime later in an offline system
1. [Evaluate your loop](concepts-offline-evaluation.md) with an offline evaluation after a period of use. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without changing your code or affecting user experience.
-## Complete a quickstart
-
-We offer quickstarts in C#, JavaScript, and Python. Each quickstart is designed to teach you basic design patterns, and have you running code in less than 10 minutes.
-
-* [Quickstart: How to use the Personalizer client library](./quickstart-personalizer-sdk.md)
-
-After you've had a chance to get started with the Personalizer service, try our tutorials and learn how to use Personalizer in web applications, chat bots, or an Azure Notebook.
-
-* [Tutorial: Use Personalizer in a .NET web app](tutorial-use-personalizer-web-app.md)
-* [Tutorial: Use Personalizer in a .NET chat bot](tutorial-use-personalizer-chat-bot.md)
-* [Tutorial: Use Personalizer in an Azure Notebook](tutorial-use-azure-notebook-generate-loop-data.md)
- ## Reference * [Personalizer C#/.NET SDK](/dotnet/api/overview/azure/cognitiveservices/client/personalizer)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
# Chat concepts
+
Azure Communication Services Chat SDKs can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities. See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn more about specific SDK languages and capabilities.
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
# Chat SDK overview + Azure Communication Services Chat SDKs can be used to add rich, real-time chat to your applications. ## Chat SDK capabilities
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
# Quickstart: Join your chat app to a Teams meeting + > [!IMPORTANT] > To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
# Get started with the group chat hero sample + > [!IMPORTANT]
-> [This sample is available on GitHub.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
+> [This sample is available **on GitHub**.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
The Azure Communication Services **Group Chat Hero Sample** demonstrates how the Communication Services Chat Web SDK can be used to build a group calling experience.
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
This sample was built for developers and makes it very easy for you to get start
## Get started with the web calling sample > [!IMPORTANT]
-> [This sample is available on Github.](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/).
+> [This sample is available **on GitHub**.](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/).
Follow the /Project/readme.md to set up the project and run it locally on your machine. Once the [web calling sample](https://github.com/Azure-Samples/communication-services-web-calling-tutorial) is running on your machine, you'll see the following landing page:
container-registry Container Registry Tasks Multi Step https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tasks-multi-step.md
steps:
build: -t $Registry/hello-world:$ID . when: ["-"] - id: build-tests
- build -t $Registry/hello-world-tests ./funcTests
+ build: -t $Registry/hello-world-tests ./funcTests
when: ["-"] - id: push push: ["$Registry/helloworld:$ID"]
You can find multi-step task reference and examples here:
<!-- LINKS - Internal --> [az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create [az-acr-run]: /cli/azure/acr#az-acr-run
-[az-acr-task]: /cli/azure/acr/task
+[az-acr-task]: /cli/azure/acr/task
cost-management-billing Mosp New Customer Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/mosp-new-customer-experience.md
Previously updated : 01/11/2021 Last updated : 03/31/2021
Managing costs and invoices is one of the key components of your cloud experienc
The following diagram compares your old and the new billing account:
-![Diagram showing the comparison between billing hierarchy in the old and the new account](./media/mosp-new-customer-experience/comparison-old-new-account.png)
Your new billing account contains one or more billing profiles that let you manage your invoices and payment methods. Each billing profile contains one or more invoice sections that let you organize costs on the billing profile's invoice.
-![Diagram showing the new billing hierarchy](./media/mosp-new-customer-experience/new-billing-account-hierarchy.png)
Roles on the billing account have the highest level of permissions. These roles should be assigned to users that need to view invoices, and track costs for your entire account like finance or IT managers in an organization or the individual who signed up for an account. For more information, see [billing account roles and tasks](../manage/understand-mca-roles.md#billing-account-roles-and-tasks). When your account is updated, the user who has an account administrator role in the old billing account is given an owner role on the new account.
Your new experience includes the following cost management and billing capabilit
**More predictable monthly billing period** - In your new account, the billing period begins from the first day of the month and ends at the last day of the month, no matter when you sign up to use Azure. An invoice will be generated at the beginning of each month, and will contain all charges from the previous month.
-**Get a single monthly invoice for multiple subscriptions** - You have the flexibility of either getting one monthly invoice for each of your subscriptions or a single invoice for multiple subscriptions.
+**Get a single monthly invoice for multiple subscriptions** - In your existing account, you get an invoice for each Azure subscription. When your account is updated, the existing behavior is maintained, but you have the flexibility to consolidate the charges of your subscriptions on a single invoice. After your account is updated, follow these steps to consolidate your charges on a single invoice:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing**.
+ ![Screenshot that shows search in the Azure portal for Cost Management + Billing.](./media/mosp-new-customer-experience/billing-search-cost-management-billing.png)
+3. Select **Azure subscriptions** from the left-side of the screen.
4. The table lists the Azure subscriptions that you're paying for. In the billing profile column, you'll find the billing profile that's billed for each subscription. The subscription's charges are displayed on the invoice for that billing profile. To consolidate the charges for all your subscriptions on a single invoice, you need to link all your subscriptions to a single billing profile.
+ :::image type="content" source="./media/mosp-new-customer-experience/list-azure-subscriptions.png" alt-text="Screenshot that shows the list of Azure subscriptions." lightbox="./media/mosp-new-customer-experience/list-azure-subscriptions.png" :::
+5. Pick a billing profile that you want to use.
+6. Select a subscription that isn't linked to the billing profile you chose in step 5. Select the ellipsis (three dots) for the subscription, and then select **Change invoice section**.
+ :::image type="content" source="./media/mosp-new-customer-experience/select-change-invoice-section.png" alt-text="Screenshot that shows where to find the option to change invoice section." lightbox="./media/mosp-new-customer-experience/select-change-invoice-section-zoomed-in.png" :::
+7. Select the billing profile that you chose in step 5.
+ :::image type="content" source="./media/mosp-new-customer-experience/change-invoice-section.png" alt-text="Screenshot that shows how to change invoice section." lightbox="./media/mosp-new-customer-experience/change-invoice-section-zoomed-in.png" :::
+8. Select **Change**.
+9. Repeat steps 6-8 for all other subscriptions.
**Receive a single monthly invoice for Azure subscriptions, support plans, and Azure Marketplace products** - You'll get one monthly invoice for all charges including usage charges for Azure subscriptions, and support plans and Azure Marketplace purchases.
We recommend the following to get prepared for your new experience:
In the new experience, your invoice will be generated around the ninth day of each month, and it contains all charges from the previous month. This date might differ from the date when your invoice is generated in the old account. If you share your invoices with others, notify them of the change in the date. +
+**Invoices in the first month after migration**
+
+The day your account is updated, your existing unbilled charges are finalized, and you'll receive the invoices for these charges on the day when you typically receive your invoices. For example, John has two Azure subscriptions: Azure sub 01 with a billing cycle from the fifth day of the month to the fourth day of the next month, and Azure sub 02 with a billing cycle from the tenth day of the month to the ninth day of the next month. John typically gets invoices for both Azure subscriptions on the fifth of the month. Now, if John's account is updated on April 4th, the charges for Azure sub 01 from March 5th to April 4th and the charges for Azure sub 02 from March 10th to April 4th will be finalized. John will receive two invoices, one for each sub, on April 5th. After the account is updated, John's billing cycle will be based on the calendar month and will cover all charges incurred from the beginning of a calendar month to the end of that calendar month. The invoice for the previous calendar month’s charges is available on the 9th of each month. So in the example above, John will receive another invoice on May 5th for the billing period of April 5th to April 30th.
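The calendar-month billing period described in the example above can be computed with a short Python sketch (the function name is illustrative, not part of any billing API):

```python
import calendar
from datetime import date

def calendar_month_billing_period(any_day: date):
    """Return the (start, end) dates of the calendar-month billing
    period that contains the given day, matching the new account's
    first-of-the-month to last-of-the-month behavior."""
    last_day = calendar.monthrange(any_day.year, any_day.month)[1]
    return (date(any_day.year, any_day.month, 1),
            date(any_day.year, any_day.month, last_day))

# A charge incurred on April 12, 2021 falls in the April 1 - April 30 period.
start, end = calendar_month_billing_period(date(2021, 4, 12))
```

This mirrors how, after the update, every invoice covers exactly one calendar month regardless of the original sign-up date.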
++ **New billing and cost management APIs** If you're using Cost Management or Billing APIs to query and update your billing or cost data, you must use the new APIs. The table below lists the APIs that won't work with the new billing account and the changes that you need to make in your new billing account.
If you're using Cost Management or Billing APIs to query and update your billing
|[Billing Accounts - List](/rest/api/billing/2019-10-01-preview/billingaccounts/list) | In the Billing Accounts - List API, your old billing account has agreementType **MicrosoftOnlineServiceProgram**, your new billing account would have agreementType **MicrosoftCustomerAgreement**. If you take a dependency on agreementType, update it. | |[Invoices - List By Billing Subscription](/rest/api/billing/2019-10-01-preview/invoices/listbybillingsubscription) | This API will only return invoices that were generated before your account was updated. You would have to use [Invoices - List By Billing Account](/rest/api/billing/2019-10-01-preview/invoices/listbybillingaccount) API to get invoices that are generated in your new billing account. | + ## Cost Management updates after account update Your updated Azure billing account for your Microsoft Customer Agreement gives you access to new and expanded Cost Management experiences in the Azure portal that you didn't have with your pay-as-you-go account.
With your updated account, you receive a single invoice for all Azure charges. Y
For example, if your billing period was November 24 to December 23 for your old account, then after the upgrade the period becomes November 1 to November 30, December 1 to December 31 and so on. #### Budgets
Your new billing account provides improved export functionality. For example, yo
For example, for a billing period from December 23 to January 22, the exported CSV file would have cost and usage data for that period. After the update, the export will contain data for the calendar month. For example, January 1 to January 31 and so on. ## Additional information
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/pay-bill.md
To pay invoices in the Azure portal, you must have the correct [MCA permissions]
The invoice status shows *paid* within 24 hours.
-## Pay now for customers in India
-
-The Reserve Bank of India issued [new regulations](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12002&Mode=0) that will take effect on April 1st 2021. After this date, banks in India may start declining automatic recurring payments, and payments will need to be made manually in the Azure portal.
-
-If your bank declines an automatic recurring payment, we'll notify you via email and provide instructions on how to proceed.
-
-Beginning April 1st 2021, you may pay an outstanding balance any time by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the Account Administrator.
-1. Search for **Cost Management + Billing**.
-1. On the Overview page, select the **Pay now** button. (If you don't see the **Pay now** button, you do not have an outstanding balance.)
- ## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Previously updated : 03/11/2021 Last updated : 04/01/2021 # Continuous integration and delivery in Azure Data Factory
When running a post-deployment script, you will need to specify a variation of t
`-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true`
+> [!NOTE]
+> The `-deleteDeployment` flag is used to specify the deletion of the ADF deployment entry from the deployment history in ARM.
+ ![Azure PowerShell task](media/continuous-integration-deployment/continuous-integration-image11.png) Here is the script that can be used for pre- and post-deployment. It accounts for deleted resources and resource references.
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
Activities | The activities to be executed. | List of Activities | Yes
If **isSequential** is set to false, the activity iterates in parallel with a maximum of 20 concurrent iterations. This setting should be used with caution. If the concurrent iterations are writing to the same folder but to different files, this approach is fine. If the concurrent iterations are writing concurrently to the exact same file, this approach most likely causes an error. ## Iteration expression language
-In the ForEach activity, provide an array to be iterated over for the property **items**." Use `@item()` to iterate over a single enumeration in ForEach activity. For example, if **items** is an array: [1, 2, 3], `@item()` returns 1 in the first iteration, 2 in the second iteration, and 3 in the third iteration.
+In the ForEach activity, provide an array to be iterated over for the property **items**. Use `@item()` to iterate over a single enumeration in the ForEach activity. For example, if **items** is an array: [1, 2, 3], `@item()` returns 1 in the first iteration, 2 in the second iteration, and 3 in the third iteration. You can also use an expression like `@range(0,10)` to iterate ten times, starting with 0 and ending with 9.
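The iteration semantics can be illustrated with a Python analog (this is not ADF expression language, just a sketch of what each `@item()` pass sees):

```python
def foreach_items(items):
    """Python analog of the ForEach activity's iteration: each pass of
    the loop corresponds to one @item() value."""
    for item in items:
        yield item

# items [1, 2, 3]: @item() is 1, then 2, then 3
first_pass = list(foreach_items([1, 2, 3]))

# @range(0,10): ten iterations starting with 0 and ending with 9.
# Note ADF's @range takes a start and a count, which for @range(0,10)
# happens to coincide with Python's range(0, 10).
range_pass = list(foreach_items(range(0, 10)))
```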
## Iterating over a single activity **Scenario:** Copy from the same source file in Azure Blob to multiple destination files in Azure Blob.
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 03/13/2021 Last updated : 04/01/2021
You've reached the integration runtime's capacity limit. You might be running a
- Run your pipelines at different trigger times. - Create a new integration runtime, and split your pipelines across multiple integration runtimes.
+### A pipeline run error while invoking REST api in a Web activity
+
+**Issue**
+
+Error message:
+
+`
+Operation on target Cancel failed: {"error":{"code":"AuthorizationFailed","message":"The client '<client>' with object id '<object>' does not have authorization to perform action 'Microsoft.DataFactory/factories/pipelineruns/cancel/action' over scope '/subscriptions/<subscription>/resourceGroups/<resource group>/providers/Microsoft.DataFactory/factories/<data factory name>/pipelineruns/<pipeline run id>' or the scope is invalid. If access was recently granted, please refresh your credentials."}}
+`
+
+**Cause**
+
+Pipelines can use the Web activity to call ADF REST API methods only if the Azure Data Factory managed identity is assigned the *Contributor* role. You must first add the Azure Data Factory managed identity to the *Contributor* security role.
+
+**Resolution**
+
+Before using the Azure Data Factory REST API in a Web activity's Settings tab, security must be configured. Azure Data Factory pipelines can use the Web activity to call ADF REST API methods only if the Azure Data Factory managed identity is assigned the *Contributor* role. Begin by opening the Azure portal and selecting the **All resources** link on the left menu. Select your **Azure Data Factory** resource, and then add the ADF managed identity with the *Contributor* role by selecting the **Add** button in the **Add a role assignment** box.
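A caller can recognize this specific failure before retrying. The sketch below parses an abbreviated form of the error payload shown above to detect the missing-role condition (the variable names are illustrative):

```python
import json

# Abbreviated form of the AuthorizationFailed payload from the message above.
error_text = (
    '{"error": {"code": "AuthorizationFailed", '
    '"message": "The client does not have authorization to perform action '
    'Microsoft.DataFactory/factories/pipelineruns/cancel/action '
    'over scope ... or the scope is invalid."}}'
)

payload = json.loads(error_text)

# The role assignment is missing when the code is AuthorizationFailed and
# the denied action is the pipeline-run cancel action.
needs_role_assignment = (
    payload["error"]["code"] == "AuthorizationFailed"
    and "pipelineruns/cancel/action" in payload["error"]["message"]
)
```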
++ ### How to handle activity-level errors and failures in pipelines **Cause**
The degree of parallelism in *ForEach* is actually max degree of parallelism. We
Known Facts about *ForEach* * ForEach has a property called batch count (n), where the default value is 20 and the max is 50.
- * The batch count, n, is used to construct n queues. Later we will discuss some details on how these queues are constructed.
+ * The batch count, n, is used to construct n queues.
 * Every queue runs sequentially, but you can have several queues running in parallel. * The queues are pre-created. This means there is no rebalancing of the queues during the runtime. * At any time, you have at most one item being processed per queue. This means at most n items being processed at any given time.
Known Facts about *ForEach*
**Resolution** * You should not use *SetVariable* activity inside *For Each* that runs in parallel.
- * Taking in consideration the way the queues are constructed, customer can improve the foreach performance by setting multiple *foreaches* where each foreach will have items with similar processing time. This will ensure that long runs are processed in parallel rather sequentially.
+ * Taking into consideration the way the queues are constructed, you can improve the ForEach performance by setting up multiple *ForEach* activities where each *ForEach* has items with similar processing time.
+ * This ensures that long runs are processed in parallel rather than sequentially.
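The queue facts above can be simulated in Python to show why grouping items with similar durations helps. The round-robin dealing of items into queues is an illustrative assumption, not ADF's documented assignment order:

```python
from itertools import cycle

def build_queues(durations, batch_count=20):
    """Pre-create batch_count queues and deal item durations into them
    round-robin. Queues are fixed up front: there is no rebalancing at
    runtime (the round-robin assignment itself is an assumption)."""
    count = min(batch_count, len(durations)) or 1
    queues = [[] for _ in range(count)]
    for queue, duration in zip(cycle(queues), durations):
        queue.append(duration)
    return queues

def total_runtime(queues):
    # Each queue runs its items sequentially; queues run in parallel,
    # so the loop finishes when the slowest queue finishes.
    return max(sum(q) for q in queues)

# One slow item (60s) mixed with fast ones: the queue holding it dominates.
mixed = total_runtime(build_queues([60, 1, 1, 1], batch_count=2))
# Items with similar durations keep the queues balanced.
balanced = total_runtime(build_queues([10, 10, 10, 10], batch_count=2))
```

With similar-duration items, wall-clock time approaches total work divided by batch count; one outlier item makes its whole queue the bottleneck.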
### Pipeline status is queued or stuck for a long time
For more troubleshooting help, try these resources:
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
event-grid Event Schema Media Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-media-services.md
The data object has the following properties:
| `encoderPort` | string | Port of the encoder from where this stream is coming. | | `resultCode` | string | The reason the connection was rejected. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes.md).
+You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes-reference.md).
### LiveEventEncoderConnected
The data object has the following properties:
| `encoderPort` | string | Port of the encoder from where this stream is coming. | | `resultCode` | string | The reason for the encoder disconnecting. It could be graceful disconnect or from an error. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes.md).
+You can find the error result codes in [live Event error codes](../media-services/latest/live-event-error-codes-reference.md).
The graceful disconnect result codes are:
An event has the following top-level data:
- [EventGrid .NET SDK that includes Media Service events](https://www.nuget.org/packages/Microsoft.Azure.EventGrid/) - [Definitions of Media Services events](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/eventgrid/data-plane/Microsoft.Media/stable/2018-01-01/MediaServices.json)-- [Live Event error codes](../media-services/latest/live-event-error-codes.md)
+- [Live Event error codes](../media-services/latest/live-event-error-codes-reference.md)
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
See [here](./designing-for-high-availability-with-expressroute.md) for designing
### How do I ensure high availability on a virtual network connected to ExpressRoute?
-You can achieve high availability by connecting up to four ExpressRoute circuits in the same peering location to your virtual network, or by connecting ExpressRoute circuits in different peering locations (for example, Singapore, Singapore2) to your virtual network. If one ExpressRoute circuit goes down, connectivity will fail over to another ExpressRoute circuit. By default, traffic leaving your virtual network is routed based on Equal Cost Multi-path Routing (ECMP). You can use Connection Weight to prefer one circuit to another. For more information, see [Optimizing ExpressRoute Routing](expressroute-optimize-routing.md).
+You can achieve high availability by connecting up to 16 ExpressRoute circuits in the same peering location to your virtual network, or by connecting ExpressRoute circuits in different peering locations (for example, Singapore, Singapore2) to your virtual network. If one ExpressRoute circuit goes down, connectivity will fail over to another ExpressRoute circuit. By default, traffic leaving your virtual network is routed based on Equal Cost Multi-path Routing (ECMP). You can use Connection Weight to prefer one circuit to another. For more information, see [Optimizing ExpressRoute Routing](expressroute-optimize-routing.md).
### How do I ensure that my traffic destined for Azure Public services like Azure Storage and Azure SQL on Microsoft peering or public peering is preferred on the ExpressRoute path?
frontdoor Front Door Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-faq.md
- Title: Azure Front Door - Frequently Asked Questions
-description: This page provides answers to frequently asked questions about Azure Front Door
----- Previously updated : 10/20/2020---
-# Frequently asked questions for Azure Front Door
-
-This article answers common questions about Azure Front Door features and functionality. If you don't see the answer to your question, you can contact us through the following channels (in escalating order):
-
-1. The comments section of this article.
-2. [Azure Front Door UserVoice](https://feedback.azure.com/forums/217313-networking?category_id=345025).
-3. **Microsoft Support:** To create a new support request, in the Azure portal, on the **Help** tab, select the **Help + support** button, and then select **New support request**.
-
-## General
-
-### What is Azure Front Door?
-
-Azure Front Door is an Application Delivery Network (ADN) as a service, offering various layer 7 load-balancing capabilities for your applications. It provides dynamic site acceleration (DSA) along with global load balancing with near real-time failover. It is a highly available and scalable service, which is fully managed by Azure.
-
-### What features does Azure Front Door support?
-
-Azure Front Door supports dynamic site acceleration (DSA), TLS/SSL offloading and end to end TLS, Web Application Firewall, cookie-based session affinity, url path-based routing, free certificates and multiple domain management, and others. For a full list of supported features, see [Overview of Azure Front Door](front-door-overview.md).
-
-### What is the difference between Azure Front Door and Azure Application Gateway?
-
-While both Front Door and Application Gateway are layer 7 (HTTP/HTTPS) load balancers, the primary difference is that Front Door is a global service whereas Application Gateway is a regional service. While Front Door can load balance between your different scale units/clusters/stamp units across regions, Application Gateway allows you to load balance between your VMs, containers, and so on, within the scale unit.
-
-### When should we deploy an Application Gateway behind Front Door?
-
-The key scenarios why one should use Application Gateway behind Front Door are:
-- Front Door can perform path-based load balancing only at the global level, but if you want to load balance traffic even further within your virtual network (VNET), you should use Application Gateway.
-- Because Front Door doesn't work at a VM/container level, it can't do Connection Draining. However, Application Gateway allows you to do Connection Draining.
-- With an Application Gateway behind Front Door, you can achieve 100% TLS/SSL offload and route only HTTP requests within your virtual network (VNET).
-- Front Door and Application Gateway both support session affinity. While Front Door can direct subsequent traffic from a user session to the same cluster or backend in a given region, Application Gateway can affinitize the traffic to the same server within the cluster.
-### Can we deploy Azure Load Balancer behind Front Door?
-
-Azure Front Door needs a public VIP or a publicly available DNS name to route the traffic to. Deploying an Azure Load Balancer behind Front Door is a common use case.
-
-### What protocols does Azure Front Door support?
-
-Azure Front Door supports HTTP, HTTPS and HTTP/2.
-
-### How does Azure Front Door support HTTP/2?
-
-HTTP/2 protocol support is available to clients connecting to Azure Front Door only. The communication to backends in the backend pool is over HTTP/1.1. HTTP/2 support is enabled by default.
-
-### What resources are supported today as part of backend pool?
-
-Backend pools can be composed of Storage, Web App, Kubernetes instances, or any other custom hostname that has public connectivity. Azure Front Door requires that the backends are defined either via a public IP or a publicly resolvable DNS hostname. Members of backend pools can be across zones, regions, or even outside of Azure as long as they have public connectivity.
-
-### What regions is the service available in?
-
-Azure Front Door is a global service and is not tied to any specific Azure region. The only location you need to specify while creating a Front Door is the resource group location, which is basically specifying where the metadata for the resource group will be stored. Front Door resource itself is created as a global resource and the configuration is deployed globally to all the POPs (Point of Presence).
-
-### What are the POP locations for Azure Front Door?
-
-Azure Front Door has the same list of POP (Point of Presence) locations as Azure CDN from Microsoft. For the complete list of our POPs, kindly refer [Azure CDN POP locations from Microsoft](../cdn/cdn-pop-locations.md).
-
-### Is Azure Front Door a dedicated deployment for my application or is it shared across customers?
-
-Azure Front Door is a globally distributed multi-tenant service. So, the infrastructure for Front Door is shared across all its customers. However, by creating a Front Door profile, you define the specific configuration required for your application and no changes made to your Front Door impact other Front Door configurations.
-
-### Is HTTP->HTTPS redirection supported?
-
-Yes. In fact, Azure Front Door supports host, path, and query string redirection as part of URL redirection. Learn more about [URL redirection](front-door-url-redirect.md).
-
-### In what order are routing rules processed?
-
-Routes for your Front Door are not ordered and a specific route is selected based on the best match. Learn more about [How Front Door matches requests to a routing rule](front-door-route-matching.md).
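Unordered best-match selection can be pictured with a simplified prefix-matching sketch. This is an illustration only; Front Door's real algorithm also matches on frontend hosts and has richer wildcard rules:

```python
def best_match(route_patterns, request_path):
    """Among route path patterns that prefix-match the request, pick the
    most specific (longest) one. Patterns end in '*' for wildcard matching.
    Simplified: the real Front Door matching also considers frontend hosts."""
    candidates = [p for p in route_patterns
                  if request_path.startswith(p.rstrip("*"))]
    return max(candidates, key=lambda p: len(p.rstrip("*")), default=None)

routes = ["/*", "/images/*", "/images/catalog/*"]
# "/images/catalog/1.jpg" matches all three patterns, but the most
# specific route wins regardless of the order routes were defined in.
chosen = best_match(routes, "/images/catalog/1.jpg")
```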
-
-### How do I lock down the access to my backend to only Azure Front Door?
-
-> [!NOTE]
-> New SKU Front Door Premium provides a more recommended way to lock down your application via Private Endpoint. [Learn more about Private Endpoint](./standard-premium/concept-private-link.md)
-
-To lock down your application to accept traffic only from your specific Front Door, you will need to set up IP ACLs for your backend and then restrict the traffic on your backend to the specific value of the header 'X-Azure-FDID' sent by Front Door. These steps are detailed out as below:
-- Configure IP ACLing for your backends to accept traffic from Azure Front Door's backend IP address space and Azure's infrastructure services only. Refer to the IP details below for ACLing your backend:
-
- - Refer *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for Front Door's IPv4 backend IP address range or you can also use the service tag *AzureFrontDoor.Backend* in your [network security groups](../virtual-network/network-security-groups-overview.md#security-rules).
- - Azure's [basic infrastructure services](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) through virtualized host IP addresses: `168.63.129.16` and `169.254.169.254`
-
- > [!WARNING]
- > Front Door's backend IP space may change later, however, we will ensure that before that happens, that we would have integrated with [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). We recommend that you subscribe to [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for any changes or updates.
-- Look for the `Front Door ID` value under the Overview section from the Front Door portal page. You can then filter on the incoming header '**X-Azure-FDID**' sent by Front Door to your backend with that value to ensure only your own specific Front Door instance is allowed (because the IP ranges above are shared with other Front Door instances of other customers).
-- Apply rule filtering in your backend web server to restrict traffic based on the resulting 'X-Azure-FDID' header value. Note that some services like Azure App Service provide this [header based filtering](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance) capability without needing to change your application or host.
- Here's an example for [Microsoft Internet Information Services (IIS)](https://www.iis.net/):
-
- ``` xml
- <?xml version="1.0" encoding="UTF-8"?>
- <configuration>
- <system.webServer>
- <rewrite>
- <rules>
- <rule name="Filter_X-Azure-FDID" patternSyntax="Wildcard" stopProcessing="true">
- <match url="*" />
- <conditions>
- <add input="{HTTP_X_AZURE_FDID}" pattern="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" negate="true" />
- </conditions>
- <action type="AbortRequest" />
- </rule>
- </rules>
- </rewrite>
- </system.webServer>
- </configuration>
- ```
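For backends not hosted on IIS, the same header check from the rule above can be sketched in application code (the function name and sample ID are illustrative):

```python
def allow_request(headers, expected_fdid):
    """Analog of the IIS rule above for any backend: reject requests whose
    X-Azure-FDID header doesn't match your own Front Door ID. The lookup is
    case-insensitive, as HTTP header names are."""
    received = next((v for k, v in headers.items()
                     if k.lower() == "x-azure-fdid"), None)
    return received == expected_fdid

# Placeholder: substitute the Front Door ID from your portal's Overview page.
FRONT_DOOR_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

A request without the header, or with another tenant's ID, is rejected even though it arrived from the shared Front Door IP ranges.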
---
-### Can the anycast IP change over the lifetime of my Front Door?
-
-The frontend anycast IP for your Front Door should typically not change and may remain static for the lifetime of the Front Door. However, there are **no guarantees** for the same. Kindly do not take any direct dependencies on the IP.
-
-### Does Azure Front Door support static or dedicated IPs?
-
-No, Azure Front Door currently doesn't support static or dedicated frontend anycast IPs.
-
-### Does Azure Front Door support x-forwarded-for headers?
-
-Yes, Azure Front Door supports the X-Forwarded-For, X-Forwarded-Host, and X-Forwarded-Proto headers. For X-Forwarded-For if the header was already present then Front Door appends the client socket IP to it. Else, it adds the header with the client socket IP as the value. For X-Forwarded-Host and X-Forwarded-Proto, the value is overridden.
-
-Learn more about the [Front Door supported HTTP headers](front-door-http-headers-protocol.md).
-
-### How long does it take to deploy an Azure Front Door? Does my Front Door still work when being updated?
-
-Creating a new Front Door or updating an existing one takes about 3 to 5 minutes for global deployment. That means your Front Door configuration is deployed across all of our POPs globally in about 3 to 5 minutes.
-
-Note - Custom TLS/SSL certificate updates take about 30 minutes to be deployed globally.
-
-Any updates to routes, backend pools, and so on are seamless and cause zero downtime (if the new configuration is correct). Certificate updates are also atomic and won't cause any outage, unless you switch from 'AFD Managed' to 'Use your own cert' or vice versa.
--
-## Configuration
-
-### Can Azure Front Door load balance or route traffic within a virtual network?
-
-Azure Front Door (AFD) requires a public IP or publicly resolvable DNS name to route traffic, so it can't route directly within a virtual network. However, placing an Application Gateway or Azure Load Balancer between Front Door and the backends addresses this scenario.
-
-### What are the various timeouts and limits for Azure Front Door?
-
-Learn about all the documented [timeouts and limits for Azure Front Door](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits).
-
-### How long does it take for a rule to take effect after being added to the Front Door Rules Engine?
-
-The Rules Engine configuration takes about 10 to 15 minutes to complete an update. You can expect the rule to take effect as soon as the update is completed.
-
-### Can I configure Azure CDN behind my Front Door profile or vice versa?
-
-Azure Front Door and Azure CDN can't be configured together because both services use the same Azure edge sites when responding to requests.
-
-## Performance
-
-### How does Azure Front Door support high availability and scalability?
-
-Azure Front Door is a globally distributed multi-tenant platform with huge volumes of capacity to cater to your application's scalability needs. Delivered from the edge of Microsoft's global network, Front Door provides global load-balancing capability that allows you to fail over your entire application or even individual microservices across regions or different clouds.
-
-## TLS configuration
-
-### What TLS versions are supported by Azure Front Door?
-
-All Front Door profiles created after September 2019 use TLS 1.2 as the default minimum.
-
-Front Door supports TLS versions 1.0, 1.1 and 1.2. TLS 1.3 is not yet supported.
-
-### What certificates are supported on Azure Front Door?
-
-To enable the HTTPS protocol for securely delivering content on a Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
-The Front Door managed option provisions a standard TLS/SSL certificate via Digicert, stored in Front Door's Key Vault. If you choose to use your own certificate, you can onboard a certificate from a supported CA; it can be a standard TLS certificate, an extended validation certificate, or even a wildcard certificate. Self-signed certificates are not supported. Learn [how to enable HTTPS for a custom domain](./front-door-custom-domain-https.md).
-
-### Does Front Door support autorotation of certificates?
-
-For the Front Door managed certificate option, the certificates are autorotated by Front Door. If you are using a Front Door managed certificate and see that the certificate expiry date is less than 60 days away, file a support ticket.
-</br>For your own custom TLS/SSL certificate, autorotation isn't supported. Similar to how it was set up the first time for a given custom domain, you will need to point Front Door to the right certificate version in your Key Vault and ensure that the service principal for Front Door still has access to the Key Vault. This updated certificate rollout operation by Front Door is atomic and doesn't cause any production impact provided the subject name or SAN for the certificate doesn't change.
-
-### What are the current cipher suites supported by Azure Front Door?
-
-For TLS 1.2, the following cipher suites are supported:
-
-- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
-
-When using custom domains with TLS 1.0/1.1 enabled, the following cipher suites are supported:
-
-- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
-- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
-- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
-- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
-- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
-- TLS_RSA_WITH_AES_256_GCM_SHA384
-- TLS_RSA_WITH_AES_128_GCM_SHA256
-- TLS_RSA_WITH_AES_256_CBC_SHA256
-- TLS_RSA_WITH_AES_128_CBC_SHA256
-- TLS_RSA_WITH_AES_256_CBC_SHA
-- TLS_RSA_WITH_AES_128_CBC_SHA
-- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
-- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
-
-### Can I configure TLS policy to control TLS Protocol versions?
-
-You can configure a minimum TLS version in Azure Front Door in the custom domain HTTPS settings via Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2.
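In the REST API linked above, the minimum version is controlled by the `minimumTlsVersion` property of a frontend endpoint's custom HTTPS configuration. A sketch of the relevant request-body fragment (the surrounding structure is illustrative; check the linked API reference for the exact shape in your API version):

```json
{
  "properties": {
    "customHttpsConfiguration": {
      "minimumTlsVersion": "1.2"
    }
  }
}
```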
-
-### Can I configure Front Door to only support specific cipher suites?
-
-No, configuring Front Door for specific cipher suites is not supported. However, you can get your own custom TLS/SSL certificate from your Certificate Authority (say Verisign, Entrust, or Digicert) and have specific cipher suites marked on the certificate when you have it generated.
-
-### Does Front Door support OCSP stapling?
-
-Yes, OCSP stapling is supported by default by Front Door and no configuration is required.
-
-### Does Azure Front Door also support re-encryption of traffic to the backend?
-
-Yes, Azure Front Door supports TLS/SSL offload, and end to end TLS, which re-encrypts the traffic to the backend. In fact, since the connections to the backend happen over its public IP, it is recommended that you configure your Front Door to use HTTPS as the forwarding protocol.
-
-### Does Front Door support self-signed certificates on the backend for HTTPS connection?
-
-No, self-signed certificates are not supported on Front Door and the restriction applies to both:
-
-1. **Backends**: You cannot use self-signed certificates when forwarding traffic as HTTPS, for HTTPS health probes, or when filling the cache from the origin for routing rules with caching enabled.
-2. **Frontend**: You cannot use self-signed certificates when using your own custom TLS/SSL certificate for enabling HTTPS on your custom domain.
-
-### Why is HTTPS traffic to my backend failing?
-
-There are two common reasons why HTTPS traffic to your backend, whether for health probes or for forwarding requests, might fail:
-
-1. **Certificate subject name mismatch**: For HTTPS connections, Front Door expects that your backend presents a certificate from a valid CA with subject name(s) matching the backend hostname. As an example, if your backend hostname is set to `myapp-centralus.contosonews.net` and the certificate that your backend presents during the TLS handshake has neither `myapp-centralus.contosonews.net` nor `*.contosonews.net` in the subject name, Front Door refuses the connection, which results in an error.
- 1. **Solution**: Although it isn't recommended from a compliance standpoint, you can work around this error by disabling the certificate subject name check for your Front Door. This option is under Settings in the Azure portal and under BackendPoolsSettings in the API.
-2. **Backend hosting certificate from invalid CA**: Only certificates from [valid CAs](./front-door-troubleshoot-allowed-ca.md) can be used at the backend with Front Door. Certificates from internal CAs or self-signed certificates are not allowed.
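The subject-name check in reason 1 follows standard TLS hostname matching: an exact name match, or a single-label wildcard such as `*.contosonews.net`. A small illustrative shell sketch of that matching rule (not Front Door code):

```shell
# Does a certificate subject name cover a given backend hostname?
# Covers: exact match, or a "*.domain" wildcard matching exactly one label.
matches_host() {
  subject="$1"
  host="$2"
  case "$subject" in
    "$host") return 0 ;;  # exact match
    \*.*)
      # Wildcard: strip one label from the host; it must equal the
      # wildcard's base domain, and the host must actually have a label.
      [ "${host#*.}" = "${subject#\*.}" ] && [ "${host%%.*}" != "$host" ] && return 0
      ;;
  esac
  return 1
}

matches_host "myapp-centralus.contosonews.net" "myapp-centralus.contosonews.net" && echo "covered"
matches_host "*.contosonews.net" "myapp-centralus.contosonews.net" && echo "covered"
matches_host "*.contosonews.net" "a.b.contosonews.net" || echo "not covered (wildcard spans one label only)"
```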
-
-### Can I use client/mutual authentication with Azure Front Door?
-
-No. Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in [RFC 5246](https://tools.ietf.org/html/rfc5246), currently, Azure Front Door doesn't support client/mutual authentication.
-
-## Diagnostics and logging
-
-### What types of metrics and logs are available with Azure Front Door?
-
-For information on logs and other diagnostic capabilities, see [Monitoring metrics and logs for Front Door](front-door-diagnostics.md).
-
-### What is the retention policy on the diagnostics logs?
-
-Diagnostic logs flow to the customer's storage account, and customers can set the retention policy based on their preference. Diagnostic logs can also be sent to an Event Hub or Azure Monitor logs. For more information, see [Azure Front Door Diagnostics](front-door-diagnostics.md).
-
-### How do I get audit logs for Azure Front Door?
-
-Audit logs are available for Azure Front Door. In the portal, click **Activity Log** in the menu blade of your Front Door to access the audit log.
-
-### Can I set alerts with Azure Front Door?
-
-Yes, Azure Front Door does support alerts. Alerts are configured on metrics.
-
-## Next steps
-
-- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door How To Redirect Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-how-to-redirect-https.md
You can use the Azure portal to [create a Front Door](quickstart-create-front-do
1. Choose a *subscription* and then either use an existing resource group or create a new one. Select **Next** to enter the configuration tab.

   > [!NOTE]
- > The location asked in the UI is for the resource group only. Your Front Door configuration will get deployed across all of [Azure Front Door's POP locations](front-door-faq.md#what-are-the-pop-locations-for-azure-front-door).
+ > The location asked in the UI is for the resource group only. Your Front Door configuration will get deployed across all of [Azure Front Door's POP locations](front-door-faq.yml#what-are-the-pop-locations-for-azure-front-door-).
:::image type="content" source="./media/front-door-url-redirect/front-door-create-basics.png" alt-text="Configure basics for new Front Door":::
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-http-headers-protocol.md
Front Door includes headers for an incoming request unless they're removed becau
| X-Azure-SocketIP | *X-Azure-SocketIP: 127.0.0.1* </br> Represents the socket IP address associated with the TCP connection that the current request originated from. A request's client IP address might not be equal to its socket IP address because it can be arbitrarily overwritten by a user.|
| X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> A unique reference string that identifies a request served by Front Door. It's used to search access logs and critical for troubleshooting.|
| X-Azure-RequestChain | *X-Azure-RequestChain: hops=1* </br> A header that Front Door uses to detect request loops, and users shouldn't take a dependency on it. |
-| X-Azure-FDID | *X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da* <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.md#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door) |
+| X-Azure-FDID | *X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da* <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-) |
| X-Forwarded-For | *X-Forwarded-For: 127.0.0.1* </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. |
| X-Forwarded-Host | *X-Forwarded-Host: contoso.azurefd.net* </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Front Door may differ for the backend server handling the request. |
| X-Forwarded-Proto | *X-Forwarded-Proto: http* </br> The X-Forwarded-Proto HTTP header field is often used to identify the originating protocol of an HTTP request. Front Door based on configuration might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. |
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-waf.md
Finally, if you're using a custom domain to reach your web application and want
## Lock down your web application
-We recommend you ensure only Azure Front Door edges can communicate with your web application. Doing so will ensure no one can bypass the Azure Front Door protection and access your application directly. To accomplish this lockdown, see [How do I lock down the access to my backend to only Azure Front Door?](./front-door-faq.md#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door).
+We recommend you ensure only Azure Front Door edges can communicate with your web application. Doing so will ensure no one can bypass the Azure Front Door protection and access your application directly. To accomplish this lockdown, see [How do I lock down the access to my backend to only Azure Front Door?](./front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-).
## Clean up resources
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/scope.md
# Understand scope in Azure Policy

There are many settings that determine which resources are capable of being evaluated and which
-resources are evaluated by Azure Policy. The primary concept for these controls is _scope_. For a
-high-level overview, see
+resources are evaluated by Azure Policy. The primary concept for these controls is _scope_. Scope in
+Azure Policy is based on how scope works in Azure Resource Manager. For a high-level overview, see
[Scope in Azure Resource Manager](../../../azure-resource-manager/management/overview.md#understand-scope). This article explains the importance of _scope_ in Azure Policy and its related objects and properties.
iot-accelerators Iot Accelerators Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-accelerators/iot-accelerators-permissions.md
For more information about users and roles in Azure AD, see the following resour
## Choose your device
-The AzureIoTSolutions.com site links to the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com/).
+The AzureIoTSolutions.com site links to the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/).
The catalog lists hundreds of certified IoT hardware devices you can connect to your solution accelerators to start building your IoT solution.
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
Some [application templates](concepts-app-templates.md) already include device t
## Create a device template from the device catalog
-As a builder, you can quickly start building out your solution by using a certified device. See the list in the [Azure IoT Device Catalog](https://catalog.azureiotsolutions.com/alldevices). IoT Central integrates with the device catalog so you can import a device model from any of the certified devices. To create a device template from one of these devices in IoT Central:
+As a builder, you can quickly start building out your solution by using a certified device. See the list in the [Azure IoT Device Catalog](https://devicecatalog.azure.com). IoT Central integrates with the device catalog so you can import a device model from any of the certified devices. To create a device template from one of these devices in IoT Central:
1. Go to the **Device templates** page in your IoT Central application. 1. Select **+ New**, and then select any of the certified devices from the catalog. IoT Central creates a device template based on this device model.
iot-edge How To Auto Provision Simulated Device Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-simulated-device-linux.md
The tasks are as follows:
1. Install the IoT Edge runtime and connect the device to IoT Hub. > [!TIP]
-> This article describes how to test DPS provisioning using a TPM simulator, but much of it applies to physical TPM hardware such as the [Infineon OPTIGA&trade; TPM](https://catalog.azureiotsolutions.com/details?title=OPTIGA-TPM-SLB-9670-Iridium-Board), an Azure Certified for IoT device.
+> This article describes how to test DPS provisioning using a TPM simulator, but much of it applies to physical TPM hardware such as the [Infineon OPTIGA&trade; TPM](https://devicecatalog.azure.com/devices/3f52cdee-bbc4-d74e-6c79-a2546f73df4e), an Azure Certified for IoT device.
> > If you're using a physical device, you can skip ahead to the [Retrieve provisioning information from a physical device](#retrieve-provisioning-information-from-a-physical-device) section in this article.
iot-fundamentals Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-introduction.md
An IoT device is typically made up of a circuit board with sensors attached that
* An accelerometer in an elevator. * Presence sensors in a room.
-There's a wide variety of devices available from different manufacturers to build your solution. For a list of devices certified to work with Azure IoT Hub, see the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com/alldevices). For prototyping, you can use devices such as an [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) or a [Raspberry Pi](https://www.raspberrypi.org/). The Devkit has built-in sensors for temperature, pressure, humidity, and a gyroscope, accelerometer, and magnetometer. The Raspberry Pi lets you attach many different types of sensor.
+There's a wide variety of devices available from different manufacturers to build your solution. For a list of devices certified to work with Azure IoT Hub, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). For prototyping, you can use devices such as an [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) or a [Raspberry Pi](https://www.raspberrypi.org/). The Devkit has built-in sensors for temperature, pressure, humidity, and a gyroscope, accelerometer, and magnetometer. The Raspberry Pi lets you attach many different types of sensor.
Microsoft provides open-source [Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) that you can use to build the apps that run on your devices. These [SDKs simplify and accelerate](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/) the development of your IoT solutions.
iot-fundamentals Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-services-and-technologies.md
The [IoT Central application platform](https://apps.azureiotcentral.com) reduces
Azure IoT Central is a fully managed application platform that you can use to create custom IoT solutions. IoT Central uses application templates to create solutions. There are templates for generic solutions and for specific industries such as energy, healthcare, government, and retail. IoT Central application templates let you deploy an IoT Central application in minutes that you can then customize with themes, dashboards, and views.
-Choose devices from the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com) to quickly connect to your solution. Use the IoT Central web UI to monitor and manage your devices to keep them healthy and connected. Use connectors and APIs to integrate your IoT Central application with other business applications.
+Choose devices from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com) to quickly connect to your solution. Use the IoT Central web UI to monitor and manage your devices to keep them healthy and connected. Use connectors and APIs to integrate your IoT Central application with other business applications.
As a fully managed application platform, IoT Central has a simple, predictable pricing model.
To build an IoT solution from scratch, or extend a solution created using IoT Ce
### Devices
-Develop your IoT devices using one of the [Azure IoT Starter Kits](https://catalog.azureiotsolutions.com/kits) or choose a device to use from the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
+Develop your IoT devices using one of the [Azure IoT Starter Kits](https://devicecatalog.azure.com/kits) or choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
You can further simplify how you create the embedded code for your devices by using the [IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) service. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application.
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
-# Device Update Agent
+# Device Update Agent Provisioning
-Device Update for IoT Hub supports two forms of updates – image-based and package-based.
+The Device Update Module agent can run alongside other system processes and [IoT Edge modules](https://docs.microsoft.com/azure/iot-edge/iot-edge-modules) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
-* Image updates provide a higher level of confidence in the end-state of the device. It is typically easier to replicate the results of an image update between a pre-production environment and a production environment, since it doesn't pose the same challenges as packages and their dependencies. Due to their atomic nature, one can also easily adopt an A/B failover model.
-* Package-based updates are targeted updates that alter only a specific component or application on the device. This leads to lower bandwidth consumption and helps reduce the time to download and install the update. Package updates typically allow for less device downtime when applying an update and avoid the overhead of creating images.
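The A/B failover model mentioned above boils down to slot bookkeeping: install the new image to the inactive slot, then flip the active marker only after the new image verifies, so rollback simply means not flipping. A minimal illustrative sketch (a demonstration assumption, not the Device Update implementation):

```shell
# Track the active slot ("A" or "B") in a state file. The update flow writes
# to the inactive slot and flips the marker only after verification succeeds.
STATE_FILE="${STATE_FILE:-/tmp/ab-demo-state}"

active_slot()   { cat "$STATE_FILE" 2>/dev/null || echo "A"; }
inactive_slot() { [ "$(active_slot)" = "A" ] && echo "B" || echo "A"; }
commit_update() { next="$(inactive_slot)"; echo "$next" > "$STATE_FILE"; }

echo "A" > "$STATE_FILE"
echo "installing new image to slot $(inactive_slot)"  # prints: installing new image to slot B
commit_update  # flip only after the new image verified
echo "device now boots from slot $(active_slot)"      # prints: device now boots from slot B
```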
-Follow the links below on how to Build, Run and Modify the Device Update Agent.
+## Module identity vs device identity
-## Build the Device Update Agent
+In IoT Hub, under each device identity, you can create up to 50 module identities. Each module identity implicitly generates a module twin. On the device side, the IoT Hub device SDKs enable you to create modules where each one opens an independent connection to IoT Hub. Module identity and module twin provide similar capabilities to device identity and device twin, but at a finer granularity. [Learn more about Module Identities in IoT Hub](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-module-twins).
++
+## Support for Device Update
+
+The following IoT device types are currently supported with Device Update:
+
+* Linux devices (IoT Edge and Non-IoT Edge devices):
+ * Image A/B update:
 - Yocto - ARM64 (reference image), extensible via open source to [build your own images](device-update-agent-provisioning.md#how-to-build-and-run-device-update-agent) for other architectures as needed.
+ - Ubuntu 18.04 simulator
+
+ * Package Agent supported builds for the following platforms/architectures:
+ - Ubuntu Server 18.04 x64 Package Agent
+ - Debian 9
+
+* Constrained devices:
+ * AzureRTOS Device Update agent samples: [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+
+* Disconnected devices:
+ * [Understand support for disconnected device update](connected-cache-disconnected-device-update.md)
++
+## Prerequisites
+
+If you're setting up the IoT device/IoT Edge device for [package based updates](https://docs.microsoft.com/azure/iot-hub-device-update/understand-device-update#support-for-a-wide-range-of-update-artifacts), add packages.microsoft.com to your machine's repositories by following these steps:
+
+1. Log onto the machine or IoT device on which you intend to install the Device Update agent.
+
+1. Open a Terminal window.
+
+1. Install the repository configuration that matches your device's operating system.
+ ```shell
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+1. Copy the generated list to the sources.list.d directory.
+ ```shell
+ sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+ ```
+
+1. Install the Microsoft GPG public key.
+ ```shell
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+ ```
+
+ ```shell
+ sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+ ```
+
+## How to provision the Device Update agent as a Module Identity
+
+This section describes how to provision the Device Update agent as a module identity on IoT Edge enabled devices, non-Edge IoT devices, and other IoT devices.
++
+### On IoT Edge enabled devices
+
+Follow these instructions to provision the Device Update agent on [IoT Edge enabled devices](https://docs.microsoft.com/azure/iot-edge).
+
+1. Follow the instructions to [Install and provision the Azure IoT Edge runtime](https://docs.microsoft.com/azure/iot-edge/how-to-install-iot-edge?view=iotedge-2020-11&preserve-view=true).
+
+1. Then install the Device Update agent from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) and you are now ready to start the Device Update agent on your IoT Edge device.
++
+### On non-Edge IoT Linux devices
+
+Follow these instructions to provision the Device Update agent on your IoT Linux devices.
+
+1. Install the IoT Identity Service and add the latest version to your IoT device.
+ 1. Log onto the machine or IoT device.
+ 1. Open a terminal window.
+ 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
+
+ ```shell
+ sudo apt-get install aziot-identity-service
+ ```
+
+1. Provision the IoT Identity Service to get the IoT device information.
+ * Create a custom copy of the configuration template so we can add the provisioning information. In a terminal, enter the below command.
+
+ ```shell
+ sudo cp /etc/aziot/config.toml.template /etc/aziot/config.toml
+ ```
+
+1. Next, edit the configuration file to include the connection string of the device identity that this device or machine should use. In a terminal, enter the command below.
+
+ ```shell
+ sudo nano /etc/aziot/config.toml
+ ```
+
+1. You should see a message like the following example:
+
+ :::image type="content" source="media/understand-device-update/config.png" alt-text="Diagram of IoT Identity Service config file." lightbox="media/understand-device-update/config.png":::
+
 1. In the same nano window, find the block labeled "Manual provisioning with connection string".
 1. In the window, delete the "#" symbol ahead of 'provisioning'.
 1. In the window, delete the "#" symbol ahead of 'source'.
 1. In the window, delete the "#" symbol ahead of 'connection_string'.
 1. In the window, delete the string within the quotes to the right of 'connection_string' and add your connection string there.
 1. Save your changes to the file with 'Ctrl+X', then 'Y', and press Enter.
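After those edits, the relevant block of `/etc/aziot/config.toml` should look roughly like the following (the connection string shown is a placeholder, and the exact template layout can differ between IoT Identity Service versions):

```toml
# Manual provisioning with connection string
[provisioning]
source = "manual"
connection_string = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
```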
+
+1. Now apply the configuration and restart the IoT Identity Service with the command below. A "Done!" printout means you have successfully configured the IoT Identity Service.
+
+ > [!Note]
 > The IoT Identity Service currently registers module identities with IoT Hub by using symmetric keys.
+
+ ```shell
+ sudo aziotctl config apply
+ ```
+
+1. Finally, install the Device Update agent from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases). You are now ready to start the Device Update agent on your IoT device.
++
+### Other IoT devices
+
+The Device Update agent can also be configured without the IoT Identity Service for testing or on constrained devices. Follow the steps below to provision the Device Update agent using a connection string (from the module or device).
+
+1. Install Device Update agent from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases).
+
+1. Log onto the machine or IoT Edge device/IoT device.
+
+1. Open a terminal window.
+
+1. Add the connection string to the [Device Update configuration file](device-update-configuration-file.md):
+ 1. Enter the below in the terminal window:
+ - [Package updates](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/adu-conf.txt
+ - [Image updates](device-update-raspberry-pi.md) use: sudo nano /adu/adu-conf.txt
+
 1. You should see a window open with some text in it. The first time you provision the Device Update agent on the IoT device, delete the entire string following 'connection_string='. It is just placeholder text.
+
+ 1. In the terminal, replace `<your-connection-string>` with the connection string of the device for your instance of the Device Update agent.
+
+ > [!Important]
+ > Do not add quotes around the connection string.
+
+ - `connection_string=<your-connection-string>`
+
+ 1. Save the file and exit the editor.
+
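As a minimal sketch of what the finished file contains (the connection string below is a hypothetical placeholder; substitute your own device's connection string):

```shell
# Hypothetical placeholder connection string -- substitute your device's real one.
CONNECTION_STRING='HostName=contoso-hub.azure-devices.net;DeviceId=my-device;SharedAccessKey=example'

# Write the single unquoted key=value line the agent expects.
printf 'connection_string=%s\n' "$CONNECTION_STRING" > adu-conf.txt

# Inspect the result.
cat adu-conf.txt
```

On the device itself you would write the same line to `/etc/adu/adu-conf.txt` (package updates) or `/adu/adu-conf.txt` (image updates) with `sudo`.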
+1. You are now ready to start the Device Update agent on your IoT Edge device.
++
+## How to start the Device Update Agent
+
+This section describes how to start the Device Update agent and verify that it is running successfully as a module identity on your IoT device.
+
+1. Log into the machine or device that has the Device Update agent installed.
+
+1. Open a terminal window and enter the command below.
+ ```shell
+ sudo systemctl restart adu-agent
+ ```
+
+1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md).
+ ```shell
+ sudo systemctl status adu-agent
+ ```
+
+ You should see status OK.
+
+1. On the IoT Hub portal, go to IoT device or IoT Edge devices to find the device that you configured with Device Update agent. There you will see the Device Update agent running as a module. For example:
+
    :::image type="content" source="media/understand-device-update/device-update-module.png" alt-text="Diagram of Device Update module name." lightbox="media/understand-device-update/device-update-module.png":::
++
+## How to build and run Device Update Agent
+
+You can also build and modify your own custom Device Update agent.
Follow the instructions to [build](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md) the Device Update Agent from source.
-## Run the Device Update Agent
- Once the agent is successfully building, it's time [run](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-run-agent.md) the agent.
-## Modifying the Device Update Agent
- Now, make the changes needed to incorporate the agent into your image. Look at how to [modify](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-modify-the-agent-code.md) the Device Update Agent for guidance.
-### Troubleshooting Guide
+
+## Troubleshooting guide
If you run into issues, review the Device Update for IoT Hub [Troubleshooting Guide](troubleshoot-device-update.md) to help unblock any possible issues and collect necessary information to provide to Microsoft.
-## Next Steps
-Use below pre-built images and binaries for an easy demonstration of Device Update for IoT Hub.
+## Next steps
+
+You can use the following pre-built images and binaries for a simple demonstration of Device Update for IoT Hub:
+
+- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md), extensible via open source to build your own images for other architectures as needed.
-[Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md)
+- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-[Image Update:Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-[Package Update:Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-raspberry-pi.md
Read the license terms prior to using the agent. Your installation and use const
Now, the device needs to be added to the Azure IoT Hub. From within Azure IoT Hub, a connection string will be generated for the device.
-1. From the Azure portal, launch the Device Update IoT Hub.
+1. From the Azure portal, launch the Azure IoT Hub.
2. Create a new device.
3. On the left-hand side of the page, navigate to 'Explorers' > 'IoT Devices' > Select "New".
IoT Hub, a connection string will be generated for the device.
Replace `<device connection string>` with your connection string

   ```shell
- echo "connection_string=<device connection string>" > adu-conf.txt
- echo "aduc_manufacturer=ADUTeam" >> adu-conf.txt
- echo "aduc_model=RefDevice" >> adu-conf.txt
+ echo "connection_string=<device connection string>" > /adu/adu-conf.txt
+ echo "aduc_manufacturer=ADUTeam" >> /adu/adu-conf.txt
+ echo "aduc_model=RefDevice" >> /adu/adu-conf.txt
   ```

## Connect the device in Device Update IoT Hub
Use that version number in the Import Update step below.
1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin.
+2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane, find your IoT device and navigate to the Device Twin or Module Twin.
-3. In the Device Twin, delete any existing Device Update tag value by setting them to null.
+3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you are using a Device identity with the Device Update agent, make these changes on the Device Twin.
4. Add a new Device Update tag value as shown below.
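The step above refers to a twin tag similar to the following sketch (assuming the `ADUGroup` tag name that Device Update uses to group devices; the value is a group name you choose):

```json
"tags": {
    "ADUGroup": "<CustomTagValue>"
}
```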
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
Agent running. [main]
1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin.
+2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane, find your IoT device and navigate to the Device Twin or Module Twin.
-3. In the Device Twin, delete any existing Device Update tag value by setting them to null.
+3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you are using a Device identity with the Device Update agent, make these changes on the Device Twin.
4. Add a new Device Update tag value as shown below.
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Device Update for IoT Hub supports two forms of updates ΓÇô image-based and package-based.
-Package-based updates are targeted updates that alter only a specific component or application on the device. This leads to lower consumption of bandwidth and helps reduce the time to download and install the update. Package updates typically allow for less downtime of devices when applying an update and avoid the overhead of creating images.
+Package-based updates are targeted updates that alter only a specific component or application on the device. Package-based updates lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package updates typically allow for less downtime of devices when applying an update and avoid the overhead of creating images.
This end-to-end tutorial walks you through updating Azure IoT Edge on Ubuntu Server 18.04 x64 by using the Device Update package agent. Although the tutorial demonstrates updating IoT Edge, using similar steps you could update other packages such as the container engine it uses.
In this tutorial you will learn how to:
## Prepare a device ### Using the Automated Deploy to Azure Button
-For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/using-cloud-init.md)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent and then automatically configures the device with provisioning information using the device connection string for an IoT Edge device (prerequisite) that you supply. This avoids the need to start an SSH session to complete setup.
+For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/using-cloud-init.md)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent and then automatically configures the device with provisioning information using the device connection string for an IoT Edge device (prerequisite) that you supply. The Azure Resource Manager template also avoids the need to start an SSH session to complete setup.
1. To begin, click the button below:
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
1. Verify that the deployment has completed successfully. Allow a few minutes after deployment completes for the post-installation and configuration to finish installing IoT Edge and the Device Update package agent.
- A virtual machine resource should have been deployed into the selected resource group. Take note of the machine name, this should be in the format `vm-0000000000000`. Also, take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
    A virtual machine resource should have been deployed into the selected resource group. Take note of the machine name, which should be in the format `vm-0000000000000`. Also, take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
The **DNS Name** can be obtained from the **Overview** section of the newly deployed virtual machine within the Azure portal.
For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/usi
`ssh <adminUsername>@<DNS_Name>` ### (Optional) Manually prepare a device
-The following manual steps to install and configure the device are equivalent to those that were automated by this [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt). They can be used to prepare a physical device.
+The following manual steps install and configure the device, mirroring the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt). These steps can be used to prepare a physical device.
1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-install-iot-edge.md?view=iotedge-2020-11&preserve-view=true). > [!NOTE]
Read the license terms prior to using a package. Your installation and use of a
1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-2. From 'IoT Edge' on the left navigation pane find your IoT Edge device and navigate to the Device Twin.
+2. From 'IoT Edge' on the left navigation pane, find your IoT Edge device and navigate to the Device Twin or Module Twin.
-3. In the Device Twin, delete any existing Device Update tag value by setting them to null.
+3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you are using a Device identity with the Device Update agent, make these changes on the Device Twin.
4. Add a new Device Update tag value as shown below.
Read the license terms prior to using a package. Your installation and use of a
## Import update
-1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in Github and click the "Assets" drop-down.
+1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and click the "Assets" drop-down.
3. Download the `Edge.package.update.samples.zip` by clicking on it.
This update will update the `aziot-identity-service` and the `aziot-edge` packag
8. Select "Submit" to start the import process.
-9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
+9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, the import process may complete in a few minutes but could take longer.
:::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::
This update will update the `aziot-identity-service` and the `aziot-edge` packag
1. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
-You have now completed a successful end-to-end package update using Device Update for IoT Hub on a Ubuntu Server 18.04 x64 device.
+You have now completed a successful end-to-end package update using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.
## Clean up resources
-When no longer needed, clean up your device update account, instance, IoT Hub and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so, by going to each individual resource and selecting "Delete". Note that you need to clean up a device update instance before cleaning up the device update account.
+When no longer needed, clean up your device update account, instance, IoT Hub, and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so by going to each individual resource and selecting "Delete". You need to clean up a device update instance before cleaning up the device update account.
## Next steps
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/understand-device-update.md
Importing is how your updates are ingested into Device Update so they can be dep
full-image updates that update an entire OS partition at once, or an apt Manifest that describes all the packages you want to update on your device. To import updates into Device Update, you first create an import manifest describing the update, then upload the update file(s) and the import
-manifest to an Internet-accessible location. After that, you can use the Azure portal or the [Device Update Import
-REST API](https://github.com/Azure/iot-hub-device-update/tree/main/docs/publish-api-reference) to initiate the asynchronous process of update import. Device Update uploads the files, processes
+manifest to an Internet-accessible location. After that, you can use the Azure portal or the [Device Update
+REST API](https://docs.microsoft.com/rest/api/deviceupdate/) to initiate the asynchronous process of update import. Device Update uploads the files, processes
them, and makes them available for distribution to IoT devices. For sensitive content, protect the download using a shared access signature (SAS), such as an ad-hoc SAS for Azure Blob Storage. [Learn more about
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-sdks.md
Learn about the [benefits of developing using Azure IoT SDKs](https://azure.micr
Supported platforms for the SDKs can be found in [Azure IoT SDKs Platform Support](iot-hub-device-sdk-platform-support.md).
-For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com/) or individual repository.
+For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
## Azure IoT Hub Device SDKs
iot-hub Iot Hub Device Sdk C Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-sdk-c-intro.md
The **Azure IoT device SDK** is a set of libraries designed to simplify the proc
The Azure IoT device SDK for C is written in ANSI C (C99) to maximize portability. This feature makes the libraries well suited to operate on multiple platforms and devices, especially where minimizing disk and memory footprint is a priority.
-There are a broad range of platforms on which the SDK has been tested (see the [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com/) for details). Although this article includes walkthroughs of sample code running on the Windows platform, the code described in this article is identical across the range of supported platforms.
+The SDK has been tested on a broad range of platforms (see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) for details). Although this article includes walkthroughs of sample code running on the Windows platform, the code described in this article is identical across the range of supported platforms.
The following video presents an overview of the Azure IoT SDK for C:
iot-hub Iot Hub Device Sdk Platform Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-sdk-platform-support.md
In addition to the device SDKs, Microsoft provides several other avenues to empo
* Microsoft collaborates with several partner companies to help them publish development kits, based on the Azure IoT C SDK, for their hardware platforms.
-* Microsoft works with Microsoft trusted partners to provide an ever-expanding set of devices that have been tested and certified for Azure IoT. For a current list of these devices, see the [Azure certified for IoT device catalog](https://catalog.azureiotsolutions.com/).
+* Microsoft works with Microsoft trusted partners to provide an ever-expanding set of devices that have been tested and certified for Azure IoT. For a current list of these devices, see the [Azure certified for IoT device catalog](https://devicecatalog.azure.com/).
* Microsoft provides a platform abstraction layer (PAL) in the Azure IoT Hub Device C SDK that helps developers to easily port the SDK to their platform. To learn more, see the [C SDK porting guidance](https://github.com/Azure/azure-c-shared-utility/blob/master/devdoc/porting_guide.md).
If your device platform isn't covered by one of the previous sections, you can c
Microsoft works with a number of partners to continually expand the Azure IoT universe with Azure IoT tested and certified devices.
-* To browse Azure IoT certified devices, see [Microsoft Azure Certified for IoT Device Catalog](https://catalog.azureiotsolutions.com/).
+* To browse Azure IoT certified devices, see [Microsoft Azure Certified for IoT Device Catalog](https://devicecatalog.azure.com/).
-* To learn more about the Azure Certified for IoT ecosystem, see [Join the Certified for IoT ecosystem](https://catalog.azureiotsolutions.com/register).
+* To learn more about the Azure Certified for IoT ecosystem, see [Join the Certified for IoT ecosystem](../certification/overview.md).
## Connecting to IoT Hub without an SDK
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-mqtt-support.md
To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt
To learn more about planning your IoT Hub deployment, see:
-* [Azure Certified for IoT device catalog](https://catalog.azureiotsolutions.com/)
+* [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/)
* [Support additional protocols](iot-hub-protocol-gateway.md) * [Compare with Event Hubs](iot-hub-compare-event-hubs.md) * [Scaling, HA, and DR](iot-hub-scaling.md)
key-vault Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-overview.md
When you create a key vault in an Azure subscription, it's automatically associa
- **Application-only**: The application represents a service principal or managed identity. This identity is the most common scenario for applications that periodically need to access certificates, keys, or secrets from the key vault. For this scenario to work, the `objectId` of the application must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.
- **User-only**: The user accesses the key vault from any application registered in the tenant. Examples of this type of access include Azure PowerShell and the Azure portal. For this scenario to work, the `objectId` of the user must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.
-- **Application-plus-user** (sometimes referred as _compound identity_): The user is required to access the key vault from a specific application _and_ the application must use the on-behalf-of authentication (OBO) flow to impersonate the user. For this scenario to work, both `applicationId` and `objectId` must be specified in the access policy. The `applicationId` identifies the required application and the `objectId` identifies the user. Currently, this option isn't available for data plane Azure RBAC (preview).
+- **Application-plus-user** (sometimes referred to as _compound identity_): The user is required to access the key vault from a specific application _and_ the application must use the on-behalf-of authentication (OBO) flow to impersonate the user. For this scenario to work, both `applicationId` and `objectId` must be specified in the access policy. The `applicationId` identifies the required application and the `objectId` identifies the user. Currently, this option isn't available for data plane Azure RBAC.
In all types of access, the application authenticates with Azure AD. The application uses any [supported authentication method](../../active-directory/develop/authentication-vs-authorization.md) based on the application type. The application acquires a token for a resource in the plane to grant access. The resource is an endpoint in the management or data plane, based on the Azure environment. The application uses the token and sends a REST API request to Key Vault. To learn more, review the [whole authentication flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
Access to vaults takes place through two interfaces or planes. These planes are
- The *management plane* is where you manage Key Vault itself and it is the interface used to create and delete vaults. You can also read key vault properties and manage access policies.
- The *data plane* allows you to work with the data stored in a key vault. You can add, delete, and modify keys, secrets, and certificates.
-Applications access the planes through endpoints. The access controls for the two planes work independently. To grant an application access to use keys in a key vault, you grant data plane access by using a Key Vault access policy or Azure RBAC (preview). To grant a user read access to Key Vault properties and tags, but not access to data (keys, secrets, or certificates), you grant management plane access with Azure RBAC.
+Applications access the planes through endpoints. The access controls for the two planes work independently. To grant an application access to use keys in a key vault, you grant data plane access by using a Key Vault access policy or Azure RBAC. To grant a user read access to Key Vault properties and tags, but not access to data (keys, secrets, or certificates), you grant management plane access with Azure RBAC.
The following table shows the endpoints for the management and data planes.

| Access&nbsp;plane | Access endpoints | Operations | Access&nbsp;control mechanism |
| --- | --- | --- | --- |
| Management plane | **Global:**<br> management.azure.com:443<br><br> **Azure China 21Vianet:**<br> management.chinacloudapi.cn:443<br><br> **Azure US Government:**<br> management.usgovcloudapi.net:443<br><br> **Azure Germany:**<br> management.microsoftazure.de:443 | Create, read, update, and delete key vaults<br><br>Set Key Vault access policies<br><br>Set Key Vault tags | Azure RBAC |
-| Data plane | **Global:**<br> &lt;vault-name&gt;.vault.azure.net:443<br><br> **Azure China 21Vianet:**<br> &lt;vault-name&gt;.vault.azure.cn:443<br><br> **Azure US Government:**<br> &lt;vault-name&gt;.vault.usgovcloudapi.net:443<br><br> **Azure Germany:**<br> &lt;vault-name&gt;.vault.microsoftazure.de:443 | Keys: encrypt, decrypt, wrapKey, unwrapKey, sign, verify, get, list, create, update, import, delete, recover, backup, restore, purge<br><br> Certificates: managecontacts, getissuers, listissuers, setissuers, deleteissuers, manageissuers, get, list, create, import, update, delete, recover, backup, restore, purge<br><br> Secrets: get, list, set, delete,recover, backup, restore, purge | Key Vault access policy or Azure RBAC (preview)|
+| Data plane | **Global:**<br> &lt;vault-name&gt;.vault.azure.net:443<br><br> **Azure China 21Vianet:**<br> &lt;vault-name&gt;.vault.azure.cn:443<br><br> **Azure US Government:**<br> &lt;vault-name&gt;.vault.usgovcloudapi.net:443<br><br> **Azure Germany:**<br> &lt;vault-name&gt;.vault.microsoftazure.de:443 | Keys: encrypt, decrypt, wrapKey, unwrapKey, sign, verify, get, list, create, update, import, delete, recover, backup, restore, purge<br><br> Certificates: managecontacts, getissuers, listissuers, setissuers, deleteissuers, manageissuers, get, list, create, import, update, delete, recover, backup, restore, purge<br><br> Secrets: get, list, set, delete,recover, backup, restore, purge | Key Vault access policy or Azure RBAC |
### Managing administrative access to Key Vault
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/overview.md
Previously updated : 09/15/2020 Last updated : 04/01/2021 #Customer intent: As an IT Pro, Decision maker or developer I am trying to learn what Managed HSM is and if it offers anything that could be used in my organization.
Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant,
- **Isolated access control**: Managed HSM "local RBAC" access control model allows designated HSM cluster administrators to have complete control over the HSMs that even management group, subscription, or resource group administrators cannot override.
- **FIPS 140-2 Level 3 validated HSMs**: Protect your data and meet compliance requirements with FIPS (Federal Information Processing Standards) 140-2 Level 3 validated HSMs. Managed HSMs use Marvell LiquidSecurity HSM adapters.
- **Monitor and audit**: fully integrated with Azure Monitor. Get complete logs of all activity via Azure Monitor. Use Azure Log Analytics for analytics and alerts.
+- **Data residency**: Managed HSM doesn't store or process customer data outside the region in which the customer deploys the HSM instance.
### Integrated with Azure and Microsoft PaaS/SaaS services
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
+
+ Title: 'Tutorial: Create a public load balancer with an IP-based backend - Azure portal'
+
+description: In this tutorial, learn how to create a public load balancer with an IP-based backend pool.
++++ Last updated : 3/31/2021+++
+# Tutorial: Create a public load balancer with an IP-based backend using the Azure portal
+
+In this tutorial, you'll learn how to create a public load balancer with an IP-based backend pool.
+
+A traditional deployment of Azure Load Balancer uses the network interface of the virtual machines. With an IP-based backend, the virtual machines are added to the backend by IP address.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network
+> * Create a NAT gateway for outbound connectivity
+> * Create an Azure Load Balancer
+> * Create an IP-based backend pool
+> * Create two virtual machines
+> * Test the load balancer
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a virtual network
+
+In this section, you'll create a virtual network for the load balancer, NAT gateway, and virtual machines.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+
+2. Select **Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **TutorPubLBIP-rg** |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(US) East US** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
++
+11. Select the **Review + create** tab or select the **Review + create** button.
+
+12. Select **Create**.
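If you prefer scripting, the virtual network above could equivalently be created with the Azure CLI (a hedged sketch, not part of the portal tutorial; it reuses the names and prefixes from the portal steps and assumes you've already run `az login`):

```azurecli
# Resource group and virtual network matching the portal values above.
az group create --name TutorPubLBIP-rg --location eastus

az network vnet create \
    --resource-group TutorPubLBIP-rg \
    --name myVNet \
    --address-prefixes 10.1.0.0/16 \
    --subnet-name myBackendSubnet \
    --subnet-prefixes 10.1.0.0/24
```
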
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created previously.
+
+1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
+
+2. Select **Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new** and enter **TutorPubLBIP-rg** in the text box. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myNATgateway** |
+ | Region | Select **(US) East US** |
+ | Availability Zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **10**. |
+
+4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In the **Outbound IP** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP-NAT**. </br> Select **OK**. |
+
+6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
+
+7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
+
+8. Check the box next to **myBackendSubnet**.
+
+9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+10. Select **Create**.
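As with the virtual network, the NAT gateway steps above can be sketched with the Azure CLI (a hedged equivalent using the same resource names; the public IP SKU is assumed to be Standard, which NAT gateway requires):

```azurecli
# Public IP and NAT gateway matching the portal values above.
az network public-ip create \
    --resource-group TutorPubLBIP-rg \
    --name myPublicIP-NAT \
    --sku Standard

az network nat gateway create \
    --resource-group TutorPubLBIP-rg \
    --name myNATgateway \
    --public-ip-addresses myPublicIP-NAT \
    --idle-timeout 10

# Associate the NAT gateway with the backend subnet.
az network vnet subnet update \
    --resource-group TutorPubLBIP-rg \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --nat-gateway myNATgateway
```
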
+## Create load balancer
+
+In this section, you'll create a Standard Azure Load Balancer.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **Create a resource**.
+3. In the search box, enter **Load balancer**. Select **Load balancer** in the search results.
+4. In the **Load balancer** page, select **Create**.
+5. On the **Create load balancer** page enter, or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorPubLBIP-rg**.|
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(US) East US**. |
+ | Type | Select **Public**. |
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
+ | **Public IP address** | |
+ | Public IP address | Select **Create new**. </br> If you have an existing Public IP you would like to use, select **Use existing**. |
+ | Public IP address name | Enter **myPublicIP-LB** in the text box.|
 | Availability zone | Select **Zone-redundant** to create a resilient load balancer. To create a zonal load balancer, select a specific zone from 1, 2, or 3. |
+ | Add a public IPv6 address | Select **No**. </br> For more information on IPv6 addresses and load balancer, see [What is IPv6 for Azure Virtual Network?](../virtual-network/ipv6-overview.md) |
+ | Routing preference | Leave the default of **Microsoft network**. </br> For more information on routing preference, see [What is routing preference (preview)?](../virtual-network/routing-preference-overview.md). |
+
+6. Accept the defaults for the remaining settings, and then select **Review + create**.
+
+7. In the **Review + create** tab, select **Create**.
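+
+For reference, the load balancer and its zone-redundant frontend IP can also be created with the Azure CLI. This is an illustrative equivalent of the portal steps above, not a replacement for them:
+
+```azurecli
+# Zone-redundant public IP for the frontend
+az network public-ip create \
+    --resource-group TutorPubLBIP-rg \
+    --name myPublicIP-LB \
+    --sku Standard \
+    --zone 1 2 3
+
+# Standard public load balancer using the frontend and backend pool names from this tutorial
+az network lb create \
+    --resource-group TutorPubLBIP-rg \
+    --name myLoadBalancer \
+    --sku Standard \
+    --public-ip-address myPublicIP-LB \
+    --frontend-ip-name LoadBalancerFrontEnd \
+    --backend-pool-name myBackendPool
+```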
+
+## Create load balancer resources
+
+In this section, you configure:
+
+* Load balancer settings for a backend address pool.
+* A health probe.
+* A load balancer rule.
+
+### Create a backend pool
+
+A backend address pool contains the IP addresses of the virtual machine network interfaces (NICs) connected to the load balancer.
+
+Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+
+2. Under **Settings**, select **Backend pools**, then select **+ Add**.
+
+3. On the **Add a backend pool** page, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myBackendPool**. |
+ | Virtual network | Select **myVNet**. |
+ | Backend Pool Configuration | Select **IP Address**. |
+ | IP Version | Select **IPv4**. |
+
+4. Select **Add**.
+
+### Create a health probe
+
+The load balancer monitors the status of your app with a health probe.
+
+The health probe adds or removes VMs from the load balancer rotation based on their response to health checks.
+
+Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+
+2. Under **Settings**, select **Health probes**, then select **+ Add**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHealthProbe**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**.|
 | Interval | Enter **15** for the number of seconds between probe attempts. |
+ | Unhealthy threshold | Select **2**. |
+
+3. Leave the rest of the defaults and select **Add**.
+
+### Create a load balancer rule
+
+A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination ports are defined in the rule.
+
+In this section, you'll create a load balancer rule:
+
+* Named **myHTTPRule**.
+* In the frontend named **LoadBalancerFrontEnd**.
+* Listening on **Port 80**.
+* Directing load-balanced traffic to the backend pool named **myBackendPool** on **Port 80**.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+
+2. Under **Settings**, select **Load-balancing rules**, then select **+ Add**.
+
+3. Enter or select the following information for the load balancer rule:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule**. |
+ | IP Version | Select **IPv4** |
+ | Frontend IP address | Select **LoadBalancerFrontEnd** |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**.|
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool**.|
+ | Health probe | Select **myHealthProbe**. |
+ | Session persistence | Leave the default of **None**. |
+ | Idle timeout (minutes) | Enter **15** minutes. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Select **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+4. Leave the rest of the defaults and then select **Add**.
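+
+The probe and rule settings above map to the following Azure CLI commands (a sketch for comparison; parameter names reflect current `az network lb` syntax and may vary by CLI version):
+
+```azurecli
+# TCP health probe on port 80, 15 second interval, 2 failures to mark unhealthy
+az network lb probe create \
+    --resource-group TutorPubLBIP-rg \
+    --lb-name myLoadBalancer \
+    --name myHealthProbe \
+    --protocol tcp \
+    --port 80 \
+    --interval 15 \
+    --threshold 2
+
+# Load-balancing rule from frontend port 80 to backend port 80
+az network lb rule create \
+    --resource-group TutorPubLBIP-rg \
+    --lb-name myLoadBalancer \
+    --name myHTTPRule \
+    --protocol tcp \
+    --frontend-port 80 \
+    --backend-port 80 \
+    --frontend-ip-name LoadBalancerFrontEnd \
+    --backend-pool-name myBackendPool \
+    --probe-name myHealthProbe \
+    --idle-timeout 15 \
+    --enable-tcp-reset true \
+    --disable-outbound-snat true
+```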
+
+## Create virtual machines
+
+In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
+
+These VMs are added to the backend pool of the load balancer that was created earlier.
+
+1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
+
+2. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **TutorPubLBIP-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **(US) East US** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **1** |
+ | Image | Select **Windows Server 2019 Datacenter** |
+ | Azure Spot instance | Leave the default |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+4. In the Networking tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Within **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> Under **Name**, enter **myHTTPRule** </br> Select **Add** </br> Select **OK** |
 | **Load balancing** | |
 | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
 | **Load balancing settings** | |
+ | Load balancing options | Select **Azure load balancer** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+
+5. Select **Review + create**.
+
+6. Review the settings, and then select **Create**.
+
+7. Follow steps 1 through 6 to create a second VM with the following values, keeping all other settings the same as **myVM1**:
+
+ | Setting | VM 2 |
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability zone | **2** |
+ | Network security group | Select the existing **myNSG**|
+
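+If you script VM creation instead, a single `az vm create` call covers most of the **Basics** and **Networking** choices above (the username below is a placeholder, and the CLI prompts for a password). The NSG HTTP rule and backend pool membership from the portal flow still need their own steps, so treat this as a sketch:
+
+```azurecli
+az vm create \
+    --resource-group TutorPubLBIP-rg \
+    --name myVM1 \
+    --image Win2019Datacenter \
+    --zone 1 \
+    --vnet-name myVNet \
+    --subnet myBackendSubnet \
+    --public-ip-address "" \
+    --nsg myNSG \
+    --admin-username azureuser
+```
+
+Repeat with `--name myVM2 --zone 2` for the second VM.
+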
+## Install IIS
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM1** that is located in the **TutorPubLBIP-rg** resource group.
+
+2. On the **Overview** page, select **Connect**, then **Bastion**.
+
+3. Select the **Use Bastion** button.
+
+4. Enter the username and password entered during VM creation.
+
+5. Select **Connect**.
+
+6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
+
+7. In the PowerShell Window, run the following commands to:
+
+ * Install the IIS server
+ * Remove the default iisstart.htm file
+ * Add a new iisstart.htm file that displays the name of the VM:
+
+ ```powershell
+ # install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+ ```
+
+8. Close the Bastion session with **myVM1**.
+
+9. Repeat steps 1 to 7 to install IIS and the updated iisstart.htm file on **myVM2**.
+
+## Test the load balancer
+
+1. Find the public IP address for the load balancer on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP-LB**.
+
+2. Copy the public IP address, and then paste it into the address bar of your browser. The default page of the IIS Web server is displayed in the browser.
+
+ ![IIS Web server](./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png)
+
+To see the load balancer distribute traffic to myVM2, force-refresh your web browser from the client machine.
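+
+The refresh test can also be sketched from a Bash shell with the Azure CLI and `curl`; this is illustrative only:
+
+```bash
+# Look up the load balancer's public IP
+ip=$(az network public-ip show \
+    --resource-group TutorPubLBIP-rg \
+    --name myPublicIP-LB \
+    --query ipAddress \
+    --output tsv)
+
+# Each response should print "Hello World from <VM name>";
+# repeated requests should eventually show both myVM1 and myVM2
+for i in 1 2 3 4 5; do curl -s "http://$ip/"; echo; done
+```
+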
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the virtual network, virtual machine, and NAT gateway with the following steps:
+
+1. From the left-hand menu, select **Resource groups**.
+
+2. Select the **TutorPubLBIP-rg** resource group.
+
+3. Select **Delete resource group**.
+
+4. Enter **TutorPubLBIP-rg** and select **Delete**.
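+
+Alternatively, you can remove the whole resource group with a single Azure CLI command (an equivalent of the portal steps; it deletes everything in **TutorPubLBIP-rg**, so double-check the name first):
+
+```azurecli
+az group delete --name TutorPubLBIP-rg --yes --no-wait
+```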
+
+## Next steps
+
+In this tutorial, you:
+
+* Created a virtual network
+* Created a NAT gateway
+* Created a load balancer with an IP-based backend pool
+* Tested the load balancer
+
+Advance to the next article to learn how to create a cross-region load balancer:
+> [!div class="nextstepaction"]
+> [Create a cross-region Azure Load Balancer using the Azure portal](tutorial-cross-region-portal.md)
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ + Render CSV/TSV. Users will be able to render a TSV/CSV file in a grid format for easier data analysis.
+ + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in AzureML.
+ + Compute Instance Metrics. Users will be able to view compute metrics like CPU usage and memory via terminal.
+ + File Details. Users can now see file details including the last modified time, and file size by clicking the 3 dots beside a file.
+
+ **Bug fixes and improvements**
+ + Improved page load times.
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
To use compute targets managed by Azure Machine Learning, see:
## What's a compute target?
-With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
+With Azure Machine Learning, you can train your model on various resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
## <a id="local"></a>Local computer
When you use your local computer for **inference**, you must have Docker install
## <a id="vm"></a>Remote virtual machines
-Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM must be an Azure Data Science Virtual Machine (DSVM). This VM is a pre-configured data science and AI development environment in Azure. The VM offers a curated choice of tools and frameworks for full-lifecycle machine learning development. For more information on how to use the DSVM with Azure Machine Learning, see [Configure a development environment](./how-to-configure-environment.md#dsvm).
+Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM must be an Azure Data Science Virtual Machine (DSVM). The VM offers a curated choice of tools and frameworks for full-lifecycle machine learning development. For more information on how to use the DSVM with Azure Machine Learning, see [Configure a development environment](./how-to-configure-environment.md#dsvm).
-1. **Create**: Create a DSVM before using it to train your model. To create this resource, see [Provision the Data Science Virtual Machine for Linux (Ubuntu)](./data-science-virtual-machine/dsvm-ubuntu-intro.md).
+> [!TIP]
+> Instead of a remote VM, we recommend using the [Azure Machine Learning compute instance](concept-compute-instance.md). It is a fully managed, cloud-based compute solution that is specific to Azure Machine Learning. For more information, see [create and manage Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
+
+1. **Create**: Azure Machine Learning cannot create a remote VM for you. Instead, you must create the VM and then attach it to your Azure Machine Learning workspace. For information on creating a DSVM, see [Provision the Data Science Virtual Machine for Linux (Ubuntu)](./data-science-virtual-machine/dsvm-ubuntu-intro.md).
> [!WARNING] > Azure Machine Learning only supports virtual machines that run **Ubuntu**. When you create a VM or choose an existing VM, you must select a VM that uses Ubuntu.
Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM
src = ScriptRunConfig(source_directory=".", script="train.py", compute_target=compute, environment=myenv) ```
+> [!TIP]
+> If you want to __remove__ (detach) a VM from your workspace, use the [RemoteCompute.detach()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.compute.remotecompute#detach--) method.
+>
+> Azure Machine Learning does not delete the VM for you. You must manually delete the VM using the Azure portal, CLI, or the SDK for Azure VM.
+ ## <a id="hdinsight"></a>Azure HDInsight Azure HDInsight is a popular platform for big-data analytics. The platform provides Apache Spark, which can be used to train your model.
-1. **Create**: Create the HDInsight cluster before you use it to train your model. To create a Spark on HDInsight cluster, see [Create a Spark Cluster in HDInsight](../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
+1. **Create**: Azure Machine Learning cannot create an HDInsight cluster for you. Instead, you must create the cluster and then attach it to your Azure Machine Learning workspace. For more information, see [Create a Spark Cluster in HDInsight](../hdinsight/spark/apache-spark-jupyter-spark-sql.md).
> [!WARNING] > Azure Machine Learning requires the HDInsight cluster to have a __public IP address__.
Azure HDInsight is a popular platform for big-data analytics. The platform provi
[!code-python[](~/aml-sdk-samples/ignore/doc-qa/how-to-set-up-training-targets/hdi.py?name=run_hdi)] -
-Now that you've attached the compute and configured your run, the next step is to [submit the training run](how-to-set-up-training-targets.md).
+> [!TIP]
+> If you want to __remove__ (detach) an HDInsight cluster from the workspace, use the [HDInsightCompute.detach()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.compute.hdinsight.hdinsightcompute#detach--) method.
+>
+> Azure Machine Learning does not delete the HDInsight cluster for you. You must manually delete it using the Azure portal, CLI, or the SDK for Azure HDInsight.
## <a id="azbatch"></a>Azure Batch
print("Using Batch compute:{}".format(batch_compute.cluster_resource_id))
Azure Databricks is an Apache Spark-based environment in the Azure cloud. It can be used as a compute target with an Azure Machine Learning pipeline.
-Create an Azure Databricks workspace before using it. To create a workspace resource, see the [Run a Spark job on Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal) document.
+> [!IMPORTANT]
+> Azure Machine Learning cannot create an Azure Databricks compute target. Instead, you must create an Azure Databricks workspace, and then attach it to your Azure Machine Learning workspace. To create a workspace resource, see the [Run a Spark job on Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal) document.
To attach Azure Databricks as a compute target, provide the following information:
Azure Container Instances (ACI) are created dynamically when you deploy a model.
## Azure Kubernetes Service
-Azure Kubernetes Service (AKS) allows for a variety of configuration options when used with Azure Machine Learning. For more information, see [How to create and attach Azure Kubernetes Service](how-to-create-attach-kubernetes.md).
-
+Azure Kubernetes Service (AKS) allows for various configuration options when used with Azure Machine Learning. For more information, see [How to create and attach Azure Kubernetes Service](how-to-create-attach-kubernetes.md).
## Notebook examples
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-cluster.md
Compute clusters can run jobs securely in a [virtual network environment](how-to
## Limitations
-* **Do not create multiple, simultaneous attachments to the same compute** from your workspace. For example, attaching one compute cluster to a workspace using two different names. Each new attachment will break the previous existing attachment(s).
-
- If you want to re-attach a compute target, for example to change cluster configuration settings, you must first remove the existing attachment.
- * Some of the scenarios listed in this document are marked as __preview__. Preview functionality is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+* We currently support only creation (and not updating) of clusters through [ARM templates](https://docs.microsoft.com/azure/templates/microsoft.machinelearningservices/workspaces/computes?tabs=json). To update compute, we recommend using the SDK, CLI, or UI for now.
+ * Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md). * Azure allows you to place _locks_ on resources, so that they cannot be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Previously updated : 03/12/2021 Last updated : 04/01/2021 # How to use your workspace with a custom DNS server
-When using an Azure Machine Learning workspace with a private endpoint, there are [several ways to handle DNS name resolution](../private-link/private-endpoint-dns.md). By default, Azure automatically handles name resolution for your workspace and private endpoint. If you instead _use your own custom DNS server__, you must manually create DNS entries or use conditional forwarders for the workspace.
+When using an Azure Machine Learning workspace with a private endpoint, there are [several ways to handle DNS name resolution](../private-link/private-endpoint-dns.md). By default, Azure automatically handles name resolution for your workspace and private endpoint. If you instead __use your own custom DNS server__, you must manually create DNS entries or use conditional forwarders for the workspace.
> [!IMPORTANT] > This article only covers how to find the fully qualified domain name (FQDN) and IP addresses for these entries; it does NOT provide information on configuring the DNS records for these items. Consult the documentation for your DNS software for information on how to add records.
The following list contains the fully qualified domain names (FQDN) used by your
* `<instance-name>.<region>.instances.azureml.ms` > [!NOTE]
- > Compute instances can be accessed only from within the virtual network.
+ > * Compute instances can be accessed only from within the virtual network.
+ > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
## Azure China 21Vianet regions
The information returned from all methods is the same; a list of the FQDN and pr
> * `<workspace-GUID>.workspace.<region>.experiments.azureml.net` > * `<workspace-GUID>.workspace.<region>.modelmanagement.azureml.net` > * `<workspace-GUID>.workspace.<region>.aether.ms`
-> * If you have a compute instance, use `<instance-name>.<region>.instances.azureml.ms`, where `<instance-name>` is the name of your compute instance. Please use private IP address of workspace private endpoint. Please note compute instance can be accessed only from within the virtual network.
+> * If you have a compute instance, use `<instance-name>.<region>.instances.azureml.ms`, where `<instance-name>` is the name of your compute instance. Use the private IP address of workspace private endpoint. The compute instance can be accessed only from within the virtual network.
> > For all of these IP address, use the same address as the `*.api.azureml.ms` entries returned from the previous steps.
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
You can call the `explain()` method in MimicWrapper with the transformed test sa
engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) ```
+For models trained with automated ML, you can get the best model using the `get_output()` method and compute explanations locally. You can visualize the explanation results with `ExplanationDashboard` from the `interpret-community` package.
+
+```python
+best_run, fitted_model = remote_run.get_output()
+
+from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
+automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
+ X_test=X_test, y=y_train,
+ task='regression')
+
+from interpret.ext.glassbox import LGBMExplainableModel
+from azureml.interpret.mimic_wrapper import MimicWrapper
+
+explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
+ init_dataset=automl_explainer_setup_obj.X_transform, run=best_run,
+ features=automl_explainer_setup_obj.engineered_feature_names,
+ feature_maps=[automl_explainer_setup_obj.feature_map],
+ classes=automl_explainer_setup_obj.classes)
+
+# The dashboard below requires: pip install interpret-community[visualization]
+
+engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
+print(engineered_explanations.get_feature_importance_dict())
+from interpret_community.widget import ExplanationDashboard
+ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, datasetX=automl_explainer_setup_obj.X_test_transform)
+
+raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
+ raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
+ eval_dataset=automl_explainer_setup_obj.X_test_transform)
+print(raw_explanations.get_feature_importance_dict())
+from interpret_community.widget import ExplanationDashboard
+ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, datasetX=automl_explainer_setup_obj.X_test_raw)
+```
### Use Mimic Explainer for computing and visualizing raw feature importance
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
The transact publishing option is only available for use with the following mark
- **Azure Virtual Machine** ΓÇô Select from free, bring-your-own-license, or usage-based pricing models and present as plans defined at the offer level. On the customer's Azure bill, Microsoft presents the publisher software license fees separately from the underlying Azure infrastructure fees. Azure infrastructure fees are driven by use of the publisher software. -- **Azure application: solution template or managed app** ΓÇô Must provision one or more virtual machines and pulls through the sum of the virtual machine pricing. For managed apps on a single plan, a flat-rate monthly subscription can be selected as the pricing model instead the virtual machine pricing. In some cases, Azure infrastructure usage fees are passed to the customer separately from software license fees, but on the same billing statement. However, if you configure a managed app offering for ISV infrastructure charges, the Azure resources are billed to the publisher, and the customer receives a flat fee that includes the cost of infrastructure, software licenses, and management services.
+- **Azure application: solution template or managed app** ΓÇô In some cases, Azure infrastructure usage fees are passed to the customer separately from software license fees, but on the same billing statement. However, if you configure a managed app offering for ISV infrastructure charges, the Azure resources are billed to the publisher, and the customer receives a flat fee that includes the cost of infrastructure, software licenses, and management services.
- **SaaS application** - Must be a multitenant solution, use [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for authentication, and integrate with the [SaaS Fulfillment APIs](partner-center-portal/pc-saas-fulfillment-api-v2.md). Azure infrastructure usage is managed and billed directly to you (the partner), so you must account for Azure infrastructure usage fees and software licensing fees as a single cost item. For detailed guidance, see [Create a new SaaS offer in the commercial marketplace](./create-new-saas-offer.md). ## Next steps - Review the eligibility requirements in the publishing options by offer type section to finalize the selection and configuration of your offer.-- Review the publishing patterns by online store for examples on how your solution maps to an offer type and configuration.
+- Review the publishing patterns by online store for examples on how your solution maps to an offer type and configuration.
marketplace Azure Iot Edge Module Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/azure-iot-edge-module-creation.md
Provide supplemental online documents about your offer. You can add up to 25 lin
- **Title** - Customers will see the title on your offer's details page. - **Link (URL)** - Enter a link for customers to view your online document. The link must start with `http://` or `https://`.
-Make sure to add at least one link to your documentation and one link to the compatible IoT Edge devices from theΓÇ»[Azure IoT device catalog](https://catalog.azureiotsolutions.com/).
+Make sure to add at least one link to your documentation and one link to the compatible IoT Edge devices from theΓÇ»[Azure IoT device catalog](https://devicecatalog.azure.com/).
### Contact information
marketplace Create Iot Edge Module Asset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/create-iot-edge-module-asset.md
Your module must support all Tier 1 platforms supported by IoT Edge (as recorded
- Provide a latest tag and a version tag (for example, 1.0.1) that are manifest tags built with the [GitHub Manifest-tool](https://github.com/estesp/manifest-tool). -- Use the offer listing tab in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace) to add a link under the **Useful links** section to the [Azure IoT Edge Certified device catalog](https://catalog.azureiotsolutions.com/alldevices?filters={%2218%22:[%221%22]}/).
+- Use the offer listing tab in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace) to add a link under the **Useful links** section to the [Azure IoT Edge Certified device catalog](https://devicecatalog.azure.com/devices?certificationBadgeTypes=IoTEdgeCompatible).
#### A subset of Tier 1 platforms supported by IoT Edge Your module must support a subset (at least one) of Tier 1 platforms supported by IoT Edge (as recorded in [Azure IoT Edge support](../../iot-edge/support.md)). A module using this platform option must: - Provide a latest tag and a version tag (for example, 1.0.1) that are manifest tags built with the GitHub [manifest-tool](https://github.com/estesp/manifest-tool) if more than one platform is supported. Manifest tags are optional only when one platform is supported.-- Use the offer listing tab in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace) to add a link under the **Useful links** section to at least one IoT Edge device from the [Azure IoT Edge Certified device catalog](https://catalog.azureiotsolutions.com/).
+- Use the offer listing tab in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace) to add a link under the **Useful links** section to at least one IoT Edge device from the [Azure IoT Edge Certified device catalog](https://devicecatalog.azure.com/).
:::image type="content" source="media/iot-edge-module-technical-assets-offer-listing.png" alt-text="This is an image of the Offer Listing section within Partner Center":::
Your module must support a subset (at least one) of Tier 1 platforms supported b
IoT Edge module dimensions (such as CPU, RAM, storage, and GPU) on targeted IoT Edge devices must meet the following requirements: -- The module must work with at least one IoT Edge device from the [Azure IoT Edge Certified device catalog](https://catalog.azureiotsolutions.com/).
+- The module must work with at least one IoT Edge device from the [Azure IoT Edge Certified device catalog](https://devicecatalog.azure.com/).
- The minimum hardware requirements must be documented as the last paragraph in the description of the offer (under the offer listing tab in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace)). Optionally, you can also list the recommended hardware requirements if they differ significantly. For example, add the following section at the end of your offer description:
media-services Access Api Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/access-api-howto.md
na ms.devlang: na Previously updated : 03/17/2021 Last updated : 03/31/2021

# Get credentials to access Media Services API
media-services Concept Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-media-reserved-units.md
You are charged based on the number of minutes the Media Reserved Units are provisioned.
## See also
-* [Quotas and limits](limits-quotas-constraints.md)
+* [Quotas and limits](limits-quotas-constraints-reference.md)
media-services Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concepts-overview.md
The fundamental concepts described in these topics should be reviewed before starting development.
|Analyzing content (Video Indexer)|Media Services v3 lets you extract insights from your video and audio files using Media Services v3 presets. To analyze your content using Media Services v3 presets, you need to create **Transforms** and **Jobs**.<br/><br/>If you want more detailed insights, use [Video Indexer](../video-indexer/index.yml) directly.|[Analyzing video and audio files](analyze-video-audio-files-concept.md)|
|Packaging and delivery|Once your content is encoded, you can take advantage of **Dynamic Packaging**. In Media Services, a **Streaming Endpoint** is the dynamic packaging service used to deliver media content to client players. To make videos in the output asset available to clients for playback, you have to create a **Streaming Locator** and then build streaming URLs. <br/><br/>When creating the **Streaming Locator**, in addition to the asset's name, you need to specify **Streaming Policy**. **Streaming Policies** enable you to define streaming protocols and encryption options (if any) for your **Streaming Locators**. Dynamic Packaging is used whether you stream your content live or on-demand. <br/><br/>You can use Media Services **Dynamic Manifests** to stream only a specific rendition or subclips of your video.|[Dynamic packaging](encode-dynamic-packaging-concept.md)<br/><br/>[Streaming Endpoints](streaming-endpoint-concept.md)<br/><br/>[Streaming Locators](streaming-locators-concept.md)<br/><br/>[Streaming Policies](streaming-policy-concept.md)<br/><br/>[Dynamic manifests](filters-dynamic-manifest-concept.md)<br/><br/>[Filters](filters-concept.md)|
|Content protection|With Media Services, you can deliver your live and on-demand content encrypted dynamically with Advanced Encryption Standard (AES-128) or/and any of the three major DRM systems: Microsoft PlayReady, Google Widevine, and Apple FairPlay. Media Services also provides a service for delivering AES keys and DRM (PlayReady, Widevine, and FairPlay) licenses to authorized clients.<br/><br/>If specifying encryption options on your stream, create the **Content Key Policy** and associate it with your **Streaming Locator**. The **Content Key Policy** enables you to configure how the content key is delivered to end clients.<br/><br/>Try to reuse policies whenever the same options are needed.| [Content Key Policies](drm-content-key-policy-concept.md)<br/><br/>[Content protection](drm-content-protection-concept.md)|
-|Live streaming|Media Services enables you to deliver live events to your customers on the Azure cloud. **Live Events** are responsible for ingesting and processing the live video feeds. When you create a **Live Event**, an input endpoint is created that you can use to send a live signal from a remote encoder. Once you have the stream flowing into the **Live Event**, you can begin the streaming event by creating an **Asset**, **Live Output**, and **Streaming Locator**. **Live Output** will archive the stream into the **Asset** and make it available to viewers through the **Streaming Endpoint**. A live event can be set to either a *pass-through* (an on-premises live encoder sends a multiple bitrate stream) or *live encoding* (an on-premises live encoder sends a single bitrate stream). |[Live streaming overview](live-streaming-overview.md)<br/><br/>[Live Events and Live Outputs](live-events-outputs-concept.md)|
+|Live streaming|Media Services enables you to deliver live events to your customers on the Azure cloud. **Live Events** are responsible for ingesting and processing the live video feeds. When you create a **Live Event**, an input endpoint is created that you can use to send a live signal from a remote encoder. Once you have the stream flowing into the **Live Event**, you can begin the streaming event by creating an **Asset**, **Live Output**, and **Streaming Locator**. **Live Output** will archive the stream into the **Asset** and make it available to viewers through the **Streaming Endpoint**. A live event can be set to either a *pass-through* (an on-premises live encoder sends a multiple bitrate stream) or *live encoding* (an on-premises live encoder sends a single bitrate stream). |[Live streaming overview](stream-live-streaming-concept.md)<br/><br/>[Live Events and Live Outputs](live-event-outputs-concept.md)|
|Monitoring with Event Grid|To see the progress of the job, use **Event Grid**. Media Services also emits the live event types. With Event Grid, your apps can listen for and react to events from virtually all Azure services, as well as custom sources. |[Handling Event Grid events](monitoring/reacting-to-media-services-events.md)<br/><br/>[Schemas](monitoring/media-services-event-schemas.md)|
|Monitoring with Azure Monitor|Monitor metrics and diagnostic logs that help you understand how your apps are performing with Azure Monitor.|[Metrics and diagnostic logs](monitoring/monitor-media-services-data-reference.md)<br/><br/>[Diagnostic logs schemas](monitoring/monitor-media-services-data-reference.md)|
|Player clients|You can use Azure Media Player to play back media content streamed by Media Services on a wide variety of browsers and devices. Azure Media Player uses industry standards, such as HTML5, Media Source Extensions (MSE), and Encrypted Media Extensions (EME) to provide an enriched adaptive streaming experience. |[Azure Media Player overview](use-azure-media-player.md)|
media-services Drm Content Key Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/drm-content-key-policy-concept.md
Usually, you associate your content key policy with your [Streaming Locator](streaming-locators-concept.md).
> [!IMPORTANT]
> Please review the following recommendations.
-* You should design a limited set of policies for your Media Service account and reuse them for your streaming locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints.md).
+* You should design a limited set of policies for your Media Service account and reuse them for your streaming locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints-reference.md).
* Content key policies are updatable. It can take up to 15 minutes for the key delivery caches to update and pick up the updated policy. By updating the policy, you are overwriting your existing CDN cache, which could cause playback issues for customers that are using cached content.
media-services Encode Dynamic Packaging Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-dynamic-packaging-concept.md
A live event can be set to either a *pass-through* (an on-premises live encoder sends a multiple bitrate stream) or *live encoding* (an on-premises live encoder sends a single bitrate stream).
Here's a common workflow for live streaming with *dynamic packaging*:
-1. Create a [live event](live-events-outputs-concept.md).
+1. Create a [live event](live-event-outputs-concept.md).
1. Get the ingest URL and configure your on-premises encoder to use the URL to send the contribution feed.
1. Get the preview URL and use it to verify that the input from the encoder is being received.
1. Create a new asset.
This diagram shows the workflow for live streaming with *dynamic packaging*:
![Diagram of a workflow for pass-through encoding with dynamic packaging](./media/live-streaming/pass-through.svg)
-For information about live streaming in Media Services v3, see [Live streaming overview](live-streaming-overview.md).
+For information about live streaming in Media Services v3, see [Live streaming overview](stream-live-streaming-concept.md).
## Video codecs supported by Dynamic Packaging
media-services Encode On Premises Encoder Partner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-on-premises-encoder-partner.md
As an Azure Media Services on-premises encoder partner, Media Services promotes
## Pass-through Live Event verification

1. In your Media Services account, make sure that the **Streaming Endpoint** is running.
-2. Create and start the **pass-through** Live Event. <br/> For more information, see [Live Event states and billing](live-event-states-billing.md).
+2. Create and start the **pass-through** Live Event. <br/> For more information, see [Live Event states and billing](live-event-states-billing-concept.md).
3. Get the ingest URLs and configure your on-premises encoder to use the URL to send a multi-bitrate live stream to Media Services.
4. Get the preview URL and use it to verify that the input from the encoder is actually being received.
5. Create a new **Asset** object.
As an Azure Media Services on-premises encoder partner, Media Services promotes
## Live encoding Live Event verification

1. In your Media Services account, make sure that the **Streaming Endpoint** is running.
-2. Create and start the **live encoding** Live Event. <br/> For more information, see [Live Event states and billing](live-event-states-billing.md).
+2. Create and start the **live encoding** Live Event. <br/> For more information, see [Live Event states and billing](live-event-states-billing-concept.md).
3. Get the ingest URLs and configure your encoder to push a single-bitrate live stream to Media Services.
4. Get the preview URL and use it to verify that the input from the encoder is actually being received.
5. Create a new **Asset** object.
Finally, email your recorded settings and live archive parameters to Azure Media
## Next steps
-[Live streaming with Media Services v3](live-streaming-overview.md)
+[Live streaming with Media Services v3](stream-live-streaming-concept.md)
media-services Filter Order Page Entitites How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/filter-order-page-entitites-how-to.md
The following table shows how you can apply the filtering and ordering options t
* [List Streaming Policies](/rest/api/media/streamingpolicies/list)
* [List Streaming Locators](/rest/api/media/streaminglocators/list)
* [Stream a file](stream-files-dotnet-quickstart.md)
-* [Quotas and limits](limits-quotas-constraints.md)
+* [Quotas and limits](limits-quotas-constraints-reference.md)
media-services Job Create Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-create-cli-how-to.md
+
+ Title: Azure CLI Script Example - Create and submit a job
+description: The Azure CLI script in this topic shows how to submit a Job to a simple encoding Transform using an HTTPS URL.
+
+documentationcenter: ''
++
+editor:
+
+ms.assetid:
+
+ms.devlang: azurecli
+
+ multiple
+ Last updated : 08/31/2020
+# CLI example: Create and submit a job
++
+In Media Services v3, when you submit Jobs to process your videos, you have to tell Media Services where to find the input video. One of the options is to specify an HTTPS URL as a job input (as shown in this article).
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md).
+
+## Example script
+
+When you run `az ams job start`, you can set a label on the job's output. The label can later be used to identify what this output asset is for.
+
+- If you assign a value to the label, set `--output-assets` to "assetname=label".
+- If you do not assign a value to the label, set `--output-assets` to "assetname=".
+ Notice that you add "=" to the `output-assets` value.
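The `--output-assets` convention above can be made concrete with a tiny helper (a hypothetical function, shown only to illustrate the `assetname=label` format; it is not part of any Azure tooling):

```python
def format_output_assets(asset_name, label=None):
    """Build the value passed to `--output-assets`:
    "assetname=label" when a label is assigned, "assetname=" otherwise."""
    return f"{asset_name}={label or ''}"

print(format_output_assets("testOutputAssetName"))            # testOutputAssetName=
print(format_output_assets("testOutputAssetName", "thumbs"))  # testOutputAssetName=thumbs
```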
+
+```azurecli
+az ams job start \
+ --name testJob001 \
+ --transform-name testEncodingTransform \
+ --base-uri 'https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/' \
+ --files 'Ignite-short.mp4' \
+ --output-assets testOutputAssetName= \
+ -a amsaccount \
+ -g amsResourceGroup
+```
+
+You get a response similar to this:
+
+```json
+{
+ "correlationData": {},
+ "created": "2019-02-15T05:08:26.266104+00:00",
+ "description": null,
+ "id": "/subscriptions/<id>/resourceGroups/amsResourceGroup/providers/Microsoft.Media/mediaservices/amsaccount/transforms/testEncodingTransform/jobs/testJob001",
+ "input": {
+ "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
+ "files": [
+ "Ignite-short.mp4"
+ ],
+ "label": null,
+ "odatatype": "#Microsoft.Media.JobInputHttp"
+ },
+ "lastModified": "2019-02-15T05:08:26.266104+00:00",
+ "name": "testJob001",
+ "outputs": [
+ {
+ "assetName": "testOutputAssetName",
+ "error": null,
+ "label": "",
+ "odatatype": "#Microsoft.Media.JobOutputAsset",
+ "progress": 0,
+ "state": "Queued"
+ }
+ ],
+ "priority": "Normal",
+ "resourceGroup": "amsResourceGroup",
+ "state": "Queued",
+ "type": "Microsoft.Media/mediaservices/transforms/jobs"
+}
+```
+
+## Next steps
+
+[az ams job (CLI)](/cli/azure/ams/job)
media-services Job Download Results How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-download-results-how-to.md
+
+ Title: Download the results of a job - Azure Media Services
+description: This article demonstrates how to download the results of a job.
+
+documentationcenter: ''
++
+editor: ''
+ Last updated : 08/31/2020
+# Download the results of a job
++
+In Azure Media Services, when processing your videos (for example, encoding or analyzing), you need to create an output [asset](assets-concept.md) to store the result of your [job](transforms-jobs-concept.md). You can then download these results to a local folder using the Media Services and Storage APIs.
+
+This article demonstrates how to download the results using Java and .NET SDKs.
+
+## Java
+
+```java
+/**
+ * Use Media Service and Storage APIs to download the output files to a local folder
+ * @param manager The entry point of Azure Media resource management
+ * @param resourceGroup The name of the resource group within the Azure subscription
+ * @param accountName The Media Services account name
+ * @param assetName The asset name
+ * @param outputFolder The output folder for downloaded files.
+ * @throws StorageException
+ * @throws URISyntaxException
+ * @throws IOException
+ */
+private static void downloadResults(MediaManager manager, String resourceGroup, String accountName,
+ String assetName, File outputFolder) throws StorageException, URISyntaxException, IOException {
+ ListContainerSasInput parameters = new ListContainerSasInput()
+ .withPermissions(AssetContainerPermission.READ)
+ .withExpiryTime(DateTime.now().plusHours(1));
+ AssetContainerSas assetContainerSas = manager.assets()
+ .listContainerSasAsync(resourceGroup, accountName, assetName, parameters).toBlocking().first();
+
+ String strSas = assetContainerSas.assetContainerSasUrls().get(0);
+ CloudBlobContainer container = new CloudBlobContainer(new URI(strSas));
+
+ File directory = new File(outputFolder, assetName);
+ directory.mkdir();
+
+ ArrayList<ListBlobItem> blobs = container.listBlobsSegmented(null, true, EnumSet.noneOf(BlobListingDetails.class), 200, null, null, null).getResults();
+
+ for (ListBlobItem blobItem: blobs) {
+ if (blobItem instanceof CloudBlockBlob) {
+ CloudBlockBlob blob = (CloudBlockBlob)blobItem;
+ File downloadTo = new File(directory, blob.getName());
+
+ blob.downloadToFile(downloadTo.getPath());
+ }
+ }
+
+ System.out.println("Download complete.");
+}
+```
+
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-java/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/src/main/java/sample/EncodingWithMESPredefinedPreset.java)
+
+## .NET
+
+```csharp
+/// <summary>
+/// Use Media Service and Storage APIs to download the output files to a local folder
+/// </summary>
+/// <param name="client">The Media Services client.</param>
+/// <param name="resourceGroupName">The name of the resource group within the Azure subscription.</param>
+/// <param name="accountName">The Media Services account name.</param>
+/// <param name="assetName">The asset name.</param>
+/// <param name="resultsFolder">The output folder name for downloaded files.</param>
+/// <returns>A task.</returns>
+private async static Task DownloadResults(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string assetName, string resultsFolder)
+{
+ AssetContainerSas assetContainerSas = client.Assets.ListContainerSas(
+ resourceGroupName,
+ accountName,
+ assetName,
+ permissions: AssetContainerPermission.Read,
+ expiryTime: DateTime.UtcNow.AddHours(1).ToUniversalTime()
+ );
+
+ Uri containerSasUrl = new Uri(assetContainerSas.AssetContainerSasUrls.FirstOrDefault());
+ CloudBlobContainer container = new CloudBlobContainer(containerSasUrl);
+
+ string directory = Path.Combine(resultsFolder, assetName);
+ Directory.CreateDirectory(directory);
+
+ Console.WriteLine("Downloading results to {0}.", directory);
+
+ var blobs = container.ListBlobsSegmentedAsync(null,true, BlobListingDetails.None,200,null,null,null).Result;
+
+ foreach (var blobItem in blobs.Results)
+ {
+ if (blobItem is CloudBlockBlob)
+ {
+ CloudBlockBlob blob = blobItem as CloudBlockBlob;
+ string filename = Path.Combine(directory, blob.Name);
+
+ await blob.DownloadToFileAsync(filename, FileMode.Create);
+ }
+ }
+
+ Console.WriteLine("Download complete.");
+}
+```
+
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/Program.cs)
+
+## Next steps
+
+[Create a job input from an HTTPS URL](job-input-from-http-how-to.md).
media-services Job Error Codes Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-error-codes-reference.md
+
+ Title: Job (encoding and analyzing) error codes
+description: This article links to job error codes reference topic and gives useful links to related topics.
++
+editor: ''
+
+documentationcenter: ''
++
+ na
+ms.devlang: na
+ Last updated : 08/31/2020
+# Media Services job error codes
++
+This topic links to a REST reference document for a detailed description of [Job](transforms-jobs-concept.md) error codes and messages.
+
+## Job error codes
+
+The following REST document gives detailed explanations about [Job error codes](/rest/api/media/jobs/get#joberrorcode).
+
+## Ask questions, give feedback, get updates
+
+Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
+
+## See also
+
+- [Streaming Endpoint error codes](streaming-endpoint-error-codes.md)
+- [Azure Media Services concepts](concepts-overview.md)
+- [Quotas and limits](limits-quotas-constraints-reference.md)
+
+## Next steps
+
+[Example: access ErrorCode and Message from ApiException with .NET](configure-connect-dotnet-howto.md#connect-to-the-net-client)
media-services Job Multiple Transform Outputs How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-multiple-transform-outputs-how-to.md
+
+ Title: Create a job with multiple transform outputs
+description: This topic demonstrates how to create an Azure Media Services job with multiple transform outputs.
+
+documentationcenter: ''
++
+editor: ''
+ Last updated : 08/31/2020
+# Create a job with multiple transform outputs
++
+This topic shows how to create a Transform with two Transform Outputs. The first one calls for the input to be encoded for adaptive bitrate streaming with a built-in [AdaptiveStreaming](encode-concept.md#builtinstandardencoderpreset) preset. The second one calls for the audio signal in the input video to be processed with the [AudioAnalyzerPreset](analyze-video-audio-files-concept.md#built-in-presets). After the Transform is created, you can submit a job that will process your video accordingly. Since in this example we are specifying two Transform Outputs, we must specify two Job Outputs. You can choose to direct both Job Outputs to the same Asset (as shown below), or you can have the results be written to separate Assets.
+
+> [!TIP]
+> Before you start developing, review [Developing with Media Services v3 APIs](media-services-apis-overview.md) (includes information on accessing APIs, naming conventions, etc.)
+
+## Create a transform
+
+The following code shows how to create a transform that produces two outputs.
+
+```csharp
+private static async Task<Transform> GetOrCreateTransformAsync(
+ IAzureMediaServicesClient client,
+ string resourceGroupName,
+ string accountName,
+ string transformName)
+{
+ // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
+ // also uses the same recipe or Preset for processing content.
+ Transform transform = await client.Transforms.GetAsync(resourceGroupName, accountName, transformName);
+
+ if (transform == null)
+ {
+ // You need to specify what you want it to produce as an output
+ TransformOutput[] output = new TransformOutput[]
+ {
+ new TransformOutput
+ {
+ Preset = new BuiltInStandardEncoderPreset()
+ {
+ // This sample uses the built-in encoding preset for Adaptive Bitrate Streaming.
+ PresetName = EncoderNamedPreset.AdaptiveStreaming
+ }
+ },
+ // Create an audio analyzer preset that extracts insights from the audio signal.
+ new TransformOutput(new AudioAnalyzerPreset("en-US"))
+ };
+
+ // Create the Transform with the output defined above
+ transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, transformName, output);
+ }
+
+ return transform;
+}
+```
+
+## Submit a job
+
+Create a job with an HTTPS URL input and with two job outputs.
+
+```csharp
+private static async Task<Job> SubmitJobAsync(IAzureMediaServicesClient client,
+ string resourceGroup,
+ string accountName,
+ string transformName)
+{
+ // Output from the encoding Job must be written to an Asset, so let's create one
+ string outputAssetName1 = $"output-" + Guid.NewGuid().ToString("N");
+ Asset outputAsset = await client.Assets.CreateOrUpdateAsync(resourceGroup, accountName, outputAssetName1, new Asset());
+
+ // This example shows how to encode from any HTTPS source URL - a new feature of the v3 API.
+ // Change the URL to any accessible HTTPS URL or SAS URL from Azure.
+ JobInputHttp jobInput =
+ new JobInputHttp(files: new[] { "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/Ignite-short.mp4" });
+
+ JobOutput[] jobOutputs =
+ {
+ // Since we are specifying two Transform Outputs, two Job Outputs are needed.
+ // In this example, the first Job Output is for the results from adaptive bitrate encoding,
+ // and the second is for the results from audio analysis. In this example, both are written to the
+ // same output Asset. Or, you can specify different Assets.
+
+ new JobOutputAsset(outputAsset.Name),
+ new JobOutputAsset(outputAsset.Name)
+
+ };
+
+ // In this example, we are using a unique job name.
+ //
+ // If you already have a job with the desired name, use the Jobs.Get method
+ // to get the existing job. In Media Services v3, Get methods on entities return null
+ // if the entity doesn't exist (a case-insensitive check on the name).
+ Job job;
+ try
+ {
+ string jobName = $"job-" + Guid.NewGuid().ToString("N");
+ job = await client.Jobs.CreateAsync(
+ resourceGroup,
+ accountName,
+ transformName,
+ jobName,
+ new Job
+ {
+ Input = jobInput,
+ Outputs = jobOutputs,
+ });
+ }
+ catch (Exception exception)
+ {
+ if (exception.GetBaseException() is ApiErrorException apiException)
+ {
+ Console.Error.WriteLine(
+ $"ERROR: API call failed with error code '{apiException.Body.Error.Code}' and message '{apiException.Body.Error.Message}'.");
+ }
+ throw; // Rethrow without resetting the original stack trace.
+ }
+
+ return job;
+}
+```
+## Job error codes
+
+See [Error codes](/rest/api/media/jobs/get#joberrorcode).
+
+## Next steps
+
+[Azure Media Services v3 samples using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/master/)
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/limits-quotas-constraints-reference.md
+
+ Title: Quotas and limits in Azure Media Services
+description: This topic describes quotas and limits in Microsoft Azure Media Services.
+
+documentationcenter: ''
++
+editor: ''
+ Last updated : 10/23/2020
+<!-- If you update limits in this topic, make sure to also update https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#media-services-limits -->
+# Azure Media Services quotas and limits
++
+This article lists some of the most common Microsoft Azure Media Services limits, which are also sometimes called quotas.
+
+> [!NOTE]
+> For resources that aren't fixed, open a support ticket to ask for an increase in the quotas. Don't create additional Azure Media Services accounts in an attempt to obtain higher limits.
+
+## Account limits
+
+| Resource | Default Limit |
+| | |
+| [Media Services accounts](account-move-account-how-to.md) in a single subscription | 100 (fixed) |
+
+## Asset limits
+
+| Resource | Default Limit |
+| | |
+| [Assets](assets-concept.md) per Media Services account | 1,000,000|
+
+## Storage limits
+
+| Resource | Default Limit |
+| | |
+| File size| In some scenarios, there is a limit on the maximum file size supported for processing in Media Services. <sup>(1)</sup> |
+| [Storage accounts](storage-account-concept.md) | 100<sup>(2)</sup> (fixed) |
+
+<sup>1</sup> The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your Job will likely fail.
+
+The following table shows the limits on the media reserved units S1, S2, and S3. If your source file is larger than the limits defined in the table, your encoding job fails. If you encode 4K resolution sources of long duration, you're required to use S3 media reserved units to achieve the performance needed. If you have 4K content that's larger than the 260-GB limit on the S3 media reserved units, open a support ticket.
+
+|Media reserved unit type|Maximum input size (GB)|
+|||
+|S1 | 26|
+|S2 | 60|
+|S3 |260|
+
+<sup>2</sup> The storage accounts must be from the same Azure subscription.
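A minimal sketch of applying the table above: pick the smallest media reserved unit type whose input-size limit covers the source file (the limits come from the table; the helper itself is illustrative and not part of any Media Services API):

```python
# Maximum input size in GB per media reserved unit type (from the table above).
MAX_INPUT_GB = {"S1": 26, "S2": 60, "S3": 260}

def smallest_unit_for(input_gb):
    """Return the smallest reserved unit type that can process the input,
    or None if the file exceeds even the S3 limit (open a support ticket)."""
    for unit in ("S1", "S2", "S3"):
        if input_gb <= MAX_INPUT_GB[unit]:
            return unit
    return None

print(smallest_unit_for(40))   # S2
print(smallest_unit_for(300))  # None -> contact support
```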
+
+## Jobs (encoding & analyzing) limits
+
+| Resource | Default Limit |
+| | |
+| [Jobs](transforms-jobs-concept.md) per Media Services account | 500,000 <sup>(3)</sup> (fixed)|
+| Job inputs per Job | 50 (fixed)|
+| Job outputs per Job | 20 (fixed) |
+| [Transforms](transforms-jobs-concept.md) per Media Services account | 100 (fixed)|
+| Transform outputs in a Transform | 20 (fixed) |
+| Files per job input|10 (fixed)|
+
+<sup>3</sup> This number includes queued, finished, active, and canceled Jobs. It does not include deleted Jobs.
+
+Any Job record in your account older than 90 days will be automatically deleted, even if the total number of records is below the maximum quota.
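As a sketch, the retention rule can be checked client-side from a job record's creation timestamp (the 90-day window comes from the text above; the helper name and dates are illustrative):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)

def is_past_retention(created_utc, now=None):
    """True if a job record created at `created_utc` is older than 90 days
    and therefore subject to automatic deletion."""
    now = now or datetime.now(timezone.utc)
    return now - created_utc > RETENTION

now = datetime(2021, 3, 31, tzinfo=timezone.utc)
print(is_past_retention(datetime(2021, 1, 1, tzinfo=timezone.utc), now))   # False (89 days old)
print(is_past_retention(datetime(2020, 12, 1, tzinfo=timezone.utc), now))  # True (120 days old)
```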
+
+## Live streaming limits
+
+| Resource | Default Limit |
+| | |
+| [Live Events](live-event-outputs-concept.md) <sup>(4)</sup> per Media Services account |5|
+| Live Outputs per Live Event |3 <sup>(5)</sup> |
+| Max Live Output duration | [Size of the DVR window](live-event-cloud-dvr-time-how-to.md) |
+
+<sup>4</sup> For detailed information about Live Event limits, see [Live Event types comparison and limits](live-event-types-comparison-reference.md).
+
+<sup>5</sup> Live Outputs start on creation and stop when deleted.
+
+## Packaging & delivery limits
+
+| Resource | Default Limit |
+| | |
+| [Streaming Endpoints](streaming-endpoint-concept.md) (stopped or running) per Media Services account | 2 |
+| Premium streaming units | 10 |
+| [Dynamic Manifest Filters](filters-dynamic-manifest-concept.md)|100|
+| [Streaming Policies](streaming-policy-concept.md) | 100 <sup>(6)</sup> |
+| Unique [Streaming Locators](streaming-locators-concept.md) associated with an Asset at one time | 100<sup>(7)</sup> (fixed) |
+
+<sup>6</sup> When using a custom [Streaming Policy](/rest/api/media/streamingpolicies), you should design a limited set of such policies for your Media Service account, and re-use them for your StreamingLocators whenever the same encryption options and protocols are needed. You should not be creating a new Streaming Policy for each Streaming Locator.
+
+<sup>7</sup> Streaming Locators are not designed for managing per-user access control. To give different access rights to individual users, use Digital Rights Management (DRM) solutions.
+
+## Protection limits
+
+| Resource | Default Limit |
+| | |
+| Options per [Content Key Policy](drm-content-key-policy-concept.md) |30 |
+| Licenses per month for each of the DRM types on Media Services key delivery service per account|1,000,000|
+
+## Support ticket
+
+For resources that are not fixed, you may ask for the quotas to be raised, by opening a [support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Include detailed information in the request on the desired quota changes, use-case scenarios, and regions required. <br/>Do **not** create additional Azure Media Services accounts in an attempt to obtain higher limits.
+
+## Next steps
+
+[Overview](media-services-overview.md)
media-services Live Event Cloud Dvr Time How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-cloud-dvr-time-how-to.md
+
+ Title: Use time-shifting to create on-demand video playback
+description: This article describes how to use time-shifting and Live Outputs to record Live Streams and create on-demand playback.
+
+documentationcenter: ''
++
+editor: ''
++
+ na
+ms.devlang: na
+ Last updated : 08/31/2020
+# Use time-shifting and Live Outputs to create on-demand video playback
++
+In Azure Media Services, a [Live Output](/rest/api/media/liveoutputs) object is like a digital video recorder that will catch and record your live stream into an asset in your Media Services account. The recorded content is persisted into the container defined by the [Asset](/rest/api/media/assets) resource (the container is in the Azure Storage account attached to your account). The Live Output also allows you to control some properties of the outgoing live stream, like how much of the stream is kept in the archive recording (for example, the capacity of the cloud DVR) or when viewers can start watching the live stream. The archive on disk is a circular archive "window" that only holds the amount of content that's specified in the **archiveWindowLength** property of the Live Output. Content that falls outside of this window is automatically discarded from the storage container and isn't recoverable. The archiveWindowLength value represents an ISO-8601 timespan duration (for example, PT2H30M for 2 hours 30 minutes), which specifies the capacity of the DVR. The value can be set from a minimum of one minute to a maximum of 25 hours.
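The bounds above can be checked with a small validator. This is an illustrative sketch that parses only a simple hours/minutes/seconds duration shape (like PT1H30M), not the full ISO-8601 duration grammar:

```python
import re
from datetime import timedelta

def parse_archive_window(value):
    """Parse a simple ISO-8601 duration like 'PT1H30M' into a timedelta."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value)
    if not m or not any(m.groups()):
        raise ValueError(f"not a supported duration: {value!r}")
    hours, minutes, seconds = (int(g or 0) for g in m.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

def is_valid_archive_window(value):
    """True if the duration is within the documented 1 minute..25 hour DVR bounds."""
    duration = parse_archive_window(value)
    return timedelta(minutes=1) <= duration <= timedelta(hours=25)

print(is_valid_archive_window("PT30M"))  # True
print(is_valid_archive_window("PT26H"))  # False
```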
+
+The relationship between a Live Event and its Live Outputs is similar to traditional TV broadcast, in that a channel (Live Event) represents a constant stream of video and a recording (Live Output) is scoped to a specific time segment (for example, evening news from 6:30PM to 7:00PM). Once you have the stream flowing into the Live Event, you can begin the streaming event by creating an asset, Live Output, and streaming locator. Live Output will archive the stream and make it available to viewers through the [Streaming Endpoint](/rest/api/medi#general-steps) section.
+
+## Using a DVR during an event
+
+This section discusses how to use a DVR during an event to control which portions of the stream are available for 'rewind'.
+
+The `archiveWindowLength` value determines how far back in time a viewer can go from the current live position. The `archiveWindowLength` value also determines how long the client manifests can grow.
+
+Suppose you're streaming a football game, and it has an `archiveWindowLength` of only 30 minutes. A viewer who starts watching your event 45 minutes after the game started can seek back to at most the 15-minute mark. Your Live Outputs for the game will continue until the Live Event is stopped. Content that falls outside of archiveWindowLength is continuously discarded from storage and is non-recoverable. In this example, the video between the start of the event and the 15-minute mark would have been purged from your DVR and from the asset's container in Azure blob storage, and can't be recovered.
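The seek-back arithmetic in the football example reduces to a one-line calculation. A sketch, purely for illustration:

```python
def earliest_seek_minutes(minutes_since_start, archive_window_minutes):
    """Earliest position (minutes from event start) a viewer can rewind to.

    Illustrative arithmetic only: content older than the archive window
    has already been discarded from the asset's storage container.
    """
    return max(0, minutes_since_start - archive_window_minutes)

# The football example above: 45 minutes in, with a 30-minute window.
print(earliest_seek_minutes(45, 30))  # 15
# A viewer joining 20 minutes in can still rewind to the start.
print(earliest_seek_minutes(20, 30))  # 0
```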
+
+A Live Event supports up to three concurrently running Live Outputs (you can create at most 3 recordings/archives from one live stream at the same time). This support allows you to publish and archive different parts of an event as needed. Suppose you need to broadcast a 24x7 live linear feed, and create "recordings" of the different programs throughout the day to offer to customers as on-demand content for catch-up viewing. For this scenario, you first create a primary Live Output with a short archive window of 1 hour or less; this is the primary live stream that your viewers would tune into. You would create a Streaming Locator for this Live Output and publish it to your app or website as the "Live" feed. While the Live Event is running, you can programmatically create a second concurrent Live Output at the beginning of a program (or 5 minutes early to provide some handles to trim later). This second Live Output can be deleted 5 minutes after the program ends. With this second asset, you can create a new Streaming Locator to publish this program as an on-demand asset in your app's catalog. You can repeat this process multiple times for other program boundaries or highlights that you wish to share as on-demand videos, all while the "Live" feed from the first Live Output continues to broadcast the linear feed.
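The "start early, delete late" timing in the linear-feed scenario can be sketched as a small scheduling helper. This is a hypothetical illustration of the timing logic only, not a Media Services API call:

```python
from datetime import datetime, timedelta

def recording_window(program_start, program_end, handle=timedelta(minutes=5)):
    """When to create and delete a per-program Live Output.

    Hypothetical helper for the linear-feed scenario above: start the
    second Live Output a few minutes early and delete it a few minutes
    after the program ends, leaving handles to trim later.
    """
    return program_start - handle, program_end + handle

# Evening news from 6:30 PM to 7:00 PM.
start, end = recording_window(datetime(2021, 3, 31, 18, 30),
                              datetime(2021, 3, 31, 19, 0))
print(start.time(), end.time())  # 18:25:00 19:05:00
```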
+
+## Creating an archive for on-demand playback
+
+The asset that the Live Output is archiving to automatically becomes an on-demand asset when the Live Output is deleted. You must delete all Live Outputs before a Live Event can be stopped. You can use an optional flag [removeOutputsOnStop](/rest/api/media/liveevents/stop#request-body) to automatically remove Live Outputs on stop.
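The optional flag travels in the stop request body. A minimal sketch of that JSON shape, built in Python for illustration:

```python
import json

# Body for the Live Event stop operation, using the removeOutputsOnStop
# flag described above so that Live Outputs are removed automatically
# instead of being deleted one by one.
stop_body = {"removeOutputsOnStop": True}
print(json.dumps(stop_body))  # {"removeOutputsOnStop": true}
```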
+
+Even after you stop and delete the event, users can stream your archived content as a video on-demand, for as long as you don't delete the asset. An asset shouldn't be deleted if it's used by an event; the event must be deleted first.
+
+If you've published the asset of your Live Output using a streaming locator, the Live Event (up to the DVR window length) will continue to be viewable until the streaming locator's expiry or deletion, whichever comes first.
+
+For more information, see:
+
+- [Live streaming overview](stream-live-streaming-concept.md)
+- [Live streaming tutorial](stream-live-tutorial-with-api.md)
+
+> [!NOTE]
+> When you delete the Live Output, you're not deleting the underlying asset and content in the asset.
+
+## Next steps
+
+* [Subclip your videos](subclip-video-rest-howto.md).
+* [Define filters for your assets](filters-dynamic-manifest-rest-howto.md).
media-services Live Event Error Codes Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-error-codes-reference.md
+
+ Title: Azure Media Services live event error codes
+description: This article lists live event error codes.
+ Last updated : 03/26/2021
+# Media Services Live Event error codes
++
+The following tables list the [Live Event](live-event-outputs-concept.md) error codes.
+
+## LiveEventConnectionRejected
+
+When you subscribe to the [Event Grid](../../event-grid/index.yml) events for a
+live event, you may see one of the following errors from the
+[LiveEventConnectionRejected](monitoring/media-services-event-schemas.md#liveeventconnectionrejected)
+event.
+> [!div class="mx-tdCol2BreakAll"]
+>| Error | Information |
+>|--|--|
+>|**MPE_RTMP_APPID_AUTH_FAILURE** ||
+>|Description | Incorrect ingest URL |
+>|Suggested solution| APPID is a GUID token in the RTMP ingest URL. Make sure it matches the ingest URL from the API. |
+>|**MPE_INGEST_ENCODER_CONNECTION_DENIED** ||
+>| Description |Encoder IP isn't present in the configured IP allow list |
+>| Suggested solution| Make sure the encoder's IP is in the IP Allow List. Use an online tool such as *whoismyip* or *CIDR calculator* to set the proper value. Make sure the encoder can reach the server before the actual live event. |
+>|**MPE_INGEST_RTMP_SETDATAFRAME_NOT_RECEIVED** ||
+>| Description|The RTMP encoder did not send the `setDataFrame` command. |
+>| Suggested solution|Most commercial encoders send stream metadata. For an encoder that pushes a single-bitrate ingest, this may not be an issue; the live event can calculate the incoming bitrate when the stream metadata is missing. For multi-bitrate ingest to a PassThru channel, or a double-push scenario, you can try appending 'videodatarate' and 'audiodatarate' to the query string of the ingest URL. An approximate value may work. The unit is Kbit. For example, `rtmp://hostname:1935/live/GUID_APPID/streamname?videodatarate=5000&audiodatarate=192` |
+>|**MPE_INGEST_CODEC_NOT_SUPPORTED** ||
+>| Description|The codec specified isn't supported.|
+>| Suggested solution| The live event received an unsupported codec. For example, on an RTMP ingest, the live event received a non-AVC video codec. Check the encoder preset. |
+>|**MPE_INGEST_DESCRIPTION_INFO_NOT_RECEIVED** ||
+>| Description |The media description information was not received before the actual media data was delivered. |
+>| Suggested solution|The LiveEvent does not receive the stream description (header or FLV tag) from the encoder. This is a protocol violation. Contact encoder vendor. |
+>|**MPE_INGEST_MEDIA_QUALITIES_EXCEEDED** ||
+>| Description|The count of qualities for audio or video type exceeded the maximum allowed limit. |
+>| Suggested solution|When Live Event mode is Live Encoding, the encoder should push a single bitrate of video and audio. Note that a redundant push from the same bitrate is allowed. Check the encoder preset or output settings to make sure it outputs a single bitrate stream. |
+>|**MPE_INGEST_BITRATE_AGGREGATED_EXCEEDED** ||
+>| Description|The total incoming bitrate in a live event or channel service exceeded the maximum allowed limit. |
+>| Suggested solution|The encoder exceeded the maximum incoming bitrate. This limit aggregates all incoming data from the contributing encoder. Check encoder preset or output settings to reduce bitrate. |
+>|**MPE_RTMP_FLV_TAG_TIMESTAMP_INVALID** ||
+>| Description|The timestamp for video or audio FLVTag is invalid from the RTMP encoder. |
+>| Suggested solution|Deprecated. |
+>|**MPE_INGEST_FRAMERATE_EXCEEDED** ||
+>| Description|The incoming encoder ingested streams with frame rates exceeded the maximum allowed 30 fps for encoding live events/channels. |
+>| Suggested solution|Check the encoder preset to lower the frame rate to 30 fps or below. |
+>|**MPE_INGEST_VIDEO_RESOLUTION_NOT_SUPPORTED** ||
+>| Description|The incoming encoder ingested streams exceeded the following allowed resolutions: 1920x1088 for encoding live events/channels and 4096 x 2160 for pass-through live events/channels. |
+>| Suggested solution|Check encoder preset to lower video resolution so it doesn't exceed the limit. |
+>|**MPE_INGEST_RTMP_TOO_LARGE_UNPROCESSED_FLV** |
+>| Description|The live event has received a large amount of audio data at once, or a large amount of video data without any key frames. We have disconnected the encoder to give it a chance to retry with correct data. |
+>| Suggested solution|Ensure that the encoder sends a key frame for every key frame interval (GOP). Enable settings like "Constant bitrate (CBR)" or "Align Key Frames". Sometimes, resetting the contributing encoder may help. If it doesn't help, contact the encoder vendor. |
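The `MPE_INGEST_RTMP_SETDATAFRAME_NOT_RECEIVED` workaround above appends bitrate hints to the ingest URL. A small stdlib-Python sketch of that URL manipulation (the helper name is illustrative; `GUID_APPID` is the doc's own placeholder):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_bitrate_hints(ingest_url, video_kbps, audio_kbps):
    """Append videodatarate/audiodatarate (in Kbit) to an RTMP ingest URL.

    Illustrative helper; the query parameters are the ones described in
    the setDataFrame workaround above.
    """
    parts = urlsplit(ingest_url)
    hints = urlencode({"videodatarate": video_kbps,
                       "audiodatarate": audio_kbps})
    query = f"{parts.query}&{hints}" if parts.query else hints
    return urlunsplit(parts._replace(query=query))

print(with_bitrate_hints(
    "rtmp://hostname:1935/live/GUID_APPID/streamname", 5000, 192))
# rtmp://hostname:1935/live/GUID_APPID/streamname?videodatarate=5000&audiodatarate=192
```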
+
+## LiveEventEncoderDisconnected
+
+You may see one of the following errors from the
+[LiveEventEncoderDisconnected](monitoring/media-services-event-schemas.md#liveeventencoderdisconnected)
+event.
+
+> [!div class="mx-tdCol2BreakAll"]
+>| Error | Information |
+>|--|--|
+>|**MPE_RTMP_SESSION_IDLE_TIMEOUT** |
+>| Description|RTMP session timed out after being idle for allowed time limit. |
+>|Suggested solution|This typically happens when an encoder stops receiving the input feed so that the session becomes idle because there is no data to push out. Check if the encoder or input feed status is in a healthy state. |
+>|**MPE_RTMP_FLV_TAG_TIMESTAMP_INVALID** |
+>|Description| The timestamp for the video or audio FLVTag is invalid from RTMP encoder. |
+>| Suggested solution| Deprecated. |
+>|**MPE_CAPACITY_LIMIT_REACHED** |
+>| Description|Encoder sending data too fast. |
+>| Suggested solution|This happens when the encoder bursts out a large set of fragments in a brief period. This can theoretically happen when the encoder can't push data for a while due to a network issue and then bursts out data when the network is available. Find the cause in the encoder or system logs. |
+>|**Unknown error codes** |
+>| Description| These error codes can range from memory errors to duplicate entries in a hash map. This can happen when the encoder sends out a large set of fragments in a brief period. This can also happen when the encoder couldn't push data for a while due to a network issue and then sends all the delayed fragments at once when the network becomes available. |
+>|Suggested solution| Check the encoder logs.|
+
+## Other error codes
+
+> [!div class="mx-tdCol2BreakAll"]
+>| Error | Information |Rejected/Disconnected Event|
+>|--|--|--|
+>|**ERROR_END_OF_MEDIA** ||Yes|
+>| Description|This is a general error. ||
+>|Suggested solution| None.||
+>|**MPI_SYSTEM_MAINTENANCE** ||Yes|
+>| Description|The encoder disconnected due to service update or system maintenance. ||
+>|Suggested solution|Make sure the encoder enables 'auto connect'. It allows the encoder to reconnect to the redundant live event endpoint that is not in maintenance. ||
+>|**MPE_BAD_URL_SYNTAX** ||Yes|
+>| Description|The ingest URL is incorrectly formatted. ||
+>|Suggested solution|Make sure the ingest URL is correctly formatted. For RTMP, it should be `rtmp[s]://hostname:port/live/GUID_APPID/streamname` ||
+>|**MPE_CLIENT_TERMINATED_SESSION** ||Yes|
+>| Description|The encoder disconnected the session. ||
+>|Suggested solution|This isn't an error. The encoder initiated the disconnection, including a graceful disconnection. If the disconnect was unexpected, check the encoder logs. ||
+>|**MPE_INGEST_BITRATE_NOT_MATCH** ||No|
+>| Description|The incoming data rate doesn't match the expected bitrate. ||
+>|Suggested solution|This is a warning that occurs when the incoming data rate is too slow or too fast. Check the encoder or system logs.||
+>|**MPE_INGEST_DISCONTINUITY** ||No|
+>| Description| There is a discontinuity in the incoming data.||
+>|Suggested solution| This is a warning that the encoder dropped data due to a network issue or a system resource issue. Check the encoder or system logs, and monitor system resources (CPU, memory, and network) as well. If the system CPU is too high, try lowering the bitrate or using the hardware encoder option from the system graphics card.||
+
+## See also
+
+[Streaming Endpoint (Origin) error codes](streaming-endpoint-error-codes.md)
+
+## Next steps
+
+[Tutorial: Stream live with Media Services](stream-live-tutorial-with-api.md)
media-services Live Event Latency Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-latency-reference.md
+
+ Title: LiveEvent low latency settings in Azure Media Services
+description: This topic gives an overview of LiveEvent low latency settings and shows how to set low latency.
+
+ Last updated : 08/31/2020
+# Live Event low latency settings
++
+This article shows how to set low latency on a [Live Event](/rest/api/media/liveevents). It also discusses typical results that you see when using the low latency settings in various players. The results vary based on CDN and network latency.
+
+To use the new **LowLatency** feature, you set the **StreamOptionsFlag** to **LowLatency** on the **LiveEvent**. When creating [LiveOutput](/rest/api/media/liveoutputs) for HLS playback, set [LiveOutput.Hls.fragmentsPerTsSegment](/rest/api/media/liveoutputs/create#hls) to 1. Once the stream is up and running, you can use the [Azure Media Player](https://ampdemo.azureedge.net/) (AMP demo page), and set the playback options to use the "Low Latency Heuristics Profile".
+
+> [!NOTE]
+> Currently, the LowLatency HeuristicProfile in Azure Media Player is designed for playing back streams in MPEG-DASH protocol, with either CSF or CMAF format (for example, `format=mpd-time-csf` or `format=mpd-time-cmaf`).
+
+The following .NET example shows how to set **LowLatency** on the **LiveEvent**:
+
+[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#NewLiveEvent)]
+
+See the full example: [Live Event with DVR](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/Program.cs).
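Expressed as REST request bodies, the two settings above can be sketched as the following JSON shapes. Property names follow the **streamOptions** and **Hls.fragmentsPerTsSegment** properties mentioned above; the asset name and other values are illustrative assumptions:

```python
import json

# Illustrative request-body shapes only, not a complete API call.
live_event_body = {
    "properties": {
        "input": {"streamingProtocol": "RTMP"},
        "streamOptions": ["LowLatency"],   # enable the LowLatency feature
    },
    "location": "West US 2",
}

live_output_body = {
    "properties": {
        "assetName": "myDvrAsset",             # hypothetical asset name
        "archiveWindowLength": "PT1H",
        "hls": {"fragmentsPerTsSegment": 1},   # 1 fragment per TS segment
    },
}

print(json.dumps(live_event_body["properties"]["streamOptions"]))
# ["LowLatency"]
```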
+
+## Live Events latency
+
+The following tables show typical results for latency (when the LowLatency flag is enabled) in Media Services, measured from the time the contribution feed reaches the service to when a viewer sees the playback on the player. To use low latency optimally, tune your encoder settings down to a 1-second "Group of Pictures" (GOP) length. A longer GOP length reduces bandwidth consumption and bitrate at the same frame rate, which is especially beneficial for videos with less motion.
+
+### Pass-through
+
+||2s GOP low latency enabled|1s GOP low latency enabled|
+||||
+|**DASH in AMP**|10s|8s|
+|**HLS on native iOS player**|14s|10s|
+
+### Live encoding
+
+||2s GOP low latency enabled|1s GOP low latency enabled|
+||||
+|**DASH in AMP**|14s|10s|
+|**HLS on native iOS player**|18s|13s|
+
+> [!NOTE]
+> The end-to-end latency can vary depending on local network conditions or by introducing a CDN caching layer. You should test your exact configurations.
+
+## Next steps
+
+- [Live streaming overview](stream-live-streaming-concept.md)
+- [Live streaming tutorial](stream-live-tutorial-with-api.md)
media-services Live Event Live Transcription How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-live-transcription-how-to.md
+
+ Title: Live transcription with Azure Media Services
+description: Learn about Azure Media Services live transcription.
+
+ Last updated : 08/31/2020
+# Live transcription (preview)
++
+Azure Media Services delivers video, audio, and text in different protocols. When you publish your live stream using MPEG-DASH or HLS/CMAF, then along with video and audio, our service delivers the transcribed text in IMSC1.1-compatible TTML, packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments. If you use delivery via HLS/TS, the text is delivered as chunked VTT.
+
+Additional charges apply when live transcription is turned on. Please review the pricing information in the Live Video section of the [Media Services pricing page](https://azure.microsoft.com/pricing/details/media-services/).
+
+This article describes how to enable live transcription when streaming a Live Event with Azure Media Services. Before you continue, make sure you're familiar with the use of Media Services v3 REST APIs (see [this tutorial](stream-files-tutorial-with-rest.md) for details). You should also be familiar with the [live streaming](stream-live-streaming-concept.md) concept. It's recommended to complete the [Stream live with Media Services](stream-live-tutorial-with-api.md) tutorial.
+
+## Live transcription preview regions and languages
+
+Live transcription is available in the following regions:
+
+- Southeast Asia
+- West Europe
+- North Europe
+- East US
+- Central US
+- South Central US
+- West US 2
+- Brazil South
+
+The following languages are available for transcription. Use the language code in the API.
+
+| Language | Language code |
+| -- | - |
+| Catalan | ca-ES |
+| Danish (Denmark) | da-DK |
+| German (Germany) | de-DE |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (United Kingdom) | en-GB |
+| English (India) | en-IN |
+| English (New Zealand) | en-NZ |
+| English (United States) | en-US |
+| Spanish (Spain) | es-ES |
+| Spanish (Mexico) | es-MX |
+| Finnish (Finland) | fi-FI |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| Italian (Italy) | it-IT |
+| Dutch (Netherlands) | nl-NL |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Swedish (Sweden) | sv-SE |
+
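A client can validate a requested code against the table above before sending the API call. A small sketch; the helper and the set literal are illustrative, the codes come straight from the table:

```python
# Preview transcription languages from the table above, keyed by code.
SUPPORTED = {
    "ca-ES", "da-DK", "de-DE", "en-AU", "en-CA", "en-GB", "en-IN",
    "en-NZ", "en-US", "es-ES", "es-MX", "fi-FI", "fr-CA", "fr-FR",
    "it-IT", "nl-NL", "pt-BR", "pt-PT", "sv-SE",
}

def validate_transcription_language(code):
    """Fail fast before sending the PUT if the language isn't supported."""
    if code not in SUPPORTED:
        raise ValueError(f"live transcription does not support {code!r}")
    return code

print(validate_transcription_language("en-US"))  # en-US
```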
+## Create the live event with live transcription
+
+To create a live event with the transcription turned on, send the PUT operation with the 2019-05-01-preview API version, for example:
+
+```
+PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/liveEvents/:liveEventName?api-version=2019-05-01-preview&autoStart=true
+```
+
+The operation has the following body (where a pass-through Live Event is created with RTMP as the ingest protocol). Note the addition of a transcriptions property.
+
+```
+{
+ "properties": {
+ "description": "Demonstrate how to enable live transcriptions",
+ "input": {
+ "streamingProtocol": "RTMP",
+ "accessControl": {
+ "ip": {
+ "allow": [
+ {
+ "name": "Allow All",
+ "address": "0.0.0.0",
+ "subnetPrefixLength": 0
+ }
+ ]
+ }
+ }
+ },
+ "preview": {
+ "accessControl": {
+ "ip": {
+ "allow": [
+ {
+ "name": "Allow All",
+ "address": "0.0.0.0",
+ "subnetPrefixLength": 0
+ }
+ ]
+ }
+ }
+ },
+ "encoding": {
+ "encodingType": "None"
+ },
+ "transcriptions": [
+ {
+ "language": "en-US"
+ }
+ ],
+ "useStaticHostname": false,
+ "streamOptions": [
+ "Default"
+ ]
+ },
+ "location": "West US 2"
+}
+```
+
+## Start or stop transcription after the live event has started
+
+You can start and stop live transcription while the live event is in running state. For more information about starting and stopping live events, read the Long-running operations section at [Develop with Media Services v3 APIs](media-services-apis-overview.md#long-running-operations).
+
+To turn on live transcriptions or to update the transcription language, patch the live event to include a "transcriptions" property. To turn off live transcriptions, remove the "transcriptions" property from the live event object.
+
+> [!NOTE]
+> Turning the transcription on or off **more than once** during the live event is not a supported scenario.
+
+This is the sample call to turn on live transcriptions.
+
+PATCH: ```https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/liveEvents/:liveEventName?api-version=2019-05-01-preview```
+
+```
+{
+ "properties": {
+ "description": "Demonstrate how to enable live transcriptions",
+ "input": {
+ "streamingProtocol": "RTMP",
+ "accessControl": {
+ "ip": {
+ "allow": [
+ {
+ "name": "Allow All",
+ "address": "0.0.0.0",
+ "subnetPrefixLength": 0
+ }
+ ]
+ }
+ }
+ },
+ "preview": {
+ "accessControl": {
+ "ip": {
+ "allow": [
+ {
+ "name": "Allow All",
+ "address": "0.0.0.0",
+ "subnetPrefixLength": 0
+ }
+ ]
+ }
+ }
+ },
+ "encoding": {
+ "encodingType": "None"
+ },
+ "transcriptions": [
+ {
+ "language": "en-US"
+ }
+ ],
+ "useStaticHostname": false,
+ "streamOptions": [
+ "Default"
+ ]
+ },
+ "location": "West US 2"
+}
+```
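The add-or-remove edit to the request body can be sketched as a small helper. This is an illustration of the PATCH body shapes described above, not an SDK call; the function name is hypothetical:

```python
import copy

def set_transcription(body, language=None):
    """Return a copy of a live event body with transcription on or off.

    Pass a language code to add the transcriptions property, or None to
    remove it. Remember that toggling more than once during a live event
    isn't a supported scenario.
    """
    patched = copy.deepcopy(body)
    props = patched.setdefault("properties", {})
    if language is None:
        props.pop("transcriptions", None)     # turn transcription off
    else:
        props["transcriptions"] = [{"language": language}]
    return patched

body = {"properties": {"encoding": {"encodingType": "None"}}}
on = set_transcription(body, "en-US")
off = set_transcription(on)
print("transcriptions" in on["properties"])   # True
print("transcriptions" in off["properties"])  # False
```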
+
+## Transcription delivery and playback
+
+Review the [Dynamic packaging overview](encode-dynamic-packaging-concept.md#to-prepare-your-source-files-for-delivery) article for details on how our service uses dynamic packaging to deliver video, audio, and text in different protocols. When you publish your live stream using MPEG-DASH or HLS/CMAF, then along with video and audio, our service delivers the transcribed text in IMSC1.1-compatible TTML. This delivery is packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments. If you use delivery via HLS/TS, the text is delivered as chunked VTT. You can use a web player such as the [Azure Media Player](use-azure-media-player.md) to play the stream.
+
+> [!NOTE]
+> If using Azure Media Player, use version 2.3.3 or later.
+
+## Known issues
+
+For preview, the following are known issues with live transcription:
+
+- Apps need to use the preview APIs, described in the [Media Services v3 OpenAPI Specification](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/preview/2019-05-01-preview/streamingservice.json).
+- Digital rights management (DRM) protection doesn't apply to the text track; only AES envelope encryption is possible.
+
+## Next steps
+
+* [Media Services overview](media-services-overview.md)
media-services Live Event Obs Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-obs-quickstart.md
+
+ Title: Create a live stream with OBS Studio
+description: Learn how to create an Azure Media Services live stream by using the portal and OBS Studio
+ Last updated : 03/20/2021
+# Create an Azure Media Services live stream with OBS
++
+This quickstart will help you create a Media Services Live Event by using the Azure portal and broadcast by using Open Broadcaster Software (OBS) Studio. It assumes that you have an Azure subscription and have created a Media Services account.
+
+In this quickstart, we'll cover:
+
+- Setting up an on-premises encoder with OBS.
+- Setting up a live stream.
+- Setting up live stream outputs.
+- Running a default streaming endpoint.
+- Using Azure Media Player to view the live stream and on-demand output.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Sign in to the Azure portal
+
+Open your web browser, and go to the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+
+## Set up an on-premises encoder by using OBS
+
+1. Download and install OBS for your operating system on the [Open Broadcaster Software website](https://obsproject.com/).
+1. Start the application and keep it open.
+
+## Run the default streaming endpoint
+
+1. Select **Streaming endpoints** in the Media Services listing.
+
+ ![Streaming endpoints menu item.](media/live-events-obs-quickstart/streaming-endpoints.png)
+1. If the default streaming endpoint status is stopped, select it. This step takes you to the page for that endpoint.
+1. Select **Start**.
+
+ ![Start button for the streaming endpoint.](media/live-events-obs-quickstart/start.png)
+
+## Set up an Azure Media Services live stream
+
+1. Go to the Azure Media Services account within the portal, and then select **Live streaming** from the **Media Services** listing.
+
+ ![Live streaming link.](media/live-events-obs-quickstart/select-live-streaming.png)
+1. Select **Add live event** to create a new live streaming event.
+
+ ![Add live event icon.](media/live-events-obs-quickstart/add-live-event.png)
+1. Enter a name for your new event, such as *TestLiveEvent*, in the **Live event name** box.
+
+ ![Live event name box.](media/live-events-obs-quickstart/live-event-name.png)
+1. Enter an optional description of the event in the **Description** box.
+1. Select the **Pass-through – no cloud encoding** option.
+
+ ![Cloud encoding option.](media/live-events-obs-quickstart/cloud-encoding.png)
+1. Select the **RTMP** option.
+1. Make sure that the **No** option is selected for **Start live event**, to avoid being billed for the live event before it's ready. (Billing will begin when the live event is started.)
+
+ ![Start live event option.](media/live-events-obs-quickstart/start-live-event-no.png)
+1. Select the **Review + create** button to review the settings.
+1. Select the **Create** button to create the live event. You're then returned to the live event listing.
+1. Select the link to the live event that you created. Notice that your event is stopped.
+1. Keep this page open in your browser. We'll come back to it later.
+
+## Set up a live stream by using OBS Studio
+
+OBS starts with a default scene but with no inputs selected.
+
+ ![OBS default screen](media/live-events-obs-quickstart/live-event-obs-default-screen.png)
+
+### Add a video source
+
+1. From the **Sources** panel, select the **add** icon to select a new source device. The **Sources** menu will open.
+
+1. Select **Video Capture Device** from the source device menu. The **Create/Select Source** menu will open.
+
+ ![OBS sources menu with video device selected.](media/live-events-obs-quickstart/live-event-obs-video-device-menu.png)
+
+1. Select the **Add Existing** radio button, then select **OK**. The **Properties for Video Device** menu will open.
+
+ ![OBS new video source menu with add existing selected.](media/live-events-obs-quickstart/live-event-obs-new-video-source.png)
+
+1. From the **Device** dropdown list, select the video input you want to use for your broadcast. Leave the rest of the settings alone for now, and select **OK**. The input source will be added to the **Sources** panel, and the video input view will show up in the **Preview** area.
+
+ ![OBS camera settings](media/live-events-obs-quickstart/live-event-surface-camera.png)
+
+### Add an audio source
+
+1. From the **Sources** panel, select the **add** icon to select a new source device. The Source Device menu will open.
+
+1. Select **Audio Input Capture** from the source device menu. The **Create/Select Source** menu will open.
+
+ ![OBS sources menu with audio device selected.](media/live-events-obs-quickstart/live-event-obs-audio-device-menu.png)
+
+1. Select the **Add Existing** radio button, then select **OK**. The **Properties for Audio Input Capture** menu will open.
+
+ ![OBS audio source with add existing selected.](media/live-events-obs-quickstart/live-event-obs-new-audio-source.png)
+
+1. From the **Device** dropdown list, select the audio capture device you want to use for your broadcast. Leave the rest of the settings alone for now, and select **OK**. The audio capture device will be added to the audio mixer panel.
+
+ ![OBS audio device selection dropdown list](media/live-events-obs-quickstart/live-event-select-audio-device.png)
+
+### Set up streaming and advanced encoding settings in OBS
+
+In the next procedure, you'll go back to Azure Media Services in your browser to copy the input URL to enter into the output settings:
+
+1. On the Azure Media Services page of the portal, select **Start** to start the live stream event. (Billing starts now.)
+
+ ![Start icon.](media/live-events-obs-quickstart/start.png)
+1. Set the **RTMP** toggle to **RTMPS**.
+1. In the **Input URL** box, copy the URL to your clipboard.
+
+ ![Input URL.](media/live-events-obs-quickstart/input-url.png)
+
+1. Switch to the OBS application.
+
+1. Select the **Settings** button in the **Controls** panel. The Settings options will open.
+
+ ![OBS Controls panel with settings selected.](media/live-events-obs-quickstart/live-event-obs-settings.png)
+
+1. Select **Stream** from the **Settings** menu.
+
+1. From the **Service** dropdown list, select **Show all**, then select **Custom...**.
+
+1. In the **Server** field, paste the RTMPS URL you copied to your clipboard.
+
+1. Enter something into the **Stream key** field. It doesn't really matter what it is, but it needs to have a value.
+
+ ![OBS stream settings.](media/live-events-obs-quickstart/live-event-obs-stream-settings.png)
+
+1. Select **Output** from the **Settings** menu.
+
+1. Select the **Output Mode** dropdown at the top of the page and choose **Advanced** to access all of the available encoder settings.
+
+1. Select the **Streaming** tab to set up the encoder.
+
+1. Select the right encoder for your system. If your hardware supports GPU acceleration, choose from NVIDIA **NVENC** H.264 or Intel **QuickSync** H.264. If your system doesn't have a supported GPU, select the **X264** software encoder option.
+
+#### X264 Encoder settings
+
+1. If you have selected the **X264** encoding option, select the **Rescale Output** box. Select 1920x1080 if you're using a Premium Live Event in Media Services, or 1280x720 if you're using a Standard (720P) Live Event. If you're using a pass-through live event, you can choose any available resolution.
+
+1. Set the **Bitrate** to anywhere between 1500 Kbps and 4000 Kbps. We recommend 2500 Kbps if you are using a Standard encoding Live Event at 720P. If you are using a 1080P Premium Live Event, 4000 Kbps is recommended. You may wish to adjust the bitrate based on available CPU capabilities and bandwidth on your network to achieve the desired quality setting.
+
+1. Enter *2* into the **Keyframe interval** field. This sets the key frame interval to 2 seconds, which controls the final size of the fragments delivered over HLS or DASH from Media Services. Never set the key frame interval higher than 4 seconds. If you're seeing high latency when broadcasting, double-check this setting and make sure your application's users set this value to 2 seconds. When you're trying to achieve lower-latency live delivery, you can set this value as low as 1 second.
+
+1. OPTIONAL: Set the CPU Usage Preset to **veryfast** and run some experiments to see if your local CPU can handle the combination of bitrate and preset with enough overhead. Try to avoid settings that would result in an average CPU higher than 80% to avoid any issues during live streaming. To improve quality, you can test with **faster** and **fast** preset settings until you reach your CPU limitations.
+
+ ![OBS X264 encoder settings](media/live-events-obs-quickstart/live-event-obs-x264-settings.png)
+
+1. Leave the rest of the settings unchanged and select **OK**.
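The key frame interval above maps to GOP size as a simple product of frame rate and interval. A sketch of that arithmetic (the helper name and bounds check are illustrative):

```python
def gop_frames(fps, keyframe_interval_seconds=2):
    """Frames per GOP for a given frame rate and key frame interval.

    Simple arithmetic to illustrate the setting above; the fragment size
    delivered over HLS or DASH follows the encoder's key frame interval.
    """
    if not 1 <= keyframe_interval_seconds <= 4:
        raise ValueError("keep the key frame interval between 1 and 4 seconds")
    return int(fps * keyframe_interval_seconds)

print(gop_frames(30))     # 60
print(gop_frames(30, 1))  # 30, for lower-latency delivery
```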
+
+#### Nvidia NVENC Encoder settings
+
+1. If you have selected the **NVENC** GPU encoding option, check the **Rescale Output** box and select either 1920x1080 if you are using a Premium Live Event in Media Services, or 1280x720 if you are using a Standard (720P) Live Event. If you are using a pass-through live event, you can choose any available resolution.
+
+1. Set the **Rate Control** to CBR for Constant Bitrate rate control.
+
+1. Set the **Bitrate** anywhere between 1500 Kbps and 4000 Kbps. We recommend 2500 Kbps if you are using a Standard encoding Live Event at 720P. If you are using a 1080P Premium Live Event, 4000 Kbps is recommended. You may choose to adjust this based on available CPU capabilities and bandwidth on your network to achieve the desired quality setting.
+
+1. Set the **Keyframe Interval** to 2 seconds as noted above under the X264 options. Do not exceed 4 seconds, as this can significantly impact the latency of your live broadcast.
+
+1. Set the **Preset** to Low-Latency, Low-Latency Performance, or Low-Latency Quality depending on the CPU speed on your local machine. Experiment with these settings to achieve the best balance between quality and CPU utilization on your own hardware.
+
+1. Set the **Profile** to "main" or "high" if you are using a more powerful hardware configuration.
+
+1. Leave **Look-ahead** unchecked unless you have a very powerful machine.
+
+1. Leave **Psycho Visual Tuning** unchecked unless you have a very powerful machine.
+
+1. Set the **GPU** to 0 to automatically decide which GPUs to allocate. If desired, you can restrict GPU usage.
+
+1. Set the **Max B-frames** to 2.
+
+ ![OBS Nvidia NVENC GPU encoder settings.](media/live-events-obs-quickstart/live-event-obs-nvidia-settings.png)
+
+#### Intel QuickSync Encoder settings
+
+1. If you have selected the Intel **QuickSync** GPU encoding option, check the **Rescale Output** box and select either 1920x1080 if you are using a Premium Live Event in Media Services, or 1280x720 if you are using a Standard (720P) Live Event. If you are using a pass-through live event, you can choose any available resolution.
+
+1. Set the **Target Usage** to "balanced", then adjust as needed based on your combined CPU and GPU load. Experiment to stay under 80% average CPU utilization with the best quality your hardware can produce. On more constrained hardware, test with "fast", or drop to "very fast" if you have performance issues.
+
+1. Set the **Profile** to "main" or "high" if you are using a more powerful hardware configuration.
+
+1. Set the **Keyframe Interval** to 2 seconds as noted above under the X264 options. Do not exceed 4 seconds, as this can significantly impact the latency of your live broadcast.
+
+1. Set the **Rate Control** to CBR for Constant Bitrate rate control.
+
+1. Set the **Bitrate** anywhere between 1500 and 4000 Kbps. We recommend 2500 Kbps if you are using a Standard encoding Live Event at 720P. If you are using a 1080P Premium Live Event, 4000 Kbps is recommended. You may choose to adjust this based on available CPU capabilities and bandwidth on your network to achieve the desired quality setting.
+
+1. Set the **Latency** to "low".
+
+1. Set the **B frames** to 2.
+
+1. Leave the **Subjective Video Enhancements** unchecked.
+
+ ![OBS Intel QuickSync GPU encoder settings.](media/live-events-obs-quickstart/live-event-obs-intel-settings.png)
+
+### Set Audio settings
+
+In the next procedure, you will adjust the audio encoding settings.
+
+1. Select the **Output** > **Audio** tab in **Settings**.
+
+1. Set the Track 1 **Audio Bitrate** to 128 Kbps.
+
+ ![OBS Audio Bitrate settings.](media/live-events-obs-quickstart/live-event-obs-audio-output-panel.png)
+
+1. Select the **Audio** tab in **Settings**.
+
+1. Set the **Sample Rate** to 44.1 kHz.
+
+ ![OBS Audio Sample Rate settings.](media/live-events-obs-quickstart/live-event-obs-audio-sample-rate-settings.png)
+
+### Start streaming
+
+1. In the **Controls** panel, click **Start Streaming**.
+
+ ![OBS start streaming button.](media/live-events-obs-quickstart/live-event-obs-start-streaming.png)
+
+2. Switch to the Azure Media Services Live event screen in your browser and click the **Reload Player** link. You should now see your stream in the Preview player.
+
+## Set up outputs
+
+In this section, you set up the outputs, which lets you save a recording of your live stream.
+
+> [!NOTE]
+> For you to stream this output, the streaming endpoint must be running. See the later [Run the default streaming endpoint](#run-the-default-streaming-endpoint) section.
+
+1. Select the **Create outputs** link below the **Outputs** video viewer.
+1. If you like, edit the name of the output in the **Name** box to something more user-friendly so it's easy to find later.
+
+ ![Output name box.](media/live-events-wirecast-quickstart/output-name.png)
+1. Leave all the rest of the boxes alone for now.
+1. Select **Next** to add a streaming locator.
+1. Change the name of the locator to something more user-friendly, if you want.
+
+ ![Locator name box.](media/live-events-wirecast-quickstart/live-event-locator.png)
+1. Leave everything else on this screen alone for now.
+1. Select **Create**.
+
+## Play the output broadcast by using Azure Media Player
+
+1. Copy the streaming URL under the **Output** video player.
+1. In a web browser, open the [Azure Media Player demo](https://ampdemo.azureedge.net/azuremediaplayer.html).
+1. Paste the streaming URL into the **URL** box of Azure Media Player.
+1. Select the **Update Player** button.
+1. Select the **Play** icon on the video to see your live stream.
+
+## Stop the broadcast
+
+When you think you've streamed enough content, stop the broadcast.
+
+1. In the portal, select **Stop**.
+
+1. In OBS, select the **Stop Streaming** button in the **Controls** panel. This step stops the broadcast from OBS.
+
+## Play the on-demand output by using Azure Media Player
+
+The output that you created is now available for on-demand streaming as long as your streaming endpoint is running.
+
+1. Go to the Media Services listing and select **Assets**.
+1. Find the event output that you created earlier and select the link to the asset. The asset output page opens.
+1. Copy the streaming URL under the video player for the asset.
+1. Return to Azure Media Player in the browser and paste the streaming URL into the URL box.
+1. Select **Update Player**.
+1. Select the **Play** icon on the video to view the on-demand asset.
+
+## Clean up resources
+
+> [!IMPORTANT]
+> Stop the services! After you've completed the steps in this quickstart, be sure to stop the live event and the streaming endpoint, or you'll be billed for the time they remain running. To stop the live event, see the [Stop the broadcast](#stop-the-broadcast) procedure, steps 1 and 2.
+
+To stop the streaming endpoint:
+
+1. From the Media Services listing, select **Streaming endpoints**.
+2. Select the default streaming endpoint that you started earlier. This step opens the endpoint's page.
+3. Select **Stop**.
+
+> [!TIP]
+> If you don't want to keep the assets from this event, be sure to delete them so you're not billed for storage.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Live events and live outputs in Media Services](./live-event-outputs-concept.md)
media-services Live Event Outputs Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-outputs-concept.md
+
+ Title: Live events and live outputs concepts
+description: This topic provides an overview of live events and live outputs in Azure Media Services v3.
+
+ Last updated : 10/23/2020
+# Live events and live outputs in Media Services
+
+Azure Media Services lets you deliver live events to your customers on the Azure cloud. To set up your live streaming events in Media Services v3, you need to understand the concepts discussed in this article.
+
+> [!TIP]
+> For customers migrating from Media Services v2 APIs, the **live event** entity replaces **Channel** in v2 and **live output** replaces **program**.
+
+## Live events
+
+[Live events](/rest/api/media/liveevents) are responsible for ingesting and processing the live video feeds. When you create a live event, primary and secondary input endpoints are created that you can use to send a live signal from a remote encoder. The remote live encoder sends the contribution feed to that input endpoint using either the [RTMP](https://www.adobe.com/devnet/rtmp.html) or [Smooth Streaming](/openspecs/windows_protocols/ms-sstr/8383f27f-7efe-4c60-832a-387274457251) (fragmented-MP4) input protocol. For the RTMP ingest protocol, the content can be sent in the clear (`rtmp://`) or securely encrypted on the wire (`rtmps://`). For the Smooth Streaming ingest protocol, the supported URL schemes are `http://` or `https://`.
+
+## Live event types
+
+A [live event](/rest/api/media/liveevents) can be set to either *pass-through* (an on-premises live encoder sends a multiple bitrate stream) or *live encoding* (an on-premises live encoder sends a single bitrate stream). The type is set during creation using [LiveEventEncodingType](/rest/api/media/liveevents/create#liveeventencodingtype):
+
+* **LiveEventEncodingType.None**: An on-premises live encoder sends a multiple bitrate stream. The ingested stream passes through the live event without any further processing. Also called the pass-through mode.
+* **LiveEventEncodingType.Standard**: An on-premises live encoder sends a single bitrate stream to the live event and Media Services creates multiple bitrate streams. If the contribution feed is of 720p or higher resolution, the **Default720p** preset will encode a set of 6 resolution/bitrates pairs.
+* **LiveEventEncodingType.Premium1080p**: An on-premises live encoder sends a single bitrate stream to the live event and Media Services creates multiple bitrate streams. The Default1080p preset specifies the output set of resolution/bitrates pairs.
+
+### Pass-through
+
+![pass-through live event with Media Services example diagram](./media/live-streaming/pass-through.svg)
+
+When using the pass-through **live event**, you rely on your on-premises live encoder to generate a multiple bitrate video stream and send that as the contribution feed to the live event (using RTMP or fragmented-MP4 protocol). The live event then carries through the incoming video streams without any further processing. Such a pass-through live event is optimized for long-running live events or 24x365 linear live streaming. When creating this type of live event, specify None (LiveEventEncodingType.None).
+
+You can send the contribution feed at resolutions up to 4K and at a frame rate of 60 frames/second, with either H.264/AVC or H.265/HEVC video codecs, and AAC (AAC-LC, HE-AACv1, or HE-AACv2) audio codec. For more information, see [Live event types comparison](live-event-types-comparison-reference.md).
+
+> [!NOTE]
+> Using a pass-through method is the most economical way to do live streaming when you're doing multiple events over a long period of time, and you have already invested in on-premises encoders. See [Pricing](https://azure.microsoft.com/pricing/details/media-services/) details.
+>
+
+See the .NET code example for creating a pass-through Live Event in [Live Event with DVR](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/4a436376e77bad57d6cbfdc02d7df6c615334574/Live/LiveEventWithDVR/Program.cs#L214).
+
+### Live encoding
+
+![live encoding with Media Services example diagram](./media/live-streaming/live-encoding.svg)
+
+When using live encoding with Media Services, you configure your on-premises live encoder to send a single bitrate video as the contribution feed to the live event (using RTMP or fragmented-MP4 protocol). You then set up a live event so that it encodes that incoming single bitrate stream to a [multiple bitrate video stream](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming), and makes the output available for delivery to playback devices via protocols like MPEG-DASH, HLS, and Smooth Streaming.
+
+When you use live encoding, you can send the contribution feed only at resolutions up to 1080p at a frame rate of 30 frames/second, with the H.264/AVC video codec and AAC (AAC-LC, HE-AACv1, or HE-AACv2) audio codec. Note that pass-through live events can support resolutions up to 4K at 60 frames/second. For more information, see [Live event types comparison](live-event-types-comparison-reference.md).
+
+The resolutions and bitrates contained in the output from the live encoder are determined by the preset. If using a **Standard** live encoder (LiveEventEncodingType.Standard), then the *Default720p* preset specifies a set of six resolution/bit rate pairs, going from 720p at 3.5 Mbps down to 192p at 200 kbps. Otherwise, if using a **Premium1080p** live encoder (LiveEventEncodingType.Premium1080p), then the *Default1080p* preset specifies a set of six resolution/bit rate pairs, going from 1080p at 3.5 Mbps down to 180p at 200 kbps. For more information, see [System presets](live-event-types-comparison-reference.md#system-presets).
+
+> [!NOTE]
+> If you need to customize the live encoding preset, open a support ticket via Azure portal. Specify the desired table of resolution and bitrates. Verify that there's only one layer at 720p (if requesting a preset for a Standard live encoder) or at 1080p (if requesting a preset for a Premium1080p live encoder), and 6 layers at most.
+
+## Creating live events
+
+### Options
+
+When creating a live event, you can specify the following options:
+
+* You can give the live event a name and a description.
+* Cloud encoding includes Pass-through (no cloud encoding), Standard (up to 720p), or Premium (up to 1080p). For Standard and Premium encoding, you can choose the stretch mode of the encoded video.
+ * None: Strictly respects the output resolution specified in the encoding preset without considering the pixel aspect ratio or display aspect ratio of the input video.
+ * AutoSize: Overrides the output resolution and changes it to match the display aspect ratio of the input, without padding. For example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the value in the preset is overridden, and the output will be at 1280x720, which maintains the input aspect ratio of 16:9.
+ * AutoFit: Pads the output (with either letterbox or pillar box) to honor the output resolution, while ensuring that the active video region in the output has the same aspect ratio as the input. For example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the output will be at 1280x1280, which contains an inner rectangle of 1280x720 at an aspect ratio of 16:9, with letterbox regions 280 pixels high at the top and bottom.
+* Streaming protocol (currently, the RTMP and Smooth Streaming protocols are supported). You can't change the protocol option while the live event or its associated live outputs are running. If you require different protocols, create a separate live event for each streaming protocol.
+* Input ID, which is a globally unique identifier for the live event input stream.
+* Static hostname prefix, which can be none (in which case a random 128-bit hex string will be used), Use live event name, or Use custom name. When you choose to use a custom name, this value is the custom hostname prefix.
+* You can reduce end-to-end latency between the live broadcast and the playback by setting the input key frame interval, which is the duration (in seconds) of each media segment in the HLS output. The value should be a non-zero value in the range of 0.5 to 20 seconds. The value defaults to 2 seconds if neither the input nor the output key frame interval is set. The key frame interval is only allowed on pass-through events.
+* When creating the event, you can set it to autostart. When autostart is set to true, the live event will be started after creation. The billing starts as soon as the live event starts running. You must explicitly call Stop on the live event resource to halt further billing. Alternatively, you can start the event when you're ready to start streaming.
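The AutoFit behavior in the stretch mode options above is straightforward to work out by hand. This sketch (an illustration of the geometry, not Media Services code; the function name is made up) computes the active video rectangle and the padding for a given input and preset resolution:

```python
def autofit(in_w, in_h, out_w, out_h):
    """Compute the inner (active video) rectangle and the padding
    when fitting the input aspect ratio into the preset resolution
    (letterbox on top/bottom or pillar box on left/right)."""
    scale = min(out_w / in_w, out_h / in_h)
    active_w = round(in_w * scale)
    active_h = round(in_h * scale)
    pad_x = (out_w - active_w) // 2   # pillar box width on each side
    pad_y = (out_h - active_h) // 2   # letterbox height on top and bottom
    return active_w, active_h, pad_x, pad_y

# The example from the text: a 1920x1080 input into a 1280x1280 preset
print(autofit(1920, 1080, 1280, 1280))  # (1280, 720, 0, 280)
```

This reproduces the worked example: the 16:9 input scales to an inner 1280x720 rectangle with 280 pixels of padding above and below.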
+
+> [!NOTE]
+> The max framerate is 30 fps for both Standard and Premium encoding.
+
+## StandBy mode
+
+When you create a live event, you can set it to StandBy mode. While the event is in StandBy mode, you can edit the Description, the Static hostname prefix and restrict input and preview access settings. StandBy mode is still a billable mode, but is priced differently than when you start a live stream.
+
+For more information, see [Live event states and billing](live-event-states-billing-concept.md).
+
+* IP restrictions on the ingest and preview. You can define the IP addresses that are allowed to ingest a video to this live event. Allowed IP addresses can be specified as either a single IP address (for example '10.0.0.1'), an IP range using an IP address and a CIDR subnet mask (for example, '10.0.0.1/22'), or an IP range using an IP address and a dotted decimal subnet mask (for example, '10.0.0.1(255.255.252.0)').
+<br/><br/>
+If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set 0.0.0.0/0.<br/>The IP addresses have to be in one of the following formats: IPv4 address with four numbers or CIDR address range.
+<br/><br/>
+If you want to enable certain IPs on your own firewalls or want to constrain inputs to your live events to Azure IP addresses, download a JSON file from [Azure Datacenter IP address ranges](https://www.microsoft.com/download/details.aspx?id=41653). For details about this file, select the **Details** section on the page.
+
+* When creating the event, you can choose to turn on live transcriptions. By default, live transcription is disabled. For more information about live transcription read [Live transcription](live-event-live-transcription-how-to.md).
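The three allowed-IP formats described above map onto standard network notation. A minimal sketch (not the service's own validation code; the helper name is made up) using Python's `ipaddress` module:

```python
import ipaddress

def parse_ip_restriction(rule):
    """Normalize the three documented formats into an ip_network:
    a single address ('10.0.0.1'), CIDR notation ('10.0.0.1/22'),
    or address with a dotted-decimal mask ('10.0.0.1(255.255.252.0)')."""
    if "(" in rule:
        addr, mask = rule.rstrip(")").split("(")
        return ipaddress.ip_network(f"{addr}/{mask}", strict=False)
    # A bare address parses as a /32 network; CIDR parses directly.
    return ipaddress.ip_network(rule, strict=False)

for rule in ["10.0.0.1", "10.0.0.1/22", "10.0.0.1(255.255.252.0)"]:
    print(rule, "->", parse_ip_restriction(rule))
```

Note that the CIDR and dotted-decimal forms describe the same range here: both normalize to `10.0.0.0/22`.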
+
+### Naming rules
+
+* The maximum live event name length is 32 characters.
+* The name should follow this [regex](/dotnet/standard/base-types/regular-expression-language-quick-reference) pattern: `^[a-zA-Z0-9]+(-*[a-zA-Z0-9])*$`.
+
+Also see [Streaming Endpoints naming conventions](streaming-endpoint-concept.md#naming-convention).
+
+> [!TIP]
+> To guarantee uniqueness of your live event name, you can generate a GUID then remove all the hyphens and curly brackets (if any). The string will be unique across all live events and its length is guaranteed to be 32.
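The tip above can be sketched as follows; the helper name is made up, but the check against the documented naming rules is exact:

```python
import re
import uuid

# The naming rule pattern from the section above
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9]+(-*[a-zA-Z0-9])*$")

def unique_live_event_name():
    """Generate a GUID and strip the hyphens, per the tip.
    uuid4().hex is already the 32-character hex form, no hyphens."""
    name = uuid.uuid4().hex
    assert len(name) == 32 and NAME_PATTERN.match(name)
    return name

print(unique_live_event_name())
```

A GUID-derived name always satisfies the pattern because it contains only `0-9` and `a-f` characters.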
+
+## Live event ingest URLs
+
+Once the live event is created, you can get ingest URLs that you'll provide to the live on-premises encoder. The live encoder uses these URLs to input a live stream. For more information, see [Recommended on-premises live encoders](recommended-on-premises-live-encoders.md).
+
+>[!NOTE]
+> As of the 2020-05-01 API release, "vanity" URLs are known as static host names (useStaticHostname: true).
+
+> [!NOTE]
+> For an ingest URL to be static and predictable for use in a hardware encoder setup, set the **useStaticHostname** property to true and set the **accessToken** property to the same GUID on each creation.
+
+### Example LiveEvent and LiveEventInput configuration settings for a static (non-random) ingest RTMP URL
+
+```csharp
+ LiveEvent liveEvent = new LiveEvent(
+ location: mediaService.Location,
+ description: "Sample LiveEvent from .NET SDK sample",
+ // Set useStaticHostname to true to make the ingest and preview URL host name the same.
+ // This can slow things down a bit.
+ useStaticHostname: true,
+
+ // 1) Set up the input settings for the Live event...
+ input: new LiveEventInput(
+ streamingProtocol: LiveEventInputProtocol.RTMP, // options are RTMP or Smooth Streaming ingest format.
+ // This sets a static access token for use on the ingest path.
+ // Combining this with useStaticHostname:true will give you the same ingest URL on every creation.
+ // This is helpful when you only want to enter the URL into a single encoder one time for this Live Event name
+ accessToken: "acf7b6ef-8a37-425f-b8fc-51c2d6a5a86a", // Use this value when you want to make sure the ingest URL is static and always the same. If omitted, the service will generate a random GUID value.
+ accessControl: liveEventInputAccess, // controls the IP restriction for the source encoder.
+ keyFrameIntervalDuration: "PT2S" // Set this to match the ingest encoder's settings
+ ),
+```
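The `keyFrameIntervalDuration` value in the sample above is an ISO 8601 duration string. This small illustrative helper (the function name is made up; the 0.5 to 20 second range comes from the input key frame interval setting described earlier) formats a key frame interval given in seconds:

```python
def key_frame_interval_duration(seconds):
    """Format a key frame interval as an ISO 8601 duration string,
    the format keyFrameIntervalDuration expects (for example, 'PT2S')."""
    if not 0.5 <= seconds <= 20:
        raise ValueError("key frame interval should be between 0.5 and 20 seconds")
    # Emit a whole-number form when the value is integral ('PT2S', not 'PT2.0S')
    value = int(seconds) if float(seconds).is_integer() else seconds
    return f"PT{value}S"

print(key_frame_interval_duration(2))    # PT2S
print(key_frame_interval_duration(0.5))  # PT0.5S
```

Matching this value to the key frame interval configured in your encoder (2 seconds in the OBS steps earlier) keeps segment boundaries aligned.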
+
+* Non static hostname
+
+ A non-static hostname is the default mode in Media Services v3 when creating a **LiveEvent**. You can get the live event allocated slightly more quickly, but the ingest URL that you need for your live encoding hardware or software will be randomized. The URL will change if you stop and restart the live event. Non-static hostnames are only useful in scenarios where an end user wants to stream using an app that needs to get a live event very quickly and having a dynamic ingest URL isn't a problem.
+
+ If a client app doesn't need to pre-generate an ingest URL before the live event is created, let Media Services auto-generate the Access Token for the live event.
+
+* Static Hostnames
+
+ Static hostname mode is preferred by most operators who want to pre-configure their live encoding hardware or software with an RTMP ingest URL that never changes on creation or stop/start of a specific live event. These operators want a predictable RTMP ingest URL that doesn't change over time. This is also useful when you need to push a static RTMP ingest URL into the configuration settings of a hardware encoding device like the Blackmagic ATEM Mini Pro, or similar hardware encoding and production tools.
+
+ > [!NOTE]
+ > In the Azure portal, the static hostname URL is called "*Static hostname prefix*".
+
+ To specify this mode in the API, set `useStaticHostname` to `true` at creation time (the default is `false`). When `useStaticHostname` is set to true, the `hostnamePrefix` specifies the first part of the hostname assigned to the live event preview and ingest endpoints. The final hostname is a combination of this prefix, the Media Services account name, and a short code for the Azure Media Services data center.
+
+ To avoid a random token in the URL, you also need to pass your own access token (`LiveEventInput.accessToken`) at creation time. The access token has to be a valid GUID string (with or without the hyphens). Once the mode is set, it can't be updated.
+
+ The access token needs to be unique in your Azure region and Media Services account. If your app needs to use a static hostname ingest URL, it's recommended to always create a fresh GUID for use with a specific combination of region, Media Services account, and live event.
+
+ Use the following APIs to enable the static hostname URL and set the access token to a valid GUID (for example, `"accessToken": "1fce2e4b-fb15-4718-8adc-68c6eb4c26a7"`).
+
+ |Language|Enable static hostname URL|Set access token|
+ ||||
+ |REST|[properties.useStaticHostname](/rest/api/media/liveevents/create#liveevent)|[LiveEventInput.useStaticHostname](/rest/api/media/liveevents/create#liveeventinput)|
+ |CLI|[--use-static-hostname](/cli/azure/ams/live-event#az-ams-live-event-create)|[--access-token](/cli/azure/ams/live-event#optional-parameters)|
+ |.NET|[LiveEvent.useStaticHostname](/dotnet/api/microsoft.azure.management.media.models.liveevent.usestatichostname?view=azure-dotnet&preserve-view=true#Microsoft_Azure_Management_Media_Models_LiveEvent_UseStaticHostname)|[LiveEventInput.AccessToken](/dotnet/api/microsoft.azure.management.media.models.liveeventinput.accesstoken#Microsoft_Azure_Management_Media_Models_LiveEventInput_AccessToken)|
+
+### Live ingest URL naming rules
+
+* The *random* string below is a 128-bit hex number (which is composed of 32 characters of 0-9 a-f).
+* *your access token*: The valid GUID string you set when using the static hostname setting. For example, `"1fce2e4b-fb15-4718-8adc-68c6eb4c26a7"`.
+* *stream name*: Indicates the stream name for a specific connection. The stream name value is usually added by the live encoder you use. You can configure the live encoder to use any name to describe the connection, for example: "video1_audio1", "video2_audio1", "stream".
+
+#### Non-static hostname ingest URL
+
+##### RTMP
+
+`rtmp://<random 128bit hex string>.channel.media.azure.net:1935/live/<auto-generated access token>/<stream name>`<br/>
+`rtmp://<random 128bit hex string>.channel.media.azure.net:1936/live/<auto-generated access token>/<stream name>`<br/>
+`rtmps://<random 128bit hex string>.channel.media.azure.net:2935/live/<auto-generated access token>/<stream name>`<br/>
+`rtmps://<random 128bit hex string>.channel.media.azure.net:2936/live/<auto-generated access token>/<stream name>`<br/>
+
+##### Smooth streaming
+
+`http://<random 128bit hex string>.channel.media.azure.net/<auto-generated access token>/ingest.isml/streams(<stream name>)`<br/>
+`https://<random 128bit hex string>.channel.media.azure.net/<auto-generated access token>/ingest.isml/streams(<stream name>)`<br/>
+
+#### Static hostname ingest URL
+
+In the following paths, `<live-event-name>` means either the name given to the event or the custom name used in the creation of the live event.
+
+##### RTMP
+
+`rtmp://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net:1935/live/<your access token>/<stream name>`<br/>
+`rtmp://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net:1936/live/<your access token>/<stream name>`<br/>
+`rtmps://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net:2935/live/<your access token>/<stream name>`<br/>
+`rtmps://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net:2936/live/<your access token>/<stream name>`<br/>
+
+##### Smooth streaming
+
+`http://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net/<your access token>/ingest.isml/streams(<stream name>)`<br/>
+`https://<live event name>-<ams account name>-<region abbrev name>.channel.media.azure.net/<your access token>/ingest.isml/streams(<stream name>)`<br/>
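Putting the static hostname pattern together, a hypothetical helper can assemble the RTMP ingest URL from its parts. The account, event, and region values below are made up for illustration; the region short code is assigned by the service:

```python
def static_rtmp_ingest_url(live_event_name, account_name, region_abbrev,
                           access_token, stream_name, secure=True):
    """Assemble a static-hostname RTMP(S) ingest URL from the pattern
    above. All argument values here are illustrative; the region
    abbreviation is assigned by Media Services."""
    scheme, port = ("rtmps", 2935) if secure else ("rtmp", 1935)
    host = f"{live_event_name}-{account_name}-{region_abbrev}.channel.media.azure.net"
    return f"{scheme}://{host}:{port}/live/{access_token}/{stream_name}"

print(static_rtmp_ingest_url(
    "myevent", "myamsaccount", "usw22",
    "1fce2e4b-fb15-4718-8adc-68c6eb4c26a7", "stream"))
```

Because every component is fixed at creation time, this URL stays the same across stop/start cycles of the live event.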
+
+## Live event preview URL
+
+Once the live event starts receiving the contribution feed, you can use its preview endpoint to preview and validate that you're receiving the live stream before further publishing. After you've checked that the preview stream is good, you can use the live event to make the live stream available for delivery through one or more (pre-created) Streaming Endpoints. To accomplish this, create a new [live output](/rest/api/media/liveoutputs) on the live event.
+
+> [!IMPORTANT]
+> Make sure that the video is flowing to the preview URL before continuing!
+
+## Live event long-running operations
+
+For details, see [long-running operations](media-services-apis-overview.md#long-running-operations).
+
+## Live outputs
+
+Once you have the stream flowing into the live event, you can begin the streaming event by creating an [Asset](/rest/api/media/assets), [live output](/rest/api/media/liveoutputs), and [Streaming Locator](/rest/api/media/streaminglocators). The live output will archive the stream and make it available to viewers through the [Streaming Endpoint](/rest/api/media/streamingendpoints).
+
+For detailed information about live outputs, see [Using a cloud DVR](live-event-cloud-dvr-time-how-to.md).
+
+## Live event output questions
+
+See the [live event output questions](questions-collection.md#live-streaming) article.
media-services Live Event States Billing Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-states-billing-concept.md
+
+ Title: Live event states and billing in Azure Media Services
+description: This topic gives an overview of Azure Media Services live event states and billing.
+
+ Last updated : 10/26/2020
+# Live event states and billing
+
+In Azure Media Services, a live event begins billing as soon as its state transitions to **Running** or **StandBy**. You will be billed even if there is no video flowing through the service. To stop the live event from billing, you have to stop the live event. Live Transcription is billed the same way as the live event.
+
+When **LiveEventEncodingType** on your [live event](/rest/api/media/liveevents) is set to Standard or Premium1080p, Media Services auto shuts off any live event that is still in the **Running** state 12 hours after the input feed is lost, as long as there are no **live outputs** running. However, you will still be billed for the time the live event was in the **Running** state.
+
+> [!NOTE]
+> Pass-through live events are not automatically shut off and must be explicitly stopped through the API to avoid excessive billing.
+
+## States
+
+The live event can be in one of the following states.
+
+|State|Description|
+|||
+|**Stopped**| This is the initial state of the live event after creation (unless autostart was set to true.) No billing occurs in this state. No input can be received by the live event. |
+|**Starting**| The live event is starting and resources are being allocated. No billing occurs in this state. If an error occurs, the live event returns to the Stopped state.|
+| **Allocating** | The allocate action was called on the live event and resources are being provisioned. Once this operation completes successfully, the live event will transition to the StandBy state.
+|**StandBy**| The live event resources have been provisioned and the event is ready to start. Billing occurs in this state. Most properties can still be updated; however, ingest or streaming isn't allowed in this state.
+|**Running**| The live event resources have been allocated, ingest and preview URLs have been generated, and it is capable of receiving live streams. At this point, billing is active. You must explicitly call Stop on the live event resource to halt further billing.|
+|**Stopping**| The live event is being stopped and resources are being de-provisioned. No billing occurs in this transient state. |
+|**Deleting**| The live event is being deleted. No billing occurs in this transient state. |
+
+You can choose to enable live transcriptions when you create the live event. If you do so, you will be billed for Live Transcriptions whenever the live event is in the **Running** state. Note that you will be billed even if there is no audio flowing through the live event.
+
+## Next steps
+
+- [Live streaming overview](stream-live-streaming-concept.md)
+- [Live streaming tutorial](stream-live-tutorial-with-api.md)
media-services Live Event Types Comparison Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-types-comparison-reference.md
+
+ Title: Azure Media Services LiveEvent types
+description: In Azure Media Services, a live event can be set to either a *pass-through* or *live encoding*. This article shows a detailed table that compares Live Event types.
+
+ Last updated : 08/31/2020
+# Live Event types comparison
+
+In Azure Media Services, a [Live Event](/rest/api/media/liveevents) can be set to either a *pass-through* (an on-premises live encoder sends a multiple bitrate stream) or *live encoding* (an on-premises live encoder sends a single bitrate stream).
+
+This article compares features of the live event types.
+
+## Types comparison
+
+The following table compares features of the Live Event types. The types are set during creation using [LiveEventEncodingType](/rest/api/media/liveevents/create#liveeventencodingtype):
+
+* **LiveEventEncodingType.None** - An on-premises live encoder sends a multiple bitrate stream. The ingested stream passes through the Live Event without any further processing. Also referred to as a pass-through Live Event.
+* **LiveEventEncodingType.Standard** - An on-premises live encoder sends a single bitrate stream to the Live Event and Media Services creates multiple bitrate streams. If the contribution feed is of 720p or higher resolution, the **Default720p** preset will encode a set of 6 resolution/bitrate pairs (details follow later in the article).
+* **LiveEventEncodingType.Premium1080p** - An on-premises live encoder sends a single bitrate stream to the Live Event and Media Services creates multiple bitrate streams. The Default1080p preset specifies the output set of resolution/bitrate pairs (details follow later in the article).
+
+| Feature | Pass-through Live Event | Standard or Premium1080p Live Event |
+| | | |
+| Single bitrate input is encoded into multiple bitrates in the cloud |No |Yes |
+| Maximum video resolution for contribution feed |4K (4096x2160 at 60 frames/sec) |1080p (1920x1088 at 30 frames/sec)|
+| Recommended maximum layers in contribution feed|Up to 12|One audio|
+| Maximum layers in output| Same as input|Up to 6 (see System Presets below)|
+| Maximum aggregate bandwidth of contribution feed|60 Mbps|N/A|
+| Maximum bitrate for a single layer in the contribution |20 Mbps|20 Mbps|
+| Support for multiple language audio tracks|Yes|No|
+| Supported input video codecs |H.264/AVC and H.265/HEVC|H.264/AVC|
+| Supported output video codecs|Same as input|H.264/AVC|
+| Supported video bit depth, input, and output|Up to 10-bit including HDR 10/HLG|8-bit|
+| Supported input audio codecs|AAC-LC, HE-AAC v1, HE-AAC v2|AAC-LC, HE-AAC v1, HE-AAC v2|
+| Supported output audio codecs|Same as input|AAC-LC|
+| Maximum video resolution of output video|Same as input|Standard - 720p, Premium1080p - 1080p|
+| Maximum frame rate of input video|60 frames/second|Standard or Premium1080p - 30 frames/second|
+| Input protocols|RTMP, fragmented-MP4 (Smooth Streaming)|RTMP, fragmented-MP4 (Smooth Streaming)|
+| Price|See the [pricing page](https://azure.microsoft.com/pricing/details/media-services/) and click on "Live Video" tab|See the [pricing page](https://azure.microsoft.com/pricing/details/media-services/) and click on "Live Video" tab|
+| Maximum run time| 24 hrs x 365 days, live linear | 24 hrs x 365 days, live linear (preview)|
+| Ability to pass through embedded CEA 608/708 captions data|Yes|Yes|
+| Ability to turn on Live Transcription|Yes|Yes|
+| Support for inserting slates|No|No|
+| Support for ad signaling via API| No|No|
+| Support for ad signaling via SCTE-35 in-band messages|Yes|Yes|
+| Ability to recover from brief stalls in contribution feed|Yes|Partial|
+| Support for non-uniform input GOPs|Yes|No - input must have a fixed GOP duration|
+| Support for variable frame rate input|Yes|No - input must be at a fixed frame rate. Minor variations are tolerated, for example, during high-motion scenes. But the contribution feed cannot drop the frame rate (for example, to 15 frames/second).|
+| Auto-shutoff of Live Event when input feed is lost|No|After 12 hours, if there is no LiveOutput running|
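
A few of these rows drive the choice between the types. As an illustrative sketch (a hypothetical helper, not part of any Azure SDK), using just three of the constraints in the table above:

```python
# Illustrative helper based on three rows of the comparison table above:
# multi-language audio, maximum input resolution, and input codec support.
# Hypothetical code, not part of the Azure Media Services SDK.
def requires_pass_through(multi_language_audio, input_height, input_codec):
    """Return True if only a pass-through Live Event supports the feed."""
    if multi_language_audio:           # Standard/Premium1080p: no multi-audio
        return True
    if input_height > 1088:            # live encoding input tops out at 1080p
        return True
    if input_codec == "H.265/HEVC":    # live encoding accepts H.264/AVC only
        return True
    return False
```

For example, a 4K HEVC contribution feed, or one carrying several audio languages, would need a pass-through Live Event.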
+
+## System presets
+
+The resolutions and bitrates contained in the output from the live encoder are determined by the [presetName](/rest/api/media/liveevents/create#liveeventencoding). If using a **Standard** live encoder (LiveEventEncodingType.Standard), then the *Default720p* preset specifies a set of 6 resolution/bitrate pairs described below. Otherwise, if using a **Premium1080p** live encoder (LiveEventEncodingType.Premium1080p), then the *Default1080p* preset specifies the output set of resolution/bitrate pairs.
+
+> [!NOTE]
+> You cannot apply the Default1080p preset to a Live Event if it has been set up for Standard live encoding - you will get an error. You will also get an error if you try to apply the Default720p preset to a Premium1080p live encoder.
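
The constraint in the note can be sketched as a simple validation (a hypothetical helper for illustration, not an Azure SDK call):

```python
# Mirrors the note above: each live encoding type accepts only its own
# default preset; pass-through ("None") events take no encoding preset.
# Hypothetical code for illustration only.
VALID_PRESETS = {
    "Standard": "Default720p",
    "Premium1080p": "Default1080p",
}

def validate_preset(encoding_type, preset_name):
    expected = VALID_PRESETS.get(encoding_type)
    if expected is None:
        raise ValueError(f"{encoding_type} live events take no encoding preset")
    if preset_name != expected:
        raise ValueError(f"{encoding_type} requires {expected}, got {preset_name}")
    return True
```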
+
+### Output Video Streams for Default720p
+
+If the contribution feed is of 720p or higher resolution, the **Default720p** preset will encode the feed into the following 6 layers. In the table below, Bitrate is in kbps, MaxFPS represents the maximum allowed frame rate (in frames/second), and Profile represents the H.264 profile used.
+
+| Bitrate | Width | Height | MaxFPS | Profile |
+| | | | | |
+| 3500 |1280 |720 |30 |High |
+| 2200 |960 |540 |30 |High |
+| 1350 |704 |396 |30 |High |
+| 850 |512 |288 |30 |High |
+| 550 |384 |216 |30 |High |
+| 200 |340 |192 |30 |High |
+
+> [!NOTE]
+> If you need to customize the live encoding preset, please open a support ticket via the Azure portal. Specify the desired table of video resolutions and bitrates. Customization of the audio encoding bitrate is not supported. Verify that there is only one layer at 720p, and at most 6 layers. Also, specify that you are requesting a preset.
+
+### Output Video Streams for Default1080p
+
+If the contribution feed is of 1080p resolution, the **Default1080p** preset will encode the feed into the following 6 layers.
+
+| Bitrate | Width | Height | MaxFPS | Profile |
+| | | | | |
+| 5500 |1920 |1080 |30 |High |
+| 3000 |1280 |720 |30 |High |
+| 1600 |960 |540 |30 |High |
+| 800 |640 |360 |30 |High |
+| 400 |480 |270 |30 |High |
+| 200 |320 |180 |30 |High |
+
+> [!NOTE]
+> If you need to customize the live encoding preset, please open a support ticket via Azure Portal. You should specify the desired table of resolution and bitrates. Verify that there is only one layer at 1080p, and at most 6 layers. Also, specify that you are requesting a preset for a Premium1080p live encoder. The specific values of the bitrates and resolutions may be adjusted over time.
+
+### Output Audio Stream for Default720p and Default1080p
+
+For both *Default720p* and *Default1080p* presets, audio is encoded to stereo AAC-LC at 128 kbps. The sampling rate follows that of the audio track in the contribution feed.
+
+## Implicit properties of the live encoder
+
+The previous section describes the properties of the live encoder that can be controlled explicitly, via the preset - such as the number of layers, resolutions, and bitrates. This section clarifies the implicit properties.
+
+### Group of pictures (GOP) duration
+
+The live encoder follows the [GOP](https://en.wikipedia.org/wiki/Group_of_pictures) structure of the contribution feed - which means the output layers will have the same GOP duration. Hence, it is recommended that you configure the on-premises encoder to produce a contribution feed that has a fixed GOP duration (typically 2 seconds). This ensures that the outgoing HLS and MPEG-DASH streams from the service also have fixed GOP durations. Small variations in GOP duration are likely to be tolerated by most devices.
+
+### Frame rate
+
+The live encoder also follows the durations of the individual video frames in the contribution feed - which means the output layers will have frames with the same durations. Hence, it is recommended that you configure the on-premises encoder to produce a contribution feed that has a fixed frame rate (at most 30 frames/second). This ensures that the outgoing HLS and MPEG-DASH streams from the service also have fixed frame rates. Small variations in frame rate may be tolerated by most devices, but there is no guarantee that the live encoder will produce an output that plays correctly. Your on-premises live encoder should not drop frames (for example, under low-battery conditions) or vary the frame rate in any way.
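
As an illustrative sketch (the jitter tolerance is an arbitrary assumption, not a documented service limit), a contribution feed's frame timing could be sanity-checked before streaming:

```python
# Hypothetical pre-flight check for a contribution feed: the frame rate must
# be effectively fixed (only small jitter) and at most 30 frames/second for
# Standard or Premium1080p live encoding. The jitter tolerance is arbitrary.
def check_contribution_frame_rate(frame_durations_ms, max_fps=30, jitter_ms=1.0):
    avg = sum(frame_durations_ms) / len(frame_durations_ms)
    if 1000.0 / avg > max_fps:
        return False  # average frame rate too high for live encoding
    return all(abs(d - avg) <= jitter_ms for d in frame_durations_ms)
```

A feed that drops to a lower frame rate mid-stream (large deviations from the average frame duration) would fail this check, just as it would violate the fixed-frame-rate requirement described above.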
+
+### Resolution of contribution feed and output layers
+
+The live encoder is configured to avoid upconverting the contribution feed. As a result, the maximum resolution of the output layers will not exceed that of the contribution feed.
+
+For example, if you send a contribution feed at 720p to a Live Event configured for Default1080p live encoding, the output will only have 5 layers, starting with 720p at 3 Mbps and going down to 180p at 200 kbps. Or if you send a contribution feed at 360p into a Live Event configured for Standard live encoding, the output will contain 3 layers (at resolutions of 288p, 216p, and 192p). In the degenerate case, if you send a contribution feed of, say, 160x90 pixels to a Standard live encoder, the output will contain one layer at 160x90 resolution at the same bitrate as that of the contribution feed.
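
Those examples follow mechanically from the preset tables; the selection can be sketched as follows (hypothetical code, the service's actual implementation may differ):

```python
# Layer (height, bitrate-kbps) pairs copied from the Default720p and
# Default1080p preset tables above.
PRESETS = {
    "Default720p": [(720, 3500), (540, 2200), (396, 1350),
                    (288, 850), (216, 550), (192, 200)],
    "Default1080p": [(1080, 5500), (720, 3000), (540, 1600),
                     (360, 800), (270, 400), (180, 200)],
}

def output_layers(preset_name, feed_height, feed_bitrate_kbps):
    """Drop preset layers taller than the contribution feed (no upconversion)."""
    layers = [(h, b) for h, b in PRESETS[preset_name] if h <= feed_height]
    if not layers:
        # Degenerate case: a single layer at the feed's resolution and bitrate.
        layers = [(feed_height, feed_bitrate_kbps)]
    return layers
```

A 720p feed into `Default1080p` yields five layers, topping out at (720, 3000); a 360p feed into `Default720p` yields the 288p, 216p, and 192p layers.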
+
+### Bitrate of contribution feed and output layers
+
+The live encoder is configured to honor the bitrate settings in the preset, irrespective of the bitrate of the contribution feed. As a result, the bitrate of the output layers may exceed that of the contribution feed. For example, if you send in a contribution feed at a resolution of 720p at 1 Mbps, the output layers will remain the same as in the [table](live-event-types-comparison-reference.md#output-video-streams-for-default720p) above.
+
+## Next steps
+
+[Live streaming overview](stream-live-streaming-concept.md)
media-services Live Event Wirecast Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-wirecast-quickstart.md
+
+ Title: Create an Azure Media Services live stream
+description: Learn how to create an Azure Media Services live stream by using the portal and Wirecast
+ Last updated : 08/31/2020
+# Create an Azure Media Services live stream
++
+This quickstart will help you create an Azure Media Services live stream by using the Azure portal and Telestream Wirecast. It assumes that you have an Azure subscription and have created a Media Services account.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Sign in to the Azure portal
+
+Open your web browser, and go to the [Microsoft Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+
+In this quickstart, we'll cover:
+
+- Setting up an on-premises encoder with a free trial of Telestream Wirecast.
+- Setting up a live stream.
+- Setting up live stream outputs.
+- Running a default streaming endpoint.
+- Using Azure Media Player to view the live stream and on-demand output.
+
+To keep things simple, we'll use an encoding preset for Azure Media Services in Wirecast, pass-through cloud encoding, and RTMP.
+
+## Set up an on-premises encoder by using Wirecast
+
+1. Download and install Wirecast for your operating system on the [Telestream website](https://www.telestream.net).
+1. Start the application and use your favorite email address to register the product. Keep the application open.
+1. In the email that you receive, verify your email address. Then the application will start the free trial.
+1. Recommended: Watch the video tutorial in the opening application screen.
+
+## Set up an Azure Media Services live stream
+
+1. Go to the Azure Media Services account within the portal, and then select **Live streaming** from the **Media Services** listing.
+
+ ![Live streaming link](media/live-events-wirecast-quickstart/select-live-streaming.png)
+1. Select **Add live event** to create a new live streaming event.
+
+ ![Add live event icon](media/live-events-wirecast-quickstart/add-live-event.png)
+1. Enter a name for your new event, such as *TestLiveEvent*, in the **Live event name** box.
+
+ ![Live event name box](media/live-events-wirecast-quickstart/live-event-name.png)
+1. Enter an optional description of the event in the **Description** box.
+1. Select the **Pass-through - no cloud encoding** option.
+
+ ![Cloud encoding option](media/live-events-wirecast-quickstart/cloud-encoding.png)
+1. Select the **RTMP** option.
+1. Make sure that the **No** option is selected for **Start live event**, to avoid being billed for the live event before it's ready. (Billing will begin when the live event is started.)
+
+ ![Start live event option](media/live-events-wirecast-quickstart/start-live-event-no.png)
+1. Select the **Review + create** button to review the settings.
+1. Select the **Create** button to create the live event. You're then returned to the live event listing.
+1. Select the link to the live event that you just created. Notice that your event is stopped.
+1. Keep this page open in your browser. We'll come back to it later.
+
+## Set up a live stream by using Wirecast Studio
+
+1. In the Wirecast application, select **Create Empty Document** from the main menu, and then select **Continue**.
+
+ ![Wirecast start screen](media/live-events-wirecast-quickstart/open-empty-document.png)
+1. Hover over the first layer in the **Wirecast layers** area. Select the **Add** icon that appears, and select the video input that you want to stream.
+
+ ![Wirecast add icon](media/live-events-wirecast-quickstart/add-icon.png)
+
+ The **Master Layer 1** dialog box opens.
+1. Select **Video Capture** from the menu, and then select the camera that you want to use.
+
+ ![Preview area for video capture](media/live-events-wirecast-quickstart/video-shot-selection.png)
+
+ The view from the camera appears in the preview area.
+1. Hover over the second layer in the **Wirecast layers** area. Select the **Add** icon that appears, and select the audio input that you want to stream. The **Master Layer 2** dialog box opens.
+1. Select **Audio capture** from the menu, and then select the audio input that you want to use.
+
+ ![Inputs for audio capture](media/live-events-wirecast-quickstart/audio-shot-select.png)
+1. From the main menu, select **Output settings**. The **Select an Output Destination** dialog box appears.
+1. Select **Azure Media Services** from the **Destination** drop-down list. The output setting for Azure Media Services automatically populates *most* of the output settings.
+
+ ![Wirecast output settings screen](media/live-events-wirecast-quickstart/azure-media-services.png)
++
+In the next procedure, you'll go back to Azure Media Services in your browser to copy the input URL to enter into the output settings:
+
+1. On the Azure Media Services page of the portal, select **Start** to start the live stream event. (Billing starts now.)
+
+ ![Start icon](media/live-events-wirecast-quickstart/start.png)
+2. Set the **Secure/Not secure** toggle to **Not secure**. This step sets the protocol to RTMP instead of RTMPS.
+3. In the **Input URL** box, copy the URL to your clipboard.
+
+ ![Input URL](media/live-events-wirecast-quickstart/input-url.png)
+4. Switch to the Wirecast application and paste the **Input URL** into the **Address** box in the output settings.
+
+ ![Wirecast input URL](media/live-events-wirecast-quickstart/input-url-wirecast.png)
+5. Select **OK**.
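
The **Not secure** toggle in step 2 switches the ingest URL from RTMPS to RTMP. As a rough sketch (the port numbers are assumptions about the service defaults, and the URL is made up; check the input URL your own live event shows):

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical helper that rewrites an RTMPS ingest URL to plain RTMP, like
# the "Not secure" toggle in the portal. The ports (2935 for RTMPS, 1935 for
# RTMP) are assumptions; verify against your own live event's input URL.
def to_insecure_rtmp(url):
    parts = urlparse(url)
    return urlunparse(("rtmp", f"{parts.hostname}:1935", parts.path, "", "", ""))

print(to_insecure_rtmp("rtmps://example-amse.channel.media.azure.net:2935/live/abc123"))
```

The path portion of the URL (`/live/<access token>` in this made-up example) is carried over unchanged.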
+
+## Set up outputs
+
+This part will set up your outputs and enable you to save a recording of your live stream.
+
+> [!NOTE]
+> For you to stream this output, the streaming endpoint must be running. See the later [Run the default streaming endpoint](#run-the-default-streaming-endpoint) section.
+
+1. Select the **Create outputs** link below the **Outputs** video viewer.
+1. If you like, edit the name of the output in the **Name** box to something more user friendly so it's easy to find later.
+
+ ![Output name box](media/live-events-wirecast-quickstart/output-name.png)
+1. Leave all the rest of the boxes alone for now.
+1. Select **Next** to add a streaming locator.
+1. Change the name of the locator to something more user friendly, if you want.
+
+ ![Locator name box](media/live-events-wirecast-quickstart/live-event-locator.png)
+1. Leave everything else on this screen alone for now.
+1. Select **Create**.
+
+## Start the broadcast
+
+1. In Wirecast, select **Output** > **Start / Stop Broadcasting** > **Start Azure Media Services**.
+
+ ![Start broadcast menu items](media/live-events-wirecast-quickstart/start-broadcast.png)
+
+ When the stream has been sent to the live event, the **Live** window in Wirecast appears in the video player on the live event page in Azure Media Services.
+
+1. Select the **Go** button under the preview window to start broadcasting the video and audio that you selected for the Wirecast layers.
+
+ ![Wirecast Go button](media/live-events-wirecast-quickstart/go-button.png)
+
+ > [!TIP]
+ > If there's an error, try reloading the player by selecting the **Reload player** link above the player.
+
+## Run the default streaming endpoint
+
+1. Select **Streaming endpoints** in the Media Services listing.
+
+ ![Streaming endpoints menu item](media/live-events-wirecast-quickstart/streaming-endpoints.png)
+1. If the default streaming endpoint status is stopped, select it. This step takes you to the page for that endpoint.
+1. Select **Start**.
+
+ ![Start button for the streaming endpoint](media/live-events-wirecast-quickstart/start.png)
+
+## Play the output broadcast by using Azure Media Player
+
+1. Copy the streaming URL under the **Output** video player.
+1. In a web browser, open the [Azure Media Player demo](https://ampdemo.azureedge.net/azuremediaplayer.html).
+1. Paste the streaming URL into the **URL** box of Azure Media Player.
+1. Select the **Update Player** button.
+1. Select the **Play** icon on the video to see your live stream.
+
+## Stop the broadcast
+
+When you think you've streamed enough content, stop the broadcast.
+
+1. In Wirecast, select the **Broadcast** button. This step stops the broadcast from Wirecast.
+1. In the portal, select **Stop**. You then get a warning message that the live stream will stop but the output will now become an on-demand asset.
+1. Select **Stop** in the warning message. Azure Media Player now shows an error, because the live stream is no longer available.
+
+## Play the on-demand output by using Azure Media Player
+
+The output that you created is now available for on-demand streaming as long as your streaming endpoint is running.
+
+1. Go to the Media Services listing and select **Assets**.
+1. Find the event output that you created earlier and select the link to the asset. The asset output page opens.
+1. Copy the streaming URL under the video player for the asset.
+1. Return to Azure Media Player in the browser and paste the streaming URL into the URL box.
+1. Select **Update Player**.
+1. Select the **Play** icon on the video to view the on-demand asset.
+
+## Clean up resources
+
+> [!IMPORTANT]
+> Stop the services! After you've completed the steps in this quickstart, be sure to stop the live event and the streaming endpoint, or you'll be billed for the time they remain running. To stop the live event, see the [Stop the broadcast](#stop-the-broadcast) procedure, steps 2 and 3.
+
+To stop the streaming endpoint:
+
+1. From the Media Services listing, select **Streaming endpoints**.
+2. Select the default streaming endpoint that you started earlier. This step opens the endpoint's page.
+3. Select **Stop**.
+
+> [!TIP]
+> If you don't want to keep the assets from this event, be sure to delete them so you're not billed for storage.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Live events and live outputs in Media Services](./live-event-outputs-concept.md)
media-services Media Reserved Units Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-reserved-units-cli-how-to.md
You are charged based on the number of minutes the Media Reserved Units are provisioned.
## See also
-* [Quotas and limits](limits-quotas-constraints.md)
+* [Quotas and limits](limits-quotas-constraints-reference.md)
media-services Migrate V 2 V 3 Migration Benefits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-benefits.md
There have been significant improvements to Media Services with V3.
| For file-based job processing, you can use an HTTP(S) URL as the input. | You don't need to have content already stored in Azure, nor do you need to create input Assets. |
| **Live events** ||
| Premium 1080p Live Events | New Live Event SKU allows customers to get cloud encoding with output up to 1080p in resolution. |
-| New [low latency](live-event-latency.md) live streaming support on Live Events. | This allows users to watch live events closer to real time than if they didn't have this setting enabled. |
+| New [low latency](live-event-latency-reference.md) live streaming support on Live Events. | This allows users to watch live events closer to real time than if they didn't have this setting enabled. |
| Live Event Preview supports [dynamic packaging](encode-dynamic-packaging-concept.md) and dynamic encryption. | This enables content protection on preview and DASH and HLS packaging. |
| Live Outputs replace Programs | Live output is simpler to use than the program entity in the v2 APIs. |
| RTMP ingest for Live Events is improved, with support for more encoders | Increases stability and provides source encoder flexibility. |
| Live Events can stream 24x7 | You can host a Live Event and keep your audience engaged for longer periods. |
| Live transcription on Live Events | Live transcription allows customers to automatically transcribe spoken language into text in real time during the live event broadcast. This significantly improves accessibility of live events. |
-| [Stand-by mode](live-events-outputs-concept.md#standby-mode) on Live Events | Live events that are in standby state are less costly than running live events. This allows customers to maintain a set of live events that are ready to start within seconds at a lower cost than maintaining a set of running live events. Reduced pricing for standby live events will become effective in February 2021 for most regions, with the rest to follow in April 2021.
+| [Stand-by mode](live-event-outputs-concept.md#standby-mode) on Live Events | Live events that are in standby state are less costly than running live events. This allows customers to maintain a set of live events that are ready to start within seconds at a lower cost than maintaining a set of running live events. Reduced pricing for standby live events will become effective in February 2021 for most regions, with the rest to follow in April 2021.
|**Content protection** ||
| [Content protection](drm-content-key-policy-concept.md) supports multi-key features. | Customers can now use multiple content encryption keys on their Streaming locators. |
| **Monitoring** | |
media-services Migrate V 2 V 3 Migration Scenario Based Live Streaming https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-live-streaming.md
The Azure portal now supports live event set up and management. You are encoura
Test the new way of delivering Live events with Media Services before moving your content from V2 to V3. Here are the V3 features to work with and consider for migration.

-- Create a new v3 [Live Event](live-events-outputs-concept.md#live-events) for encoding. You can enable [1080P and 720P encoding presets](live-event-types-comparison.md#system-presets).
-- Use the [Live Output](live-events-outputs-concept.md#live-outputs) entity instead of Programs
+- Create a new v3 [Live Event](live-event-outputs-concept.md#live-events) for encoding. You can enable [1080P and 720P encoding presets](live-event-types-comparison-reference.md#system-presets).
+- Use the [Live Output](live-event-outputs-concept.md#live-outputs) entity instead of Programs
- Create [streaming locators](streaming-locators-concept.md).
- Consider your need for [HLS and DASH](encode-dynamic-packaging-concept.md) live streaming.
-- If you require fast-start of live events explore the new [Standby mode](live-events-outputs-concept.md#standby-mode) features.
-- If you want to transcribe your live event while it is happening, explore the new [live transcription](live-transcription.md) feature.
+- If you require fast-start of live events explore the new [Standby mode](live-event-outputs-concept.md#standby-mode) features.
+- If you want to transcribe your live event while it is happening, explore the new [live transcription](live-event-live-transcription-how-to.md) feature.
- Create 24x7x365 live events in v3 if you need a longer streaming duration.
- Use [Event Grid](monitoring/monitor-events-portal-how-to.md) to monitor your live events.
See Live events concepts, tutorials and how to guides below for specific steps.
### Concepts

-- [Live streaming with Azure Media Services v3](live-streaming-overview.md)
-- [Live events and live outputs in Media Services](live-events-outputs-concept.md)
+- [Live streaming with Azure Media Services v3](stream-live-streaming-concept.md)
+- [Live events and live outputs in Media Services](live-event-outputs-concept.md)
- [Verified on-premises live streaming encoders](recommended-on-premises-live-encoders.md)
-- [Use time-shifting and Live Outputs to create on-demand video playback](live-event-cloud-dvr.md)
-- [Live-transcription (preview)](live-transcription.md)
-- [Live Event types comparison](live-event-types-comparison.md)
-- [Live event states and billing](live-event-states-billing.md)
-- [Live Event low latency settings](live-event-latency.md)
-- [Media Services Live Event error codes](live-event-error-codes.md)
+- [Use time-shifting and Live Outputs to create on-demand video playback](live-event-cloud-dvr-time-how-to.md)
+- [Live transcription (preview)](live-event-live-transcription-how-to.md)
+- [Live Event types comparison](live-event-types-comparison-reference.md)
+- [Live event states and billing](live-event-states-billing-concept.md)
+- [Live Event low latency settings](live-event-latency-reference.md)
+- [Media Services Live Event error codes](live-event-error-codes-reference.md)
### Tutorials and quickstarts

- [Tutorial: Stream live with Media Services](stream-live-tutorial-with-api.md)
-- [Create an Azure Media Services live stream with OBS](live-events-obs-quickstart.md)
+- [Create an Azure Media Services live stream with OBS](live-event-obs-quickstart.md)
- [Quickstart: Upload, encode, and stream content with portal](asset-create-asset-upload-portal-quickstart.md)
-- [Quickstart: Create an Azure Media Services live stream with Wirecast](live-events-wirecast-quickstart.md)
+- [Quickstart: Create an Azure Media Services live stream with Wirecast](live-event-wirecast-quickstart.md)
## Samples
media-services Migrate V 2 V 3 Migration Scenario Based Publishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-publishing.md
See publishing concepts, tutorials and how to guides below for specific steps.
- [Manage streaming endpoints with Media Services v3](manage-streaming-endpoints-howto.md)
- [CLI example: Publish an asset](cli-publish-asset.md)
- [Create a streaming locator and build URLs](create-streaming-locator-build-url.md)
-- [Download the results of a job](download-results-howto.md)
+- [Download the results of a job](job-download-results-how-to.md)
- [Signal descriptive audio tracks](signal-descriptive-audio-howto.md)
- [Azure Media Player full setup](../azure-media-player/azure-media-player-full-setup.md)
- [How to use the Video.js player with Azure Media Services](how-to-video-js-player.md)
media-services Media Services Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/media-services-event-schemas.md
The data object has the following properties:
| encoderPort | string | Port of the encoder from where this stream is coming. |
| resultCode | string | The reason the connection was rejected. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../live-event-error-codes.md).
+You can find the error result codes in [live Event error codes](../live-event-error-codes-reference.md).
### LiveEventEncoderConnected
The data object has the following properties:
| encoderPort | string | Port of the encoder from where this stream is coming. |
| resultCode | string | The reason for the encoder disconnecting. It could be a graceful disconnect or from an error. The result codes are listed in the following table. |
-You can find the error result codes in [live Event error codes](../live-event-error-codes.md).
+You can find the error result codes in [live Event error codes](../live-event-error-codes-reference.md).
The graceful disconnect result codes are:
An event has the following top-level data:
- [EventGrid .NET SDK that includes Media Service events](https://www.nuget.org/packages/Microsoft.Azure.EventGrid/) - [Definitions of Media Services events](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/eventgrid/data-plane/Microsoft.Media/stable/2018-01-01/MediaServices.json)-- [Live Event error codes](../live-event-error-codes.md)
+- [Live Event error codes](../live-event-error-codes-reference.md)
media-services Monitor Media Services Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/monitor-media-services-data-reference.md
You can monitor the following account metrics.
|StreamingPolicyQuota|Streaming Policy quota|Streaming Policies quota in your account.|
|StreamingPolicyQuotaUsedPercentage|Streaming Policy quota used percentage|The percentage of the Streaming Policy quota already used.|
-You should also review [account quotas and limits](../limits-quotas-constraints.md).
+You should also review [account quotas and limits](../limits-quotas-constraints-reference.md).
### Streaming Endpoint
media-services Questions Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/questions-collection.md
Often, customers have invested in a license server farm either in their own data
Currently, you can use the [Azure portal](https://portal.azure.com/) to:
-* Manage [Live Events](live-events-outputs-concept.md) in Media Services v3.
+* Manage [Live Events](live-event-outputs-concept.md) in Media Services v3.
* View (not manage) v3 [assets](assets-concept.md).
* [Get info about accessing APIs](./access-api-howto.md).
media-services Recommended On Premises Live Encoders https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/recommended-on-premises-live-encoders.md
In Azure Media Services, a [Live Event](/rest/api/media/liveevents) (channel) re
This article discusses verified on-premises live streaming encoders. The verification is done through vendor self-verification or customer verification. Microsoft Azure Media Services does not do full or rigorous testing of each encoder, and does not continually re-verify on updates. For instructions on how to verify your on-premises live encoder, see [verify your on-premises encoder](encode-on-premises-encoder-partner.md)
-For detailed information about live encoding with Media Services, see [Live streaming with Media Services v3](live-streaming-overview.md).
+For detailed information about live encoding with Media Services, see [Live streaming with Media Services v3](stream-live-streaming-concept.md).
## Encoder requirements
Media Services recommends using one of the following live encoders that have multi-bitrate output:
## Configuring on-premises live encoder settings
-For information about what settings are valid for your live event type, see [Live Event types comparison](live-event-types-comparison.md).
+For information about what settings are valid for your live event type, see [Live Event types comparison](live-event-types-comparison-reference.md).
### Playback requirements
To play back content, both an audio and video stream must be present. Playback o
## See also
-[Live streaming with Media Services v3](live-streaming-overview.md)
+[Live streaming with Media Services v3](stream-live-streaming-concept.md)
## Next steps
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
See the latest samples in the **[media-services-v3-node-tutorials](https://githu
Live Events now support a lower-cost billing mode for "stand-by". This allows customers to pre-allocate Live Events at a lower cost for the creation of "hot pools". Customers can then use the stand-by live events to transition to the Running state faster than starting from cold on creation. This reduces the time to start the channel significantly and allows for fast hot-pool allocation of machines running in a lower price mode. See the latest pricing details [here](https://azure.microsoft.com/pricing/details/media-services).
-For more information on the StandBy state and the other states of Live Events see the article - [Live event states and billing.](./live-event-states-billing.md)
+For more information on the StandBy state and the other states of Live Events see the article - [Live event states and billing.](./live-event-states-billing-concept.md)
## December 2020
For more information about the Basic Audio Analyzer mode, see [Analyzing Video a
Updates to most properties are now allowed when live events are stopped. In addition, users are allowed to specify a prefix for the static hostname for the live event's input and preview URLs. VanityUrl is now called `useStaticHostName` to better reflect the intent of the property.
-Live events now have a StandBy state. See [Live Events and Live Outputs in Media Services](./live-events-outputs-concept.md).
+Live events now have a StandBy state. See [Live Events and Live Outputs in Media Services](./live-event-outputs-concept.md).
A live event supports receiving various input aspect ratios. Stretch mode allows customers to specify the stretching behavior for the output.
To see part of the header exchange in action, you can try the following steps:
Live transcription is now in public preview and available for use in the West US 2 region.
-Live transcription is designed to work in conjunction with live events as an add-on capability. It is supported on both pass-through and Standard or Premium encoding live events. When this feature is enabled, the service uses the [Speech-To-Text](../../cognitive-services/speech-service/speech-to-text.md) feature of Cognitive Services to transcribe the spoken words in the incoming audio into text. This text is then made available for delivery along with video and audio in MPEG-DASH and HLS protocols. Billing is based on a new add-on meter that is additional cost to the live event when it is in the "Running" state. For details on Live transcription and billing, see [Live transcription](live-transcription.md)
+Live transcription is designed to work in conjunction with live events as an add-on capability. It is supported on both pass-through and Standard or Premium encoding live events. When this feature is enabled, the service uses the [Speech-To-Text](../../cognitive-services/speech-service/speech-to-text.md) feature of Cognitive Services to transcribe the spoken words in the incoming audio into text. This text is then made available for delivery along with video and audio in MPEG-DASH and HLS protocols. Billing is based on a new add-on meter that is an additional cost to the live event when it is in the "Running" state. For details on live transcription and billing, see [Live transcription](live-event-live-transcription-how-to.md).
> [!NOTE]
> Currently, live transcription is only available as a preview feature in the West US 2 region. It supports transcription of spoken words in English (en-us) only at this time.
For more information, see [Clouds and regions in which Media Services v3 exists]
Added updates that include Media Services performance improvements.
-* The maximum file size supported for processing was updated. See, [Quotas, and limits](limits-quotas-constraints.md).
+* The maximum file size supported for processing was updated. See, [Quotas, and limits](limits-quotas-constraints-reference.md).
* [Encoding speed improvements](concept-media-reserved-units.md).

## April 2019
media-services Stream Live Streaming Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-streaming-concept.md
+
+ Title: Overview of Live streaming
+description: This article gives an overview of live streaming using Azure Media Services v3.
+ Last updated : 03/25/2021
+# Live streaming with Azure Media Services v3
++
+Azure Media Services enables you to deliver live events to your customers on the Azure cloud. To stream your live events with Media Services, you need the following:
+
+- A camera that is used to capture the live event.<br/>For setup ideas, check out [Simple and portable event video gear setup]( https://link.medium.com/KNTtiN6IeT).
+
+ If you do not have access to a camera, tools such as [Telestream Wirecast](https://www.telestream.net/wirecast/overview.htm) can be used to generate a live feed from a video file.
+- A live video encoder that converts signals from a camera (or another device, like a laptop) into a contribution feed that is sent to Media Services. The contribution feed can include signals related to advertising, such as SCTE-35 markers.<br/>For a list of recommended live streaming encoders, see [live streaming encoders](recommended-on-premises-live-encoders.md). Also, check out this blog: [Live streaming production with OBS](https://link.medium.com/ttuwHpaJeT).
+- Components in Media Services, which enable you to ingest, preview, package, record, encrypt, and broadcast the live event to your customers, or to a CDN for further distribution.
+
+For customers looking to deliver content to large internet audiences, we recommend that you enable CDN on the [streaming endpoint](streaming-endpoint-concept.md).
+
+This article gives an overview of live streaming with Media Services, along with guidance and links to other relevant articles.
+
+> [!NOTE]
+> You can use the [Azure portal](https://portal.azure.com/) to manage v3 [Live Events](live-event-outputs-concept.md), view v3 [assets](assets-concept.md), and get info about accessing APIs. For all other management tasks (for example, Transforms and Jobs), use the [REST API](/rest/api/medi#sdks).
+
+## Dynamic packaging and delivery
+
+With Media Services, you can take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md), which allows you to preview and broadcast your live streams in [MPEG DASH, HLS, and Smooth Streaming formats](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming) from the contribution feed that is being sent to the service. Your viewers can play back the live stream with any HLS, DASH, or Smooth Streaming compatible players. You can use [Azure Media Player](https://amp.azure.net/libs/amp/latest/docs/https://docsupdatetracker.net/index.html) in your web or mobile applications to deliver your stream in any of these protocols.
+
+## Dynamic encryption
+
+Dynamic encryption enables you to dynamically encrypt your live or on-demand content with AES-128 or any of the three major digital rights management (DRM) systems: Microsoft PlayReady, Google Widevine, and Apple FairPlay. Media Services also provides a service for delivering AES keys and DRM (PlayReady, Widevine, and FairPlay) licenses to authorized clients. For more information, see [dynamic encryption](drm-content-protection-concept.md).
+
+> [!NOTE]
+> Widevine is a service provided by Google Inc. and subject to the terms of service and Privacy Policy of Google, Inc.
+
+## Dynamic filtering
+
+Dynamic filtering is used to control the number of tracks, formats, bitrates, and presentation time windows that are sent out to the players. For more information, see [filters and dynamic manifests](filters-dynamic-manifest-concept.md).
+
+## Live event types
+
+[Live events](/rest/api/medi).
+
+### Pass-through
+
+![Diagram showing how the video and audio feeds from a pass-through Live Event are ingested and processed.](./media/live-streaming/pass-through.svg)
+
+When using the pass-through **Live Event**, you rely on your on-premises live encoder to generate a multiple bitrate video stream and send that as the contribution feed to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event then carries through the incoming video streams to the dynamic packager (Streaming Endpoint) without any further transcoding. Such a pass-through Live Event is optimized for long-running live events or 24x365 linear live streaming.
+
+### Live encoding
+
+![live encoding](./media/live-streaming/live-encoding.svg)
+
+When using cloud encoding with Media Services, you would configure your on-premises live encoder to send a single bitrate video as the contribution feed (up to 32Mbps aggregate) to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event transcodes the incoming single bitrate stream into [multiple bitrate video streams](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming) at varying resolutions to improve delivery and makes it available for delivery to playback devices via industry standard protocols like MPEG-DASH, Apple HTTP Live Streaming (HLS), and Microsoft Smooth Streaming.
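What "multiple bitrate video streams at varying resolutions" means in practice can be sketched as a simple bitrate ladder. The resolutions and bitrates below are illustrative values only, not the service's actual encoding presets:

```python
# Illustrative adaptive-bitrate ladder: the on-premises encoder sends one
# contribution bitrate; the cloud encoder produces several renditions.
# These name/resolution/bitrate values are examples, not Media Services presets.
LADDER = [
    ("1080p", 1920, 1080, 4500),  # bitrate in kbps
    ("720p",  1280,  720, 2500),
    ("480p",   854,  480, 1200),
    ("360p",   640,  360,  700),
]

def renditions_within(source_height: int, ladder=LADDER):
    """Keep only renditions at or below the contribution feed's resolution."""
    return [r for r in ladder if r[2] <= source_height]

# A 720p contribution feed would yield the 720p, 480p, and 360p renditions.
for name, width, height, kbps in renditions_within(720):
    print(f"{name}: {width}x{height} @ {kbps} kbps")
```

Players then switch between renditions based on available bandwidth, which is what the MPEG-DASH, HLS, and Smooth Streaming protocols deliver.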
+
+### Live transcription (preview)
+
+Live transcription is a feature you can use with live events that are either pass-through or live encoding. For more information, see [live transcription](live-event-live-transcription-how-to.md). When this feature is enabled, the service uses the [Speech-To-Text](../../cognitive-services/speech-service/speech-to-text.md) feature of Cognitive Services to transcribe the spoken words in the incoming audio into text. This text is then made available for delivery along with video and audio in MPEG-DASH and HLS protocols.
+
+> [!NOTE]
+> Currently, live transcription is available as a preview feature in West US 2.
+
+## Live streaming workflow
+
+To understand the live streaming workflow in Media Services v3, you have to first review and understand the following concepts:
+
+- [Streaming endpoints](streaming-endpoint-concept.md)
+- [Live events and live outputs](live-event-outputs-concept.md)
+- [Streaming locators](streaming-locators-concept.md)
+
+### General steps
+
+1. In your Media Services account, make sure the **streaming endpoint** (origin) is running.
+2. Create a [live event](live-event-outputs-concept.md). <br/>When creating the event, you can specify to autostart it. Alternatively, you can start the event when you are ready to start streaming.<br/> When autostart is set to true, the Live Event will be started right after creation. The billing starts as soon as the Live Event starts running. You must explicitly call Stop on the live event resource to halt further billing. For more information, see [live event states and billing](live-event-states-billing-concept.md).
+3. Get the ingest URL(s) and configure your on-premises encoder to use the URL to send the contribution feed.<br/>See [recommended live encoders](recommended-on-premises-live-encoders.md).
+4. Get the preview URL and use it to verify that the input from the encoder is actually being received.
+5. Create a new **asset** object.
+
+ Each live output is associated with an asset, which it uses to record the video into the associated Azure blob storage container.
+6. Create a **live output** and use the asset name that you created so that the stream can be archived into the asset.
+
+ Live Outputs start on creation and stop when deleted. When you delete the Live Output, you are not deleting the underlying asset and content in the asset.
+7. Create a **streaming locator** with the [built-in streaming policy types](streaming-policy-concept.md).
+
+ To publish the live output, you must create a streaming locator for the associated asset.
+8. List the paths on the **streaming locator** to get back the URLs to use (these are deterministic).
+9. Get the hostname for the **streaming endpoint** (Origin) you wish to stream from.
+10. Combine the URL from step 8 with the hostname in step 9 to get the full URL.
+11. If you wish to stop making your **live event** viewable, you need to stop streaming the event and delete the **streaming locator**.
+12. If you are done streaming events and want to clean up the resources provisioned earlier, use the following procedure.
+
+ * Stop pushing the stream from the encoder.
+ * Stop the live event. Once the live event is stopped, it will not incur any charges. When you need to start it again, it will have the same ingest URL so you won't need to reconfigure your encoder.
+ * You can stop your streaming endpoint, unless you want to continue to provide the archive of your live event as an on-demand stream. If the live event is in stopped state, it will not incur any charges.
+
+The asset that the live output is archiving to automatically becomes an on-demand asset when the live output is deleted. You must delete all live outputs before a live event can be stopped. You can use the optional flag [removeOutputsOnStop](/rest/api/media/liveevents/stop#request-body) to automatically remove live outputs on stop.
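Steps 8 through 10 of the workflow reduce to simple string concatenation. A minimal sketch, assuming the locator paths come back as relative URLs; the hostname and path values here are hypothetical examples, with `(format=...)` suffixes selecting the DASH, HLS, or Smooth manifest:

```python
# Combine the streaming endpoint hostname (step 9) with the relative paths
# listed on the streaming locator (step 8) to get full playback URLs (step 10).
# Hostname and paths are hypothetical examples, not values from a real account.

def build_playback_urls(hostname: str, paths: list) -> list:
    """Prefix each locator path with the streaming endpoint origin."""
    return [f"https://{hostname}{path}" for path in paths]

paths = [
    "/abc123/myLiveOutput.ism/manifest(format=mpd-time-csf)",  # MPEG-DASH
    "/abc123/myLiveOutput.ism/manifest(format=m3u8-aapl)",     # HLS
    "/abc123/myLiveOutput.ism/manifest",                       # Smooth Streaming
]
urls = build_playback_urls("myaccount-usw22.streaming.media.azure.net", paths)
for url in urls:
    print(url)
```

Because the paths are deterministic, the same locator always yields the same URLs for a given streaming endpoint.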
+
+> [!TIP]
+> See the [Live streaming tutorial](stream-live-tutorial-with-api.md), which examines the code that implements the steps described above.
+
+## Other important articles
+
+- [Recommended live encoders](recommended-on-premises-live-encoders.md)
+- [Using a cloud DVR](live-event-cloud-dvr-time-how-to.md)
+- [Live event types feature comparison](live-event-types-comparison-reference.md)
+- [States and billing](live-event-states-billing-concept.md)
+- [Latency](live-event-latency-reference.md)
+
+## Live streaming questions
+
+See the [live streaming questions](questions-collection.md#live-streaming) article.
media-services Stream Live Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-tutorial-with-api.md
The following items are required to complete the tutorial:
- For this sample, it is recommended to start with a software encoder like the free [Open Broadcast Software OBS Studio](https://obsproject.com/download) to make it simple to get started.

> [!TIP]
-> Make sure to review [Live streaming with Media Services v3](live-streaming-overview.md) before proceeding.
+> Make sure to review [Live streaming with Media Services v3](stream-live-streaming-concept.md) before proceeding.
## Download and configure the sample
To start using Media Services APIs with .NET, you need to create an **AzureMedia
### Create a live event
-This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the other available types of Live Events, see [Live Event types](live-events-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding Live Event for 720P or 1080P adaptive bitrate cloud encoding.
+This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the other available types of Live Events, see [Live Event types](live-event-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding Live Event for 720P or 1080P adaptive bitrate cloud encoding.
Some things that you might want to specify when creating the live event are:

* The ingest protocol for the Live Event (currently, the RTMP(S) and Smooth Streaming protocols are supported).<br/>You can't change the protocol option while the Live Event or its associated Live Outputs are running. If you require different protocols, create a separate Live Event for each streaming protocol.
* IP restrictions on the ingest and preview. You can define the IP addresses that are allowed to ingest a video to this Live Event. Allowed IP addresses can be specified as either a single IP address (for example, '10.0.0.1'), an IP range using an IP address and a CIDR subnet mask (for example, '10.0.0.1/22'), or an IP range using an IP address and a dotted decimal subnet mask (for example, '10.0.0.1(255.255.252.0)').<br/>If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set 0.0.0.0/0.<br/>The IP addresses have to be in one of the following formats: IPv4 address with four numbers or CIDR address range.
-* When creating the event, you can specify to autostart it. <br/>When autostart is set to true, the Live Event will be started after creation. That means the billing starts as soon as the Live Event starts running. You must explicitly call Stop on the Live Event resource to halt further billing. For more information, see [Live Event states and billing](live-event-states-billing.md).
+* When creating the event, you can specify to autostart it. <br/>When autostart is set to true, the Live Event will be started after creation. That means the billing starts as soon as the Live Event starts running. You must explicitly call Stop on the Live Event resource to halt further billing. For more information, see [Live Event states and billing](live-event-states-billing-concept.md).
There are also standby modes available to start the Live Event in a lower-cost 'allocated' state that makes it faster to move to a 'Running' state. This is useful for situations like hot pools that need to hand out channels quickly to streamers.
-* For an ingest URL to be predictive and easier to maintain in a hardware based live encoder, set the "useStaticHostname" property to true. For detailed information, see [Live Event ingest URLs](live-events-outputs-concept.md#live-event-ingest-urls).
+* For an ingest URL to be predictive and easier to maintain in a hardware based live encoder, set the "useStaticHostname" property to true. For detailed information, see [Live Event ingest URLs](live-event-outputs-concept.md#live-event-ingest-urls).
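The three allowed-IP formats described above (single address, CIDR range, and dotted-decimal mask) can all be normalized with Python's standard `ipaddress` module. A quick sketch for validating entries before sending them to the API; the helper name is ours, not part of any Media Services SDK:

```python
import ipaddress

def normalize_allowed_ip(entry: str) -> str:
    """Normalize a single IP, CIDR range, or IP(dotted-mask) entry to CIDR."""
    # Convert the '10.0.0.1(255.255.252.0)' form to '10.0.0.1/255.255.252.0',
    # which ipaddress understands natively.
    if "(" in entry:
        addr, mask = entry.rstrip(")").split("(")
        entry = f"{addr}/{mask}"
    # strict=False permits host bits to be set, as in '10.0.0.1/22'.
    return str(ipaddress.ip_network(entry, strict=False))

print(normalize_allowed_ip("10.0.0.1"))                 # 10.0.0.1/32
print(normalize_allowed_ip("10.0.0.1/22"))              # 10.0.0.0/22
print(normalize_allowed_ip("10.0.0.1(255.255.252.0)"))  # 10.0.0.0/22
print(normalize_allowed_ip("0.0.0.0/0"))                # allow any IP address
```

A `ValueError` from `ip_network` flags a malformed entry before it ever reaches the Live Event configuration.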
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateLiveEvent)]
media-services Streaming Endpoint Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/streaming-endpoint-concept.md
In Microsoft Azure Media Services, a [Streaming Endpoint](/rest/api/media/streamingendpoints) represents a dynamic (just-in-time) packaging and origin service that can deliver your live and on-demand content directly to a client player app using one of the common streaming media protocols (HLS or DASH). In addition, the **Streaming Endpoint** provides dynamic (just-in-time) encryption to industry-leading DRMs.
-When you create a Media Services account, a **default** Streaming Endpoint is created for you in a stopped state. More Streaming Endpoints can be created under the account (see [Quotas and limits](limits-quotas-constraints.md)).
+When you create a Media Services account, a **default** Streaming Endpoint is created for you in a stopped state. More Streaming Endpoints can be created under the account (see [Quotas and limits](limits-quotas-constraints-reference.md)).
> [!NOTE]
> To start streaming videos, you need to start the **Streaming Endpoint** from which you want to stream the video.
media-services Streaming Endpoint Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/streaming-endpoint-error-codes.md
For filter guidance, see:
For live articles and samples, see:
-- [Concept: live streaming overview](live-streaming-overview.md)
-- [Concept: Live Events and Live Outputs](live-events-outputs-concept.md)
+- [Concept: live streaming overview](stream-live-streaming-concept.md)
+- [Concept: Live Events and Live Outputs](live-event-outputs-concept.md)
- [Sample: live streaming tutorial](stream-live-tutorial-with-api.md)

## 416 Range Not Satisfiable
Check out the [Azure Media Services community](media-services-community.md) arti
- [Encoding error codes](/rest/api/media/jobs/get#joberrorcode)
- [Azure Media Services concepts](concepts-overview.md)
-- [Quotas and limits](limits-quotas-constraints.md)
+- [Quotas and limits](limits-quotas-constraints-reference.md)
## Next steps
media-services Streaming Locators Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/streaming-locators-concept.md
You can also specify the start and end time on your Streaming Locator, which wil
* **Streaming Locators** are not updatable.
* Properties of **Streaming Locators** that are of the Datetime type are always in UTC format.
-* You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints.md).
+* You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints-reference.md).
## Create Streaming Locators
media-services Streaming Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/streaming-policy-concept.md
The following "Decision tree" helps you choose a predefined Streaming Policy for
> [!IMPORTANT]
> * Properties of **Streaming Policies** that are of the Datetime type are always in UTC format.
-> * You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints.md).
+> * You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limits](limits-quotas-constraints-reference.md).
## Decision tree
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-supported-versions.md
In Azure Database for MySQL service, gateway nodes listen on port 3308 for v5.7
## Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server (Preview)](/flexible-server/overview.md) <br/> Current minor version |
+| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server (Preview)](/../flexible-server/overview.md) <br/> Current minor version |
|:-|:-|:-|
|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html)|
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-version-policy.md
Azure Database for MySQL has been developed from [MySQL Community Edition](https
Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server (Preview)](/flexible-server/overview.md) <br/> Current minor version |
+| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server (Preview)](/../flexible-server/overview.md) <br/> Current minor version |
|:-|:-|:-|
|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html)|
Azure Database for MySQL currently supports the following major and minor versio
> In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string as explained in our documentation [here](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version).

> [!IMPORTANT]
-> MySQL v5.6 is retired on single server as of Febuary 2021. Starting from September 1st 2021, you will not be able to create new v5.6 servers on Azure Database for MySQL - Single server deployment option. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+> MySQL v5.6 is retired on Single Server as of February 2021. Starting September 1, 2021, you will not be able to create new v5.6 servers on the Azure Database for MySQL - Single Server deployment option. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/fundamentals/networking-overview.md
+
+ Title: Azure networking services overview
+description: Learn about networking services in Azure, including connectivity, application protection, application delivery, and network monitoring services.
+
+documentationcenter: na
++
+ms.devlang: na
+ Last updated : 10/28/2020
+# Azure networking services overview
+
+The networking services in Azure provide a variety of networking capabilities that can be used together or separately. Click any of the following key capabilities to learn more about them:
+- [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, Virtual network NAT Gateway, Azure DNS, Peering service, and Azure Bastion.
+- [**Application protection services**](#protect): Protect your applications using any or a combination of these networking services in Azure - Private Link, DDoS protection, Firewall, Network Security Groups, Web Application Firewall, and Virtual Network Endpoints.
+- [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Content Delivery Network (CDN), Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer.
+- [**Network monitoring**](#monitor): Monitor your network resources using any or a combination of these networking services in Azure - Network Watcher, ExpressRoute Monitor, Azure Monitor, or VNet Terminal Access Point (TAP).
+
+## <a name="connect"></a>Connectivity services
+
+This section describes services that provide connectivity between Azure resources, connectivity from an on-premises network to Azure resources, and branch to branch connectivity in Azure - Virtual Network (VNet), ExpressRoute, VPN Gateway, Virtual WAN, Virtual network NAT Gateway, Azure DNS, Azure Peering service, and Azure Bastion.
++
+### <a name="vnet"></a>Virtual network
+
+Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. You can use VNets to:
+- **Communicate between Azure resources**: You can deploy VMs and several other types of Azure resources to a virtual network, such as Azure App Service Environments, the Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy into a virtual network, see [Virtual network service integration](../../virtual-network/virtual-network-for-azure-services.md).
+- **Communicate between each other**: You can connect virtual networks to each other, enabling resources in either virtual network to communicate with each other, using virtual network peering. The virtual networks you connect can be in the same, or different, Azure regions. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md).
+- **Communicate to the internet**: All resources in a VNet can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use [Public IP addresses](../../virtual-network/virtual-network-public-ip-address.md) or public [Load Balancer](../../load-balancer/load-balancer-overview.md) to manage your outbound connections.
+- **Communicate with on-premises networks**: You can connect your on-premises computers and networks to a virtual network using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../../expressroute/expressroute-introduction.md).
+
+For more information, see [What is Azure Virtual Network?](../../virtual-network/virtual-networks-overview.md).
+
+### <a name="expressroute"></a>ExpressRoute
+ExpressRoute enables you to extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. This connection is private. Traffic does not go over the internet. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and Dynamics 365. For more information, see [What is ExpressRoute?](../../expressroute/expressroute-introduction.md).
++
+### <a name="vpngateway"></a>VPN Gateway
+VPN Gateway helps you create encrypted cross-premises connections to your virtual network from on-premises locations, or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections, such as, site-to-site, point-to-site, or VNet-to-VNet.
+The following diagram illustrates multiple site-to-site VPN connections to the same virtual network.
++
+For more information about different types of VPN connections, see [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+
+### <a name="virtualwan"></a>Virtual WAN
+Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to connect your branches to. You can leverage the Azure backbone to also connect branches and enjoy branch-to-VNet connectivity.
+Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, point-to-site user VPN into a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. For more information, see [What is Azure virtual WAN?](../../virtual-wan/virtual-wan-about.md).
++
+### <a name="dns"></a>Azure DNS
+Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services. For more information, see [What is Azure DNS?](../../dns/dns-overview.md).
+
+### <a name="bastion"></a>Azure Bastion
+The Azure Bastion service is a new fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md).
++
+### <a name="nat"></a>Virtual network NAT Gateway
+Virtual Network NAT (network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
+For more information, see [What is virtual network NAT gateway?](../../virtual-network/nat-overview.md).
++
+### <a name="azurepeeringservice"></a> Azure Peering Service
+Azure Peering service enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. For more information, see [What is Azure Peering Service?](../../peering-service/about.md).
+
+### <a name="edge-zones"></a>Azure Edge Zones
+
+Azure Edge Zone is a family of offerings from Microsoft Azure that enables data processing close to the user. You can deploy VMs, containers, and other selected Azure services into Edge Zones to address the low latency and high throughput requirements of applications.
+
+### <a name="orbital"></a>Azure Orbital
+
+Azure Orbital is a fully managed cloud-based ground station as a service that lets you communicate with your spacecraft or satellite constellations, downlink and uplink data, process your data in the cloud, chain services with Azure services in unique scenarios, and generate products for your customers. This system is built on top of the Azure global infrastructure and low-latency global fiber network.
+
+## <a name="protect"></a>Application protection services
+
+This section describes networking services in Azure that help protect your network resources. Protect your applications using any one or a combination of these networking services: DDoS Protection, Private Link, Firewall, Web Application Firewall, network security groups, and virtual network service endpoints.
+
+### <a name="ddosprotection"></a>DDoS Protection
+[Azure DDoS Protection](../../ddos-protection/manage-ddos-protection.md) provides countermeasures against the most sophisticated DDoS threats. The service provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Additionally, customers using Azure DDoS Protection have access to DDoS Rapid Response support to engage DDoS experts during an active attack.
++
+### <a name="privatelink"></a>Azure Private Link
+[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
+Traffic between your virtual network and the service travels the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers.
++
+### <a name="firewall"></a>Azure Firewall
+Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. Using Azure Firewall, you can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual network resources, allowing outside firewalls to identify traffic originating from your virtual network.
+
+For more information about Azure Firewall, see the [Azure Firewall documentation](../../firewall/overview.md).
++
+### <a name="waf"></a>Web Application Firewall
+[Azure Web Application Firewall](../../web-application-firewall/overview.md) (WAF) protects your web applications from common web exploits and vulnerabilities such as SQL injection and cross-site scripting. Azure WAF provides out-of-the-box protection from the OWASP top 10 vulnerabilities via managed rules. Customers can also configure custom rules, which are customer-managed rules that provide additional protection based on source IP range and request attributes such as headers, cookies, form data fields, or query string parameters.
+
+Customers can choose to deploy [Azure WAF with Application Gateway](../../web-application-firewall/ag/ag-overview.md), which provides regional protection to entities in public and private address space. Customers can also choose to deploy [Azure WAF with Front Door](../../web-application-firewall/afds/afds-overview.md), which provides protection at the network edge to public endpoints.
++
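A custom rule can be pictured as a set of match conditions combined with an action. The sketch below is an assumption-laden illustration (it is not the Azure WAF engine or its API): a hypothetical rule matches on a source IP range plus a request header, then applies its action.

```python
# Illustrative custom-rule matcher (hypothetical, not the Azure WAF engine):
# a rule combines match conditions over source IP ranges and request
# attributes (here, headers), and fires its action when all conditions hold.
import ipaddress

def matches(rule, request):
    ip_ok = any(
        ipaddress.ip_address(request["src_ip"]) in ipaddress.ip_network(cidr)
        for cidr in rule["ip_ranges"]
    )
    headers_ok = all(
        request["headers"].get(name) == value
        for name, value in rule["headers"].items()
    )
    return ip_ok and headers_ok

rule = {"ip_ranges": ["203.0.113.0/24"],      # hypothetical bad subnet
        "headers": {"User-Agent": "badbot"},  # hypothetical bad client
        "action": "Block"}

request = {"src_ip": "203.0.113.9", "headers": {"User-Agent": "badbot"}}
print(rule["action"] if matches(rule, request) else "Allow")  # Block
```

Real WAF custom rules support more operators (contains, regex, geo-match) and more request attributes; the point here is only the condition-plus-action shape.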
+### <a name="nsg"></a>Network security groups
+You can filter network traffic to and from Azure resources in an Azure virtual network with a network security group. For more information, see [Network security groups](../../virtual-network/network-security-groups-overview.md).
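NSG rules are evaluated in ascending priority order (a lower number means higher precedence), and the first matching rule decides the outcome; unmatched inbound traffic ultimately hits a default deny rule. A minimal sketch of that evaluation order, with simplified rules that match on destination port only:

```python
# Sketch of NSG rule evaluation (simplified: real rules also match on
# source/destination address prefixes, protocol, and direction).
from dataclasses import dataclass

@dataclass
class SecurityRule:
    priority: int   # 100-4096 for user rules; lower number wins
    port: int       # destination port this rule matches
    access: str     # "Allow" or "Deny"

def evaluate(rules, dest_port):
    """First matching rule in ascending priority order decides."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.port == dest_port:
            return rule.access
    return "Deny"   # mirrors the built-in low-priority DenyAllInbound rule

rules = [
    SecurityRule(priority=300, port=22, access="Deny"),
    SecurityRule(priority=100, port=22, access="Allow"),  # wins: priority 100 < 300
    SecurityRule(priority=200, port=443, access="Allow"),
]
print(evaluate(rules, 22))    # Allow
print(evaluate(rules, 3389))  # Deny (no match, default deny)
```

Once a rule matches, processing stops, which is why placing a broad Deny at a low priority number can shadow more specific Allow rules below it.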
+
+### <a name="serviceendpoints"></a>Service endpoints
+Virtual Network (VNet) service endpoints extend your virtual network's private address space and the identity of your VNet to Azure services over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network. For more information, see [Virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
++
+## <a name="deliver"></a>Application delivery services
+
+This section describes networking services in Azure that help deliver applications - Content Delivery Network, Azure Front Door Service, Traffic Manager, Load Balancer, and Application Gateway.
+
+### <a name="cdn"></a>Content Delivery Network
+Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. For more information about Azure CDN, see [Azure Content Delivery Network](../../cdn/cdn-overview.md).
++
+### <a name="frontdoor"></a>Azure Front Door Service
+Azure Front Door Service enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reach a global audience with Azure. For more information, see [Azure Front Door](../../frontdoor/front-door-overview.md).
++
+### <a name="trafficmanager"></a>Traffic Manager
+
+Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness. Traffic Manager provides a range of traffic-routing methods to distribute traffic such as priority, weighted, performance, geographic, multi-value, or subnet. For more information about traffic routing methods, see [Traffic Manager routing methods](../../traffic-manager/traffic-manager-routing-methods.md).
+
+The following diagram shows endpoint priority-based routing with Traffic Manager:
++
+For more information about Traffic Manager, see [What is Azure Traffic Manager?](../../traffic-manager/traffic-manager-overview.md)
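The priority method in particular can be sketched in a few lines: DNS answers point at the lowest-priority-number endpoint that is currently healthy, so lower-ranked endpoints act as failover targets. This is an illustrative model only, with made-up endpoint names, not Traffic Manager's implementation.

```python
# Conceptual priority routing (illustrative): return the healthy endpoint
# with the lowest priority number; unhealthy endpoints are skipped over.
def resolve_priority(endpoints):
    """endpoints: list of (name, priority, healthy) tuples."""
    healthy = [e for e in endpoints if e[2]]
    if not healthy:
        return None  # nothing healthy to answer with
    return min(healthy, key=lambda e: e[1])[0]

endpoints = [
    ("primary-eastus", 1, False),   # probe failed -> skipped
    ("secondary-westus", 2, True),  # next in line -> chosen
    ("tertiary-europe", 3, True),
]
print(resolve_priority(endpoints))  # secondary-westus
```

Because the decision is made at DNS resolution time, failover happens for new lookups without any change to the endpoints themselves.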
+
+### <a name="loadbalancer"></a>Load Balancer
+The Azure Load Balancer provides high-performance, low-latency Layer 4 load-balancing for all UDP and TCP protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP health-probing options to manage service availability. To learn more about Load Balancer, read the [Load Balancer overview](../../load-balancer/load-balancer-overview.md) article.
+
+The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load balancers:
++
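The combination of flow hashing and health probing can be sketched as follows. This is a conceptual model under stated assumptions (Azure's actual hashing is internal to the platform): a flow's 5-tuple hashes deterministically onto one backend, and probes gate which backends are in the pool.

```python
# Conceptual 5-tuple-hash distribution (illustrative, not Azure's algorithm):
# the same flow always hashes to the same healthy backend VM.
import hashlib

def pick_backend(five_tuple, backends, healthy):
    """five_tuple: (src_ip, src_port, dst_ip, dst_port, protocol)."""
    pool = [b for b in backends if healthy[b]]  # health probes gate membership
    key = hashlib.sha256("|".join(map(str, five_tuple)).encode()).hexdigest()
    return pool[int(key, 16) % len(pool)]

backends = ["vm-0", "vm-1", "vm-2"]
healthy = {"vm-0": True, "vm-1": False, "vm-2": True}  # vm-1 failed its probe
flow = ("203.0.113.7", 50123, "198.51.100.10", 80, "TCP")

chosen = pick_backend(flow, backends, healthy)
assert chosen == pick_backend(flow, backends, healthy)  # same flow -> same VM
assert chosen != "vm-1"                                 # never an unhealthy VM
```

Deterministic hashing is what keeps all packets of one TCP connection on a single backend without the load balancer storing per-connection state.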
+### <a name="applicationgateway"></a>Application Gateway
+Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. It is an Application Delivery Controller (ADC) as a service, offering various layer 7 load-balancing capabilities for your applications. For more information, see [What is Azure Application Gateway?](../../application-gateway/overview.md).
+
+The following diagram shows URL path-based routing with Application Gateway.
++
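URL path-based routing of the kind shown above can be sketched as a prefix match from path patterns to backend pools, with a default pool for everything else. The path patterns and pool names below are hypothetical, and the most-specific-match tiebreak is an assumption of this sketch rather than a statement about Application Gateway internals.

```python
# Illustrative URL path map (hypothetical pools): each pattern routes to a
# backend pool; unmatched requests fall through to the default pool.
def route(path_map, default_pool, request_path):
    matches = [p for p in path_map if request_path.startswith(p.rstrip("*"))]
    if not matches:
        return default_pool
    return path_map[max(matches, key=len)]  # most specific pattern wins

path_map = {"/images/*": "image-pool", "/video/*": "video-pool"}
print(route(path_map, "web-pool", "/images/cat.png"))  # image-pool
print(route(path_map, "web-pool", "/video/clip.mp4"))  # video-pool
print(route(path_map, "web-pool", "/index.html"))      # web-pool
```

Splitting traffic this way lets each class of content (static images, video, dynamic pages) be served by a pool sized and tuned for it.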
+## <a name="monitor"></a>Network monitoring services
+This section describes networking services in Azure that help monitor your network resources - Network Watcher, Azure Monitor for Networks, ExpressRoute Monitor, Azure Monitor, and Virtual Network TAP.
+
+### <a name="networkwatcher"></a>Network Watcher
+Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. For more information, see [What is Network Watcher?](../../network-watcher/network-watcher-monitoring-overview.md?toc=%2fazure%2fnetworking%2ftoc.json).
+
+### Azure Monitor for Networks Preview
+Azure Monitor for Networks provides a comprehensive view of health and metrics for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like [Connection Monitor](../../network-watcher/connection-monitor-overview.md), [flow logging for network security groups](../../network-watcher/network-watcher-nsg-flow-logging-overview.md), and [Traffic Analytics](../../network-watcher/traffic-analytics.md). For more information, see [Azure Monitor for Networks Preview](../../azure-monitor/insights/network-insights-overview.md?toc=%2fazure%2fnetworking%2ftoc.json).
+
+### <a name="expressroutemonitor"></a>ExpressRoute Monitor
+To learn how to view ExpressRoute circuit metrics, resource logs, and alerts, see [ExpressRoute monitoring, metrics, and alerts](../../expressroute/expressroute-monitoring-metrics-alerts.md?toc=%2fazure%2fnetworking%2ftoc.json).
+### <a name="azuremonitor"></a>Azure Monitor
+Azure Monitor maximizes the availability and performance of your applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. For more information, see [Azure Monitor Overview](../../azure-monitor/overview.md?toc=%2fazure%2fnetworking%2ftoc.json).
+### <a name="vnettap"></a>Virtual Network TAP
+Azure virtual network TAP (Terminal Access Point) allows you to continuously stream your virtual machine network traffic to a network packet collector or analytics tool. The collector or analytics tool is provided by a [network virtual appliance](https://azure.microsoft.com/solutions/network-appliances/) partner.
+
+The following image shows how virtual network TAP works:
++
+For more information, see [What is Virtual Network TAP](../../virtual-network/virtual-network-tap-overview.md).
+
+## Next steps
+
+- Create your first virtual network, and connect a few VMs to it, by completing the steps in the [Create your first virtual network](../../virtual-network/quick-create-portal.md?toc=%2fazure%2fnetworking%2ftoc.json) article.
+- Connect your computer to a virtual network by completing the steps in the [Configure a point-to-site connection article](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md?toc=%2fazure%2fnetworking%2ftoc.json).
+- Load balance Internet traffic to public servers by completing the steps in the [Create an Internet-facing load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md?toc=%2fazure%2fnetworking%2ftoc.json) article.
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-sensitivity-label.md
To apply MIP sensitivity labels to Azure assets in Azure Purview, you must expli
By extending MIP’s sensitivity labels with Azure Purview, organizations can now discover, classify, and get insight into sensitivity across a broader range of data sources, minimizing compliance risk. > [!NOTE]
-> Since Microsoft 365 and Azure Purview are separate services, there is a possibility that they will be deployed in different regions. Label names and custom sensitive information type names are considered to be customer data, and are kept within the same GEO location by default to protect the sensitivity of your data and to avoid GDPR laws.
+> Since Microsoft 365 and Azure Purview are separate services, there is a possibility that they will be deployed in different regions. Label names and custom sensitive information type names are considered to be customer data, and are kept within the same GEO location by default to protect the sensitivity of your data and to comply with privacy regulations.
> > For this reason, labels and custom sensitive information types are not shared to Azure Purview by default, and require your consent to use them in Azure Purview.
security-center Security Center Wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security
| Release state: | Generally available (GA) | | Pricing: | Requires [Azure Defender for servers](defender-for-servers-introduction.md) | | Supported platforms: | • Azure machines running Windows<br> • Azure Arc machines running Windows|
-| Supported versions of Windows: | • **General Availability (GA) -** Detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1<br> • **Preview -** Detection on Windows Server 2019, [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md), and [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.md) (formerly Enterprise for Virtual Desktops (EVD)|
+| Supported versions of Windows: | • **General Availability (GA) -** Detection on Windows Server 2016, 2012 R2, and 2008 R2 SP1<br> • **Preview -** Detection on Windows Server 2019, [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md), and [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops (EVD))|
| Unsupported operating systems: | • Windows 10 (other than EVD or WVD)<br> • Linux| | Required roles and permissions: | To enable/disable the integration: **Security admin** or **Owner**<br>To view MDATP alerts in Security Center: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor**| | Clouds: | ![Yes](./media/icons/yes-icon.png) Commercial clouds<br>![Yes](./media/icons/yes-icon.png) US Gov<br>![No](./media/icons/no-icon.png) China Gov, Other Gov |
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 03/11/2021 Last updated : 03/31/2021 # What's new in Azure Sentinel
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
## March 2021
+- [New detections for Azure Firewall](#new-detections-for-azure-firewall)
- [Automation rules and incident-triggered playbooks](#automation-rules-and-incident-triggered-playbooks) (including all-new playbook documentation) - [New alert enrichments: enhanced entity mapping and custom details](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details) - [Print your Azure Sentinel workbooks or save as PDF](#print-your-azure-sentinel-workbooks-or-save-as-pdf) - [Incident filters and sort preferences now saved in your session (Public preview)](#incident-filters-and-sort-preferences-now-saved-in-your-session-public-preview) - [Microsoft 365 Defender incident integration (Public preview)](#microsoft-365-defender-incident-integration-public-preview) - [New Microsoft service connectors using Azure Policy](#new-microsoft-service-connectors-using-azure-policy)
-
+
+### New detections for Azure Firewall
+
+Several out-of-the-box detections for Azure Firewall have been added to the [Analytics](import-threat-intelligence.md#analytics-puts-your-threat-indicators-to-work-detecting-potential-threats) area in Azure Sentinel. These new detections allow security teams to get alerts if machines on the internal network attempt to query or connect to internet domain names or IP addresses that are associated with known IOCs, as defined in the detection rule query.
+
+The new detections include:
+
+- [Solorigate Network Beacon](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Solorigate-Network-Beacon.yaml)
+- [Known GALLIUM domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/GalliumIOCs.yaml)
+- [Known IRIDIUM IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/IridiumIOCs.yaml)
+- [Known Phosphorus group domains/IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PHOSPHORUSMarch2019IOCs.yaml)
+- [THALLIUM domains included in DCU takedown](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ThalliumIOCs.yaml)
+- [Known ZINC related maldoc hash](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ZincJan272021IOCs.yaml)
+- [Known STRONTIUM group domains](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/STRONTIUMJuly2019IOCs.yaml)
+- [NOBELIUM - Domain and IP IOCs - March 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_DomainIOCsMarch2021.yaml)
++
+Detections for Azure Firewalls are continuously added to the built-in template gallery. To get the most recent detections for Azure Firewall, under **Rule Templates**, filter the **Data Sources** by **Azure Firewall**:
++
+For more information, see [New detections for Azure Firewall in Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-network-security/new-detections-for-azure-firewall-in-azure-sentinel/ba-p/2244958).
+ ### Automation rules and incident-triggered playbooks Automation rules are a new concept in Azure Sentinel, allowing you to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Azure Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
Learn more with this [complete explanation of automation rules](automate-inciden
As mentioned above, playbooks can now be activated with the incident trigger in addition to the alert trigger. The incident trigger provides your playbooks a bigger set of inputs to work with (since the incident includes all the alert and entity data as well), giving you even more power and flexibility in your response workflows. Incident-triggered playbooks are activated by being called from automation rules.
-Learn more about [playbooks' enhanced capabilites](automate-responses-with-playbooks.md), and how to [craft a response workflow](tutorial-respond-threats-playbook.md) using playbooks together with automation rules.
+Learn more about [playbooks' enhanced capabilities](automate-responses-with-playbooks.md), and how to [craft a response workflow](tutorial-respond-threats-playbook.md) using playbooks together with automation rules.
### New alert enrichments: enhanced entity mapping and custom details
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-best-practices-networking.md
More information about the inbound security rules:
* **Application**. The application port range should be large enough to cover the endpoint requirement of your applications. This range should be exclusive from the dynamic port range on the machine, that is, the ephemeralPorts range as set in the configuration. Service Fabric uses these ports whenever new ports are required and takes care of opening the firewall for these ports on the nodes.
-* **SMB**. The SMB protocol is in use by the ImageStore service for two scenarios. This port is needed to download the packages from the ImageStore by the nodes as well as to replicate these between the replicas.
+* **SMB**. Optional; runtime version 7.1+ no longer uses SMB by default. The SMB protocol is used by the ImageStore service for two scenarios: this port is needed by the nodes to download packages from the ImageStore and to replicate them between replicas.
* **RDP**. Optional, if RDP is required from the Internet or VirtualNetwork for jumpbox scenarios.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
| | | 14.04 LTS | [9.37](https://support.microsoft.com/help/4582666/), [9.38](https://support.microsoft.com/help/4590304/), [9.39](https://support.microsoft.com/help/4597409/), [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533)| 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure | |||
-16.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.4.0-21-generic to 4.4.0-201-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-133-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1106-azure <br/> 4.4.0-203-generic, 4.15.0-136-generic, 4.15.0-1108-azure through 9.41 hot fix patch**|
+16.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.4.0-21-generic to 4.4.0-201-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-133-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1106-azure <br/> 4.4.0-203-generic, 4.4.0-204-generic, 4.4.0-206-generic, 4.15.0-136-generic, 4.15.0-137-generic, 4.15.0-139-generic, 4.15.0-1108-azure, 4.15.0-1109-azure, 4.15.0-1110-azure through 9.41 hot fix patch**|
16.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.4.0-21-generic to 4.4.0-197-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-128-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1102-azure </br> 4.15.0-132-generic, 4.4.0-200-generic, 4.15.0-1106-azure, 4.15.0-133-generic, 4.4.0-201-generic through 9.40 hot fix patch**| 16.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.4.0-21-generic to 4.4.0-194-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-123-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1098-azure </br> 4.4.0-197-generic, 4.15.0-126-generic, 4.15.0-128-generic, 4.15.0-1100-azure, 4.15.0-1102-azure through 9.39 hot fix patch**| 16.04 LTS | [9.38](https://support.microsoft.com/help/4590304/) | 4.4.0-21-generic to 4.4.0-190-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-118-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1096-azure </br> 4.4.0-193-generic, 4.15.0-120-generic, 4.15.0-122-generic, 4.15.0-1098-azure through 9.38 hot fix patch**| 16.04 LTS | [9.37](https://support.microsoft.com/help/4582666/) | 4.4.0-21-generic to 4.4.0-189-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 
4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-115-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1093-azure </br> 4.4.0-190-generic, 4.15.0-117-generic, 4.15.0-118-generic, 4.15.0-1095-azure, 4.15.0-1096-azure through 9.37 hot fix patch**| |||
-18.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.15.0-20-generic to 4.15.0-135-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-70-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 5.4.0-60-generic to 5.4.0-65-generic </br> 4.15.0-1009-azure to 4.15.0-1106-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1039-azure </br> 4.15.0-136-generic, 5.4.0-66-generic, 4.15.0-1108-azure, 5.4.0-1040-azure through 9.41 hot fix patch**|
+18.04 LTS | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.15.0-20-generic to 4.15.0-135-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-70-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 5.4.0-60-generic to 5.4.0-65-generic </br> 4.15.0-1009-azure to 4.15.0-1106-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1039-azure </br> 4.15.0-136-generic, 4.15.0-137-generic, 4.15.0-139-generic, 5.4.0-66-generic, 5.4.0-67-generic, 4.15.0-1108-azure, 5.4.0-1040-azure, 5.4.0-1041-azure, 4.15.0-1109-azure, 4.15.0-1110-azure through 9.41 hot fix patch**|
18.04 LTS | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.15.0-20-generic to 4.15.0-129-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-59-generic</br> 4.15.0-1009-azure to 4.15.0-1103-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1035-azure </br> 4.15.0-1104-azure, 4.15.0-130-generic, 4.15.0-132-generic, 5.4.0-1036-azure, 5.4.0-60-generic, 5.4.0-62-generic, 4.15.0-1106-azure, 4.15.0-134-generic, 4.15.0-135-generic, 5.4.0-1039-azure, 5.4.0-64-generic, 5.4.0-65-generic through 9.40 hot fix patch**| 18.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.15.0-20-generic to 4.15.0-123-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-53-generic</br> 4.15.0-1009-azure to 4.15.0-1099-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1031-azure </br> 4.15.0-124-generic, 5.4.0-54-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 4.15.0-1100-azure, 4.15.0-126-generic, 4.15.0-128-generic, 5.4.0-58-generic, 4.15.0-1102-azure, 5.4.0-1034-azure through 9.39 hot fix patch**| 18.04 LTS | [9.38](https://support.microsoft.com/help/4590304/) | 4.15.0-20-generic to 4.15.0-118-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-61-generic </br> 5.3.0-19-generic to 5.3.0-67-generic </br> 5.4.0-37-generic to 5.4.0-48-generic</br> 4.15.0-1009-azure to 4.15.0-1096-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 
5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1026-azure </br> 4.15.0-121-generic, 4.15.0-122-generic, 5.0.0-62-generic, 5.3.0-68-generic, 5.4.0-51-generic, 5.4.0-52-generic, 4.15.0-1099-azure, 5.4.0-1031-azure through 9.38 hot fix patch**| 18.04 LTS | [9.37](https://support.microsoft.com/help/4582666/) | 4.15.0-20-generic to 4.15.0-115-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-60-generic </br> 5.3.0-19-generic to 5.3.0-66-generic </br> 5.4.0-37-generic to 5.4.0-45-generic</br> 4.15.0-1009-azure to 4.15.0-1093-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1023-azure</br> 4.15.0-117-generic, 4.15.0-118-generic, 5.0.0-61-generic, 5.3.0-67-generic, 5.4.0-47-generic, 5.4.0-48-generic, 4.15.0-1095-azure, 4.15.0-1096-azure, 5.4.0-1025-azure, 5.4.0-1026-azure through 9.37 hot fix patch**| |||
-20.04 LTS |[9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533)| 5.4.0-26-generic to 5.4.0-65 </br> -generic 5.4.0-1010-azure to 5.4.0-1039-azure </br> 5.8.0-29-generic to 5.8.0-43-generic </br> 5.4.0-66-generic, 5.8.0-44-generic, 5.4.0-1040-azure through 9.41 hot fix patch**|
+20.04 LTS |[9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533)| 5.4.0-26-generic to 5.4.0-65 </br> -generic 5.4.0-1010-azure to 5.4.0-1039-azure </br> 5.8.0-29-generic to 5.8.0-43-generic </br> 5.4.0-66-generic, 5.4.0-67-generic, 5.8.0-44-generic, 5.8.0-45-generic, 5.4.0-1040-azure, 5.4.0-1041-azure through 9.41 hot fix patch**|
20.04 LTS |[9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a)| 5.4.0-26-generic to 5.4.0-59 </br> -generic 5.4.0-1010-azure to 5.4.0-1035-azure </br> 5.8.0-29-generic to 5.8.0-34-generic </br> 5.4.0-1036-azure, 5.4.0-60-generic, 5.4.0-62-generic, 5.8.0-36-generic, 5.8.0-38-generic, 5.4.0-1039-azure, 5.4.0-64-generic, 5.4.0-65-generic, 5.8.0-40-generic, 5.8.0-41-generic through 9.40 hot fix patch**| 20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53 </br> -generic 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic, 5.8.0-33-generic, 5.4.0-58-generic, 5.4.0-1034-azure through 9.39 hot fix patch**
Debian 7 | [9.37](https://support.microsoft.com/help/4582666/), [9.38](https://s
||| Debian 8 | [9.37](https://support.microsoft.com/help/4582666/), [9.38](https://support.microsoft.com/help/4590304/), [9.39](https://support.microsoft.com/help/4597409/), [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a), [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 | |||
-Debian 9.1 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.14-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.14-cloud-amd64
+Debian 9.1 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.14-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.14-cloud-amd64 </br> 4.9.0-15-amd64 through 9.41 hot fix patch**
Debian 9.1 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.13-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.13-cloud-amd64
Debian 9.1 | [9.39](https://support.microsoft.com/help/4597409/) | 4.9.0-1-amd64 to 4.9.0-14-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.12-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.12-cloud-amd64 </br> 4.19.0-0.bpo.13-amd64, 4.19.0-0.bpo.13-cloud-amd64 through 9.39 hot fix patch**</br>
Debian 9.1 | [9.38](https://support.microsoft.com/help/4590304/) | 4.9.0-1-amd64 to 4.9.0-13-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.11-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.11-cloud-amd64 </br> 4.9.0-14-amd64, 4.19.0-0.bpo.12-amd64, 4.19.0-0.bpo.12-cloud-amd64 through 9.38 hot fix patch**
Debian 9.1 | [9.37](https://support.microsoft.com/help/4582666/) | 4.9.0-3-amd64 to 4.9.0-13-amd64, 4.19.0-0.bpo.6-amd64 to 4.19.0-0.bpo.10-amd64, 4.19.0-0.bpo.6-cloud-amd64 to 4.19.0-0.bpo.10-cloud-amd64 |||
-Debian 10 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.19.0-5-amd64 to 4.19.0-14-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-14-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
+Debian 10 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 4.19.0-5-amd64 to 4.19.0-14-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-14-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 4.19.0-10-cloud-amd64 through 9.41 hot fix patch**
Debian 10 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 4.19.0-5-amd64 to 4.19.0-13-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-13-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64

#### Supported SUSE Linux Enterprise Server 12 kernel versions for Azure virtual machines

**Release** | **Mobility service version** | **Kernel version**
| | |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.44-azure |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.44-azure </br> 4.12.14-16.47-azure through 9.41 hot fix patch**|
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.38-azure </br> 4.12.14-16.41-azure through 9.40 hot fix patch**|
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.39](https://support.microsoft.com/help/4597409/) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.34-azure </br> 4.12.14-16.38-azure through 9.39 hot fix patch**|
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.38](https://support.microsoft.com/help/4590304/) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.28-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.37](https://support.
**Release** | **Mobility service version** | **Kernel version**
| | |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.35-azure
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.35-azure </br> 5.3.18-18.38-azure through 9.41 hot fix patch**
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.58-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.29-azure </br> 5.3.18-18.32-azure, 4.12.14-8.58-azure through 9.40 hot fix patch**
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.39](https://support.microsoft.com/help/4597409/) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.47-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.21-azure </br> 4.12.14-8.52-azure, 5.3.18-18.24-azure, 4.12.14-8.55-azure, 5.3.18-18.29-azure through 9.39 hot fix patch**
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.38](https://support.microsoft.com/help/4590304/) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.44-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.18-azure </br> 4.12.14-8.47-azure, 5.3.18-18.21-azure through 9.38 hot fix patch**
spring-cloud Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/connect-managed-identity-to-azure-sql.md
+
+ Title: Use Managed identity to connect Azure SQL to Azure Spring Cloud app
+description: Set up managed identity to connect Azure SQL to an Azure Spring Cloud app.
++++ Last updated : 03/25/2021+++
+# Use a managed identity to connect Azure SQL Database to an Azure Spring Cloud app
+
+**This article applies to:** ✔️ Java
+
+This article shows you how to create a managed identity for an Azure Spring Cloud app and use it to access Azure SQL Database.
+
+[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is the intelligent, scalable, relational database service built for the cloud. It's always up to date, with AI-powered and automated features that optimize performance and durability. Serverless compute and Hyperscale storage options automatically scale resources on demand, so you can focus on building new applications without worrying about storage size or resource management.
+
+## Prerequisites
+Complete the following tutorials before you begin:
+* Follow the [Spring Data JPA tutorial](https://docs.microsoft.com/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-sql-server) to provision an Azure SQL Database and get it working with a Java app locally
+* Follow the [Azure Spring Cloud system-assigned managed identity tutorial](https://docs.microsoft.com/azure/spring-cloud/spring-cloud-howto-enable-system-assigned-managed-identity) to provision an Azure Spring Cloud app with managed identity enabled
+
+## Grant permission to the Managed Identity
+Connect to your SQL server and run the following SQL statements:
+
+```sql
+CREATE USER [<MIName>] FROM EXTERNAL PROVIDER;
+ALTER ROLE db_datareader ADD MEMBER [<MIName>];
+ALTER ROLE db_datawriter ADD MEMBER [<MIName>];
+ALTER ROLE db_ddladmin ADD MEMBER [<MIName>];
+GO
+```
+
+The `<MIName>` value follows the pattern `<service instance name>/apps/<app name>`, for example `myspringcloud/apps/sqldemo`. You can also query the name with the Azure CLI:
+
+```azurecli
+az ad sp show --id <identity object ID> --query displayName
+```
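To make the naming rule concrete, here's a minimal shell sketch; `myspringcloud` and `sqldemo` are hypothetical placeholders for your own service instance and app names:

```shell
# Hypothetical names: replace with your own service instance and app.
SERVICE_NAME=myspringcloud
APP_NAME=sqldemo

# The managed identity name follows <service instance name>/apps/<app name>.
MI_NAME="${SERVICE_NAME}/apps/${APP_NAME}"
echo "$MI_NAME"   # myspringcloud/apps/sqldemo
```

The printed value is what you substitute for `<MIName>` in the SQL statements above.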
+
+## Configure your Java app to use Managed Identity
+Open the `src/main/resources/application.properties` file, and add `Authentication=ActiveDirectoryMSI;` at the end of the following line. Be sure to use the correct value for the `$AZ_DATABASE_NAME` variable.
+
+```properties
+spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI;
+```
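As a sanity check, the following shell sketch expands the placeholder into the final property value; `contososerver` is a hypothetical server name, so use the value you created in the Spring Data JPA tutorial:

```shell
# Hypothetical server name: use your own value from the JPA tutorial.
AZ_DATABASE_NAME=contososerver

# The fully expanded datasource URL, ending with the MSI authentication setting.
SPRING_DATASOURCE_URL="jdbc:sqlserver://${AZ_DATABASE_NAME}.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI;"
echo "$SPRING_DATASOURCE_URL"
```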
+
+## Build and deploy the app to Azure Spring Cloud
+Rebuild the app and deploy it to the Azure Spring Cloud app provisioned in the second bullet point under Prerequisites. You now have a Spring Boot application, authenticated by a managed identity, that uses JPA to store and retrieve data from an Azure SQL Database in Azure Spring Cloud.
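As a sketch of that deploy step, assuming the `az spring-cloud` CLI extension and the hypothetical `myspringcloud` service and `sqldemo` app names used earlier (the resource group and jar path below are placeholders to substitute):

```azurecli
mvn clean package -DskipTests
az spring-cloud app deploy \
    --resource-group <resource-group> \
    --service myspringcloud \
    --name sqldemo \
    --jar-path target/<your-app>.jar
```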
+
+## Next steps
+
+* [How to access Storage blob with managed identity in Azure Spring Cloud](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/managed-identity-storage-blob)
+* [How to enable system-assigned managed identity for Azure Spring Cloud application](./spring-cloud-howto-enable-system-assigned-managed-identity.md)
+* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [Authenticate Azure Spring Cloud with Key Vault in GitHub Actions](./spring-cloud-github-actions-key-vault.md)
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
+
+ Title: "Access Config Server and Service Registry"
+
+description: How to access Config Server and Service Registry Endpoints with Azure Active Directory role-based access control.
++++ Last updated : 02/04/2021+++
+# Access Config Server and Service Registry
+
+This article explains how to access the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud using Azure Active Directory (Azure AD) role-based access control (RBAC).
+
+## Assign role to Azure AD user/group, MSI, or service principal
+
+To use Azure AD and RBAC, you must assign the *Azure Spring Cloud Data Reader* role to a user, group, or service principal by using the following procedure:
+
+1. Go to the service overview page of your service instance.
+
+2. Click **Access Control (IAM)** to open the access control blade.
+
+3. Click the **Add** button and select **Add role assignments** (authorization may be required to add role assignments).
+
+4. Find and select *Azure Spring Cloud Data Reader* under **Role**.
+5. Assign access to **User, group, or service principal** or **User assigned managed identity**, according to the user type. Search for and select the user.
+6. Click **Save**.
+
+ ![assign-role](media/access-data-plane-aad-rbac/assign-data-reader-role.png)
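The same role assignment can also be scripted with the Azure CLI. This is a sketch with placeholder values for the principal, subscription, resource group, and service instance; it assumes the Azure Spring Cloud resource type `Microsoft.AppPlatform/Spring`:

```azurecli
az role assignment create \
    --role "Azure Spring Cloud Data Reader" \
    --assignee <principal-object-id> \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>"
```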
+
+## Access Config Server and Service Registry Endpoints
+
+After the Azure Spring Cloud Data Reader role is assigned, you can access the Spring Cloud Config Server and the Spring Cloud Service Registry endpoints. Use the following procedure:
+
+1. Get an access token. After an Azure AD user is assigned the Azure Spring Cloud Data Reader role, you can use the following commands to sign in to the Azure CLI as a user, service principal, or managed identity and get an access token. For details, see [Authenticate Azure CLI](https://docs.microsoft.com/cli/azure/authenticate-azure-cli).
+
+ ```azurecli
+ az login
+ az account get-access-token
+ ```
+2. Compose the endpoint. The default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud are supported. For more information, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints). You can also get the full list of supported endpoints by accessing:
+
+ * *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/*
+ * *https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/*
+
+3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization. Only the "GET" method is supported.
+
+ For example, access an endpoint like *https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health* to see the health status of eureka.
+
+   If the response is *401 Unauthorized*, check whether the role has been successfully assigned. It can take several minutes for the role assignment to take effect. Also verify that the access token hasn't expired.
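Putting the steps together, a minimal shell sketch that composes the Eureka health endpoint; the `myspringcloud` service name is a hypothetical placeholder:

```shell
# Hypothetical service instance name: replace with your own.
SERVICE_NAME=myspringcloud

# Compose the default Eureka health endpoint from step 2.
HEALTH_ENDPOINT="https://${SERVICE_NAME}.svc.azuremicroservices.io/eureka/actuator/health"
echo "$HEALTH_ENDPOINT"
```

With a token obtained from `az account get-access-token`, you would then send a GET request such as `curl -H "Authorization: Bearer <access-token>" "$HEALTH_ENDPOINT"`; remember that only the GET method is supported.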
+
+## Next steps
+* [Authenticate Azure CLI](https://docs.microsoft.com/cli/azure/authenticate-azure-cli)
+* [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints)
+
+## See also
+* [Create roles and permissions](how-to-permissions.md)
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/overview.md
Previously updated : 05/08/2020 Last updated : 04/01/2021
-# Customer intent: As a developer, I want to publish a website from a GitHub repository so that the app is publicly available on the web.
+# Customer intent: As a developer, I want to publish a website from a GitHub or Azure DevOps repository so that the app is publicly available on the web.
# What is Azure Static Web Apps Preview?
-Azure Static Web Apps is a service that automatically builds and deploys full stack web apps to Azure from a GitHub repository.
+Azure Static Web Apps is a service that automatically builds and deploys full stack web apps to Azure from a code repository.
-The workflow of Azure Static Web Apps is tailored to a developer's daily workflow. Apps are built and deployed based off GitHub interactions.
+The workflow of Azure Static Web Apps is tailored to a developer's daily workflow. Apps are built and deployed based off code changes.
-When you create an Azure Static Web Apps resource, Azure sets up a GitHub Actions workflow in the app's source code repository that monitors a branch of your choice. Every time you push commits or accept pull requests into the watched branch, the GitHub Action automatically builds and deploys your app and its API to Azure.
+When you create an Azure Static Web Apps resource, Azure interacts directly with GitHub or Azure DevOps to monitor a branch of your choice. Every time you push commits or accept pull requests into the watched branch, a build automatically runs and your app and API are deployed to Azure.
Static web apps are commonly built using libraries and frameworks like Angular, React, Svelte, Vue, or Blazor where server side rendering is not required. These apps include HTML, CSS, JavaScript, and image assets that make up the application. With a traditional web server, these assets are served from a single server alongside any required API endpoints.
With Static Web Apps, static assets are separated from a traditional web server
- **Web hosting** for static content like HTML, CSS, JavaScript, and images. - **Integrated API** support provided by Azure Functions.-- **First-class GitHub integration** where repository changes trigger builds and deployments.
+- **First-class GitHub and Azure DevOps integration** where repository changes trigger builds and deployments.
- **Globally distributed** static content, putting content closer to your users. - **Free SSL certificates**, which are automatically renewed. - **Custom domains** to provide branded customizations to your app.
storage Account Encryption Key Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/account-encryption-key-create.md
Previously updated : 02/05/2020 Last updated : 03/31/2021
Azure Storage encrypts all data in a storage account at rest. By default, Queue
This article describes how to create a storage account that relies on a key that is scoped to the account. When the account is first created, Microsoft uses the account key to encrypt the data in the account, and Microsoft manages the key. You can subsequently configure customer-managed keys for the account to take advantage of those benefits, including the ability to provide your own keys, update the key version, rotate the keys, and revoke access controls.
-## About the feature
-
-To create a storage account that relies on the account encryption key for Queue and Table storage, you must first register to use this feature with Azure. Due to limited capacity, be aware that it may take several months before requests for access are approved.
-
-You can create a storage account that relies on the account encryption key for Queue and Table storage in the following regions:
--- East US-- South Central US-- West US 2 -
-### Register to use the account encryption key
-
-To register to use the account encryption key with Queue or Table storage, use PowerShell or Azure CLI.
-
-# [PowerShell](#tab/powershell)
-
-To register with PowerShell, call the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
-
-```powershell
-Register-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName AllowAccountEncryptionKeyForQueues
-Register-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName AllowAccountEncryptionKeyForTables
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To register with Azure CLI, call the [az feature register](/cli/azure/feature#az-feature-register) command.
-
-```azurecli
-az feature register --namespace Microsoft.Storage \
- --name AllowAccountEncryptionKeyForQueues
-az feature register --namespace Microsoft.Storage \
- --name AllowAccountEncryptionKeyForTables
-```
-
-# [Template](#tab/template)
-
-N/A
---
-### Check the status of your registration
-
-To check the status of your registration for Queue or Table storage, use PowerShell or Azure CLI.
-
-# [PowerShell](#tab/powershell)
-
-To check the status of your registration with PowerShell, call the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
-
-```powershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName AllowAccountEncryptionKeyForQueues
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName AllowAccountEncryptionKeyForTables
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To check the status of your registration with Azure CLI, call the [az feature](/cli/azure/feature#az-feature-show) command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage \
- --name AllowAccountEncryptionKeyForQueues
-az feature show --namespace Microsoft.Storage \
- --name AllowAccountEncryptionKeyForTables
-```
-
-# [Template](#tab/template)
-
-N/A
---
-### Re-register the Azure Storage resource provider
-
-After your registration is approved, you must re-register the Azure Storage resource provider. Use PowerShell or Azure CLI to re-register the resource provider.
-
-# [PowerShell](#tab/powershell)
-
-To re-register the resource provider with PowerShell, call the [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) command.
-
-```powershell
-Register-AzResourceProvider -ProviderNamespace 'Microsoft.Storage'
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To re-register the resource provider with Azure CLI, call the [az provider register](/cli/azure/provider#az-provider-register) command.
-
-```azurecli
-az provider register --namespace 'Microsoft.Storage'
-```
-
-# [Template](#tab/template)
-
-N/A
--- ## Create an account that uses the account encryption key You must configure a new storage account to use the account encryption key for queues and tables at the time that you create the storage account. The scope of the encryption key cannot be changed after the account is created.
N/A
+## Pricing and billing
+
+A storage account that is created to use an encryption key scoped to the account is billed for Table storage capacity and transactions at a different rate than an account that uses the default service-scoped key. For details, see [Azure Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables/).
+ ## Next steps - [Azure Storage encryption for data at rest](storage-service-encryption.md)
storage Storage Sync Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-planning.md
If cloud tiering is enabled, solutions that directly back up the server endpoint
If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores will not replace newer file versions in the Azure file share or other server endpoints. > [!WARNING]
-> Robocopy /B switch is not supported with Azure File Sync. Using the Robocopy /B switch with an Azure File Sync server endpoint as the source may lead to file corruption.
+> If you need to use Robocopy /B with an Azure File Sync agent running on either the source or target server, upgrade to Azure File Sync agent version v12.0 or later. Using Robocopy /B with agent versions earlier than v12.0 will lead to the corruption of tiered files during the copy.
> [!Note] > Bare-metal (BMR) restore can cause unexpected results and is not currently supported.
If you prefer to use an on-premises backup solution, backups should be performed
* [Planning for an Azure Files deployment](storage-files-planning.md) * [Deploy Azure Files](./storage-how-to-create-file-share.md) * [Deploy Azure File Sync](storage-sync-files-deployment-guide.md)
-* [Monitor Azure File Sync](storage-sync-files-monitoring.md)
+* [Monitor Azure File Sync](storage-sync-files-monitoring.md)
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-create-workspace.md
To complete this tutorial's steps, you need to have access to a resource group f
## Basics tab > Project Details Fill in the following fields:
- 1. **Subscription** - Pick any subscription.
- 1. **Resource group** - Use any resource group.
- 1. **Resource group** - Leave this blank.
+
+1. **Subscription** - Pick any subscription.
+1. **Resource group** - Use any resource group.
+1. **Managed resource group** - Leave this blank.
## Basics tab > Workspace details Fill in the following fields:
- 1. **Workspace name** - Pick any globally unique name. In this tutorial, we'll use **myworkspace**.
- 1. **Region** - Pick any region.
- 1. **Select Data Lake Storage Gen 2**
- 1. Click the button for **From subscription**.
- 1. By **Account name**, click **Create New** and name the new storage account **contosolake** or similar as this name must be unique.
- 1. By **File system name**, click **Create New** and name it **users**. This will create a storage container called **users**. The workspace will use this storage account as the "primary" storage account to Spark tables and Spark application logs.
- 1. Check the "Assign myself the Storage Blob Data Contributor role on the Data Lake Storage Gen2 account" box.
+
+1. **Workspace name** - Pick any globally unique name. In this tutorial, we'll use **myworkspace**.
+1. **Region** - Pick any region.
+1. **Select Data Lake Storage Gen 2**
+1. Click the button for **From subscription**.
+1. By **Account name**, click **Create New** and name the new storage account **contosolake** or similar as this name must be unique.
+1. By **File system name**, click **Create New** and name it **users**. This will create a storage container called **users**. The workspace will use this storage account as the "primary" storage account to store Spark tables and Spark application logs.
+1. Check the "Assign myself the Storage Blob Data Contributor role on the Data Lake Storage Gen2 account" box.
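If you prefer scripting, the portal fields above roughly map to an Azure CLI call. This is a sketch rather than the tutorial's method; it assumes the `az synapse workspace create` command, and the resource group, SQL admin credentials, and region are placeholders to fill in:

```azurecli
az synapse workspace create \
    --name myworkspace \
    --resource-group <resource-group> \
    --storage-account contosolake \
    --file-system users \
    --sql-admin-login-user <admin-user> \
    --sql-admin-login-password <admin-password> \
    --location <region>
```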
## Completing the process Select **Review + create** > **Create**. Your workspace is ready in a few minutes.
After your Azure Synapse workspace is created, you have two ways to open Synapse
* Open your Synapse workspace in the [Azure portal](https://portal.azure.com). In the **Overview** section of the Synapse workspace, select **Open** in the Open Synapse Studio box.
* Go to `https://web.azuresynapse.net` and sign in to your workspace.

## Next steps

> [!div class="nextstepaction"]
synapse-analytics Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-what-is.md
Azure Synapse provides a single way for enterprises to manage analytics resource
* Industry-leading productivity for writing SQL or Spark code: authoring, debugging, and performance optimization * Integrate with enterprise CI/CD process
-## Engage with the Synapse engineering team
+## Engage with the Synapse community
+- [Microsoft Q&A](/answers/topics/azure-synapse-analytics.html): Ask technical questions.
- [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-synapse): Ask development questions.
-- [Microsoft Q&A question page](/answers/topics/azure-synapse-analytics.html): Ask technical questions.

## Next steps
virtual-desktop Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-advisor-recommendations.md
Title: Azure Advisor Windows Virtual Desktop Walkthrough - Azure
description: How to resolve Azure Advisor recommendations for Windows Virtual Desktop. Previously updated : 08/28/2020 Last updated : 03/31/2021
You need to unblock specific URLs to make sure that your virtual machine (VM) fu
To solve this recommendation, make sure you unblock all the URLs on the [Safe URL list](safe-url-list.md). You can use Service Tag or FQDN tags to unblock URLs, too.
-## Propose new recommendations
-
-You can help us improve Azure Advisor by submitting ideas for recommendations. Your recommendation could help another user out of a tough spot. To submit a suggestion, go to [our UserVoice forum](https://windowsvirtualdesktop.uservoice.com/forums/930847-azure-advisor-recommendations) and fill out the submission form. When you fill out the form, make sure to give us as much detail as possible.
- ## Next steps If you're looking for more in-depth guides about how to resolve common issues, check out [Troubleshooting overview, feedback, and support for Windows Virtual Desktop](troubleshoot-set-up-overview.md).
virtual-desktop Azure Advisor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-advisor.md
Title: Integrate Windows Virtual Desktop with Azure Advisor - Azure
description: How to use Azure Advisor with your Windows Virtual Desktop deployment. Previously updated : 08/28/2020 Last updated : 03/31/2021
When you select a category, you'll go to its active recommendations page. On thi
## Next steps To learn how to resolve recommendations, see [How to resolve Azure Advisor recommendations](azure-advisor-recommendations.md).-
-If you have suggestions for new recommendations, post it on our [Azure Advisor User Voice forum](https://windowsvirtualdesktop.uservoice.com/forums/930847-azure-advisor-recommendations).
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/language-packs.md
After a user changes their language settings, they'll need to sign out of their
If you're curious about known issues for language packs, see [Adding language packs in Windows 10, version 1803 and later versions: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue).
-If you have any other questions about Windows 10 Enterprise multi-session, check out our [FAQ](windows-10-multisession-faq.md).
+If you have any other questions about Windows 10 Enterprise multi-session, check out our [FAQ](windows-10-multisession-faq.yml).
virtual-desktop Linux Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/linux-overview.md
Title: Windows Virtual Desktop Thin Client Support - Azure
description: A brief overview of thin client support for Windows Virtual Desktop. Previously updated : 01/23/2020 Last updated : 03/31/2021 # Linux support
-You can access Windows Virtual Desktop resources from your Linux devices with the [web client](connect-web.md) or the following supported clients, provided by our Linux thin client partners. We're working with a number of partners to enable supported Windows Virtual Desktop clients on more Linux-based operating systems and devices. If you need Windows Virtual Desktop support on a Linux platform that isn't listed here, let us know on our [UserVoice page](https://remotedesktop.uservoice.com/forums/923035-remote-desktop-support-on-linux).
+You can access Windows Virtual Desktop resources from your Linux devices with the [web client](connect-web.md) or the following supported clients, provided by our Linux thin client partners. We're working with a number of partners to enable supported Windows Virtual Desktop clients on more Linux-based operating systems and devices.
## Connect with your Linux device
virtual-desktop Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/network-connectivity.md
Client connection sequence described below:
## Connection security
-TLS 1.2 is used for all connections initiated from the clients and session hosts to the Windows Virtual Desktop infrastructure components. Windows Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/front-door-faq.md#what-are-the-current-cipher-suites-supported-by-azure-front-door). It's important to make sure both client computers and session hosts can use these ciphers.
+TLS 1.2 is used for all connections initiated from the clients and session hosts to the Windows Virtual Desktop infrastructure components. Windows Virtual Desktop uses the same TLS 1.2 ciphers as [Azure Front Door](../frontdoor/front-door-faq.yml#what-are-the-current-cipher-suites-supported-by-azure-front-door-). It's important to make sure both client computers and session hosts can use these ciphers.
For reverse connect transport, both client and session host connect to the Windows Virtual Desktop gateway. After establishing the TCP connection, the client or session host validates the Windows Virtual Desktop gateway's certificate. After establishing the base transport, RDP establishes a nested TLS connection between client and session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the OS during the deployment. If desired, customers may deploy centrally managed certificates issued by the enterprise certification authority. For more information about configuring certificates, see [Windows Server documentation](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
virtual-desktop Teams On Wvd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/teams-on-wvd.md
Title: Microsoft Teams on Windows Virtual Desktop - Azure
description: How to use Microsoft Teams on Windows Virtual Desktop. Previously updated : 11/10/2020 Last updated : 03/31/2021
Using