Updates from: 06/05/2021 03:07:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
Previously updated : 03/17/2021 Last updated : 06/04/2021
zone_pivot_groups: b2c-policy-type
::: zone pivot="b2c-custom-policy"
+> [!IMPORTANT]
+> Starting May 2021, GitHub announced a change that impacts your Azure AD B2C custom policy federation. Due to the change, add `<Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>` metadata to your GitHub technical profile. For more information, see [Deprecating API authentication through query parameters](https://developer.github.com/changes/2020-02-10-deprecating-auth-through-query-param/).
::: zone-end
You can define a GitHub account as a claims provider by adding it to the **Claim
<Item Key="HttpBinding">GET</Item> <Item Key="scope">read:user user:email</Item> <Item Key="UsePolicyInRedirectUri">0</Item>
+ <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
<Item Key="UserAgentForClaimsExchange">CPIM-Basic/{tenant}/{policy}</Item> <!-- Update the Client ID below to the Application ID --> <Item Key="client_id">Your GitHub application ID</Item>
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
+
+ Title: Configure provisioning using Microsoft Graph APIs
+description: Learn how to save time by using the Microsoft Graph APIs to automate the configuration of automatic provisioning.
+Last updated : 06/03/2021
+# Configure provisioning using Microsoft Graph APIs
+
+The Azure portal is a convenient way to configure provisioning for individual apps one at a time. But if you're creating several, or even hundreds, of instances of an application, it can be easier to automate app creation and configuration with the Microsoft Graph APIs. This article outlines how to automate provisioning configuration through APIs. This method is commonly used for applications like [Amazon Web Services](/azure/active-directory/saas-apps/amazon-web-service-tutorial#configure-azure-ad-sso).
+
+**Overview of steps for using Microsoft Graph APIs to automate provisioning configuration**
++
+|Step |Details |
+|--|--|
+|[Step 1. Create the gallery application](#step-1-create-the-gallery-application) |Sign in to the API client <br> Retrieve the gallery application template <br> Create the gallery application |
+|[Step 2. Create provisioning job based on template](#step-2-create-the-provisioning-job-based-on-the-template) |Retrieve the template for the provisioning connector <br> Create the provisioning job |
+|[Step 3. Authorize access](#step-3-authorize-access) |Test the connection to the application <br> Save the credentials |
+|[Step 4. Start provisioning job](#step-4-start-the-provisioning-job) |Start the job |
+|[Step 5. Monitor provisioning](#step-5-monitor-provisioning) |Check the status of the provisioning job <br> Retrieve the provisioning logs |
+
+## Step 1: Create the gallery application
+
+### Sign in to Microsoft Graph Explorer (recommended), Postman, or any other API client you use
+
+1. Start [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+1. Select the "Sign-In with Microsoft" button and sign in using Azure AD global administrator or App Admin credentials.
+1. Upon successful sign-in, you'll see the user account details in the left-hand pane.
+
+### Retrieve the gallery application template identifier
+Applications in the Azure AD application gallery each have an [application template](/graph/api/applicationtemplate-list?tabs=http&view=graph-rest-beta) that describes the metadata for that application. Using this template, you can create an instance of the application and service principal in your tenant for management.
+
+#### Request
+
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/applicationTemplates
+```
+#### Response
+
+<!-- {
+ "blockType": "response",
+ "truncated": true,
+ "@odata.type": "microsoft.graph.applicationTemplate",
+ "isCollection": true
+} -->
+
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+{
+ "value": [
+ {
+ "id": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
+ "displayName": "Amazon Web Services (AWS)",
+ "homePageUrl": "http://aws.amazon.com/",
+ "supportedSingleSignOnModes": [
+ "password",
+ "saml",
+ "external"
+ ],
+ "supportedProvisioningTypes": [
+ "sync"
+ ],
+ "logoUrl": "https://az495088.vo.msecnd.net/app-logo/aws_215.png",
+ "categories": [
+ "developerServices"
+ ],
+ "publisher": "Amazon",
+ "description": null
+
+}
+```
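+
+Because this request returns every template in the gallery, it can help to narrow the result with an OData filter. The following is a sketch that assumes the gallery display name matches exactly; list operations on `applicationTemplates` support OData query parameters such as `$filter`.
+
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/applicationTemplates?$filter=displayName eq 'Amazon Web Services (AWS)'
+```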
+
+### Create the gallery application
+
+Use the template ID retrieved for your application in the last step to [create an instance](/graph/api/applicationtemplate-instantiate?tabs=http&view=graph-rest-beta) of the application and service principal in your tenant.
+
+#### Request
++
+```msgraph-interactive
+POST https://graph.microsoft.com/beta/applicationTemplates/{id}/instantiate
+Content-type: application/json
+{
+ "displayName": "AWS Contoso"
+}
+```
+
+#### Response
+
+```http
+HTTP/1.1 201 Created
+Content-type: application/json
+{
+ "application": {
+ "objectId": "cbc071a6-0fa5-4859-8g55-e983ef63df63",
+ "appId": "92653dd4-aa3a-3323-80cf-e8cfefcc8d5d",
+ "applicationTemplateId": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
+ "displayName": "AWS Contoso",
+ "homepage": "https://signin.aws.amazon.com/saml?metadata=aws|ISV9.1|primary|z",
+ "replyUrls": [
+ "https://signin.aws.amazon.com/saml"
+ ],
+ "logoutUrl": null,
+ "samlMetadataUrl": null,
+ },
+ "servicePrincipal": {
+ "objectId": "f47a6776-bca7-4f2e-bc6c-eec59d058e3e",
+ "appDisplayName": "AWS Contoso",
+ "applicationTemplateId": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
+ "appRoleAssignmentRequired": true,
+ "displayName": "My custom name",
+ "homepage": "https://signin.aws.amazon.com/saml?metadata=aws|ISV9.1|primary|z",
+ "replyUrls": [
+ "https://signin.aws.amazon.com/saml"
+ ],
+ "servicePrincipalNames": [
+ "93653dd4-aa3a-4323-80cf-e8cfefcc8d7d"
+ ],
+ "tags": [
+ "WindowsAzureActiveDirectoryIntegratedApp"
+ ],
+ }
+}
+```
+
+## Step 2: Create the provisioning job based on the template
+
+### Retrieve the template for the provisioning connector
+
+Applications in the gallery that are enabled for provisioning have templates to streamline configuration. Use the request below to [retrieve the template for the provisioning configuration](/graph/api/synchronization-synchronizationtemplate-list?tabs=http&view=graph-rest-beta). Note that you'll need to provide the ID of the preceding resource, which in this case is the servicePrincipal resource.
+
+#### Request
+
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/templates
+```
+
+#### Response
+```http
+HTTP/1.1 200 OK
+{
+ "value": [
+ {
+ "id": "aws",
+ "factoryTag": "aws",
+ "schema": {
+ "directories": [],
+ "synchronizationRules": []
+ }
+ }
+ ]
+}
+```
+
+### Create the provisioning job
+To enable provisioning, you'll first need to [create a job](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta). Use the following request to create a provisioning job. Use the templateId from the previous step when specifying the template to be used for the job.
+
+#### Request
+
+```msgraph-interactive
+POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs
+Content-type: application/json
+{
+ "templateId": "aws"
+}
+```
+
+#### Response
+```http
+HTTP/1.1 201 Created
+Content-type: application/json
+{
+ "id": "{jobId}",
+ "templateId": "aws",
+ "schedule": {
+ "expiration": null,
+ "interval": "P10675199DT2H48M5.4775807S",
+ "state": "Disabled"
+ },
+ "status": {
+ "countSuccessiveCompleteFailures": 0,
+ "escrowsPruned": false,
+ "synchronizedEntryCountByType": [],
+ "code": "NotConfigured",
+ "lastExecution": null,
+ "lastSuccessfulExecution": null,
+ "lastSuccessfulExecutionWithExports": null,
+ "steadyStateFirstAchievedTime": "0001-01-01T00:00:00Z",
+ "steadyStateLastAchievedTime": "0001-01-01T00:00:00Z",
+ "quarantine": null,
+ "troubleshootingUrl": null
+ }
+}
+```
+
+## Step 3: Authorize access
+
+### Test the connection to the application
+
+Test the connection with the third-party application. The following example is for an application that requires a client secret and a secret token, but each application has its own requirements. Many applications, such as Azure Databricks, use a BaseAddress (referred to as a tenant URL in the Azure portal) in place of a client secret, along with a SecretToken. To determine which credentials your app requires, go to the provisioning configuration page for your application and, in developer mode, click **test connection**; the network traffic will show the parameters used for credentials. For a full list of credentials, see [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta).
+
+#### Request
+```msgraph-interactive
+POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{id}/validateCredentials
+{
+ "credentials": [
+ { "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx" },
+ { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
+ ]
+}
+```
+#### Response
+```http
+HTTP/1.1 204 No Content
+```
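+
+For an application that instead takes a BaseAddress and a SecretToken, the request body would carry those keys. The following is a sketch with placeholder values; the exact keys and tenant URL for your application are the ones revealed by the **test connection** network traffic described above.
+
+```msgraph-interactive
+POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{id}/validateCredentials
+{
+    "credentials": [
+        { "key": "BaseAddress", "value": "https://your-tenant-url.example.com/scim" },
+        { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
+    ]
+}
+```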
+
+### Save your credentials
+
+Configuring provisioning requires establishing a trust between Azure AD and the application. Authorize access to the third-party application. The following example is for an application that requires a client secret and a secret token. Each application has its own requirements. Review the [API documentation](/graph/api/synchronization-synchronizationjob-validatecredentials?tabs=http&view=graph-rest-beta) to see the available options.
+
+#### Request
+```msgraph-interactive
+PUT https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/secrets
+
+{
+ "value": [
+ { "key": "ClientSecret", "value": "xxxxxxxxxxxxxxxxxxxxx" },
+ { "key": "SecretToken", "value": "xxxxxxxxxxxxxxxxxxxxx" }
+ ]
+}
+```
+
+#### Response
+```http
+HTTP/1.1 204 No Content
+```
+
+## Step 4: Start the provisioning job
+Now that the provisioning job is configured, use the following command to [start the job](/graph/api/synchronization-synchronizationjob-start?tabs=http&view=graph-rest-beta).
++
+#### Request
+
+```http
+POST https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{jobId}/start
+```
++
+#### Response
+```http
+HTTP/1.1 204 No Content
+```
++
+## Step 5: Monitor provisioning
+
+### Monitor the provisioning job status
+
+Now that the provisioning job is running, use the following command to track the progress of the current provisioning cycle, as well as statistics to date, such as the number of users and groups that have been created in the target system.
+
+#### Request
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{jobId}/
+```
+
+#### Response
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+Content-length: 2577
+{
+ "id": "{jobId}",
+ "templateId": "aws",
+ "schedule": {
+ "expiration": null,
+ "interval": "P10675199DT2H48M5.4775807S",
+ "state": "Disabled"
+ },
+ "status": {
+ "countSuccessiveCompleteFailures": 0,
+ "escrowsPruned": false,
+ "synchronizedEntryCountByType": [],
+ "code": "Paused",
+ "lastExecution": null,
+ "lastSuccessfulExecution": null,
+ "progress": [],
+ "lastSuccessfulExecutionWithExports": null,
+ "steadyStateFirstAchievedTime": "0001-01-01T00:00:00Z",
+ "steadyStateLastAchievedTime": "0001-01-01T00:00:00Z",
+ "quarantine": null,
+ "troubleshootingUrl": null
+ },
+ "synchronizationJobSettings": [
+ {
+ "name": "QuarantineTooManyDeletesThreshold",
+ "value": "500"
+ }
+ ]
+}
+```
++
+### Monitor provisioning events using the provisioning logs
+In addition to monitoring the status of the provisioning job, you can use the [provisioning logs](/graph/api/provisioningobjectsummary-list?tabs=http&view=graph-rest-beta) to query for all the events that are occurring. For example, query for a particular user and determine if they were successfully provisioned.
+
+#### Request
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/auditLogs/provisioning
+```
+#### Response
+```http
+HTTP/1.1 200 OK
+Content-type: application/json
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#auditLogs/provisioning",
+ "value": [
+ {
+ "id": "gc532ff9-r265-ec76-861e-42e2970a8218",
+ "activityDateTime": "2019-06-24T20:53:08Z",
+ "tenantId": "7928d5b5-7442-4a97-ne2d-66f9j9972ecn",
+ "jobId": "BoxOutDelta.7928d5b574424a97ne2d66f9j9972ecn",
+ "cycleId": "44576n58-v14b-70fj-8404-3d22tt46ed93",
+ "changeId": "eaad2f8b-e6e3-409b-83bd-e4e2e57177d5",
+ "action": "Create",
+ "durationInMilliseconds": 2785,
+ "sourceSystem": {
+ "id": "0404601d-a9c0-4ec7-bbcd-02660120d8c9",
+ "displayName": "Azure Active Directory",
+ "details": {}
+ },
+ "targetSystem": {
+ "id": "cd22f60b-5f2d-1adg-adb4-76ef31db996b",
+ "displayName": "Box",
+ "details": {
+ "ApplicationId": "f2764360-e0ec-5676-711e-cd6fc0d4dd61",
+ "ServicePrincipalId": "chc46a42-966b-47d7-9774-576b1c8bd0b8",
+ "ServicePrincipalDisplayName": "Box"
+ }
+ },
+ "initiatedBy": {
+ "id": "",
+ "displayName": "Azure AD Provisioning Service",
+ "initiatorType": "system"
+ },
+ "sourceIdentity": {
+ "id": "5e6c9rae-ab4d-5239-8ad0-174391d110eb",
+ "displayName": "Self-service Pilot",
+ "identityType": "Group",
+ "details": {}
+ },
+ "targetIdentity": {
+ "id": "",
+ "displayName": "",
+ "identityType": "Group",
+ "details": {}
+ },
+ "statusInfo": {
+ "@odata.type": "#microsoft.graph.statusDetails",
+ "status": "failure",
+ "errorCode": "BoxEntryConflict",
+ "reason": "Message: Box returned an error response with the HTTP status code 409. This response indicates that a user or a group already exisits with the same name. This can be avoided by identifying and removing the conflicting user from Box via the Box administrative user interface, or removing the current user from the scope of provisioning either by removing their assignment to the Box application in Azure Active Directory or adding a scoping filter to exclude the user.",
+ "additionalDetails": null,
+ "errorCategory": "NonServiceFailure",
+ "recommendedAction": null
+ },
+ "provisioningSteps": [
+ {
+ "name": "EntryImportAdd",
+ "provisioningStepType": "import",
+ "status": "success",
+ "description": "Received Group 'Self-service Pilot' change of type (Add) from Azure Active Directory",
+ "details": {}
+ },
+ {
+ "name": "EntrySynchronizationAdd",
+ "provisioningStepType": "matching",
+ "status": "success",
+ "description": "Group 'Self-service Pilot' will be created in Box (Group is active and assigned in Azure Active Directory, but no matching Group was found in Box)",
+ "details": {}
+ },
+ {
+ "name": "EntryExportAdd",
+ "provisioningStepType": "export",
+ "status": "failure",
+ "description": "Failed to create Group 'Self-service Pilot' in Box",
+ "details": {
+ "ReportableIdentifier": "Self-service Pilot"
+ }
+ }
+ ],
+ "modifiedProperties": [
+ {
+ "displayName": "objectId",
+ "oldValue": null,
+ "newValue": "5e0c9eae-ad3d-4139-5ad0-174391d110eb"
+ },
+ {
+ "displayName": "displayName",
+ "oldValue": null,
+ "newValue": "Self-service Pilot"
+ },
+ {
+ "displayName": "mailEnabled",
+ "oldValue": null,
+ "newValue": "False"
+ },
+ {
+ "displayName": "mailNickname",
+ "oldValue": null,
+ "newValue": "5ce25n9a-4c5f-45c9-8362-ef3da29c66c5"
+ },
+ {
+ "displayName": "securityEnabled",
+ "oldValue": null,
+ "newValue": "True"
+ },
+ {
+ "displayName": "Name",
+ "oldValue": null,
+ "newValue": "Self-service Pilot"
+ }
+ ]
+ }
+ ]
+}
+```
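+
+To look for a particular user or group, you can add an OData filter to the request. The following is a sketch that assumes `sourceIdentity/displayName` is a filterable property in your tenant; it reuses the group name from the example response above.
+
+```msgraph-interactive
+GET https://graph.microsoft.com/beta/auditLogs/provisioning?$filter=sourceIdentity/displayName eq 'Self-service Pilot'
+```
+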
+## See also
+
+- [Review the synchronization Microsoft Graph documentation](/graph/api/resources/synchronization-overview?view=graph-rest-beta)
+- [Integrating a custom SCIM app with Azure AD](/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups)
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
The Item function returns one item from a multi-valued string/attribute.
| **index** |Required |Integer | Index to an item in the multi-valued string| **Example:**
-`Item([proxyAddresses], 1)` returns the second item in the multi-valued attribute.
+`Item([proxyAddresses], 1)` returns the first item in the multi-valued attribute. Index 0 should not be used.
### Join
active-directory On Premises Ecma Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-configure.md
Previously updated : 05/28/2021 Last updated : 06/04/2021
The following sections will guide you through establishing connectivity with the
2. In the portal, navigate to Azure Active Directory, **Enterprise Applications**. 3. Click on **New Application**. ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
-4. Locate your application and click **Create**.
+4. Locate the "On-premises provisioning" application from the gallery and click **Create**.
### Configure the application and test 1. Once it has been created, click the **Provisioning** page.
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Depending on the number of Active Directory domains involved in the inbound user
This is the most common deployment topology. Use this topology if you need to provision all users from Cloud HR to a single AD domain and the same provisioning rules apply to all users. **Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover.
This is the most common deployment topology. Use this topology, if you need to p
This topology supports business requirements where attribute mapping and provisioning logic differ based on user type (employee/contractor), user location, or the user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning on a division or country basis. **Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover.
+* Create an HR2AD provisioning app for each distinct user set that you want to provision.
* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app. * Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
This topology supports business requirements where attribute mapping and provisi
Use this topology to manage multiple independent child AD domains belonging to the same forest. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary. For example, in the diagram below, *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region. **Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover. * Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register all child AD domains with your Azure AD tenant.
+* Create a separate HR2AD provisioning app for each target domain.
* When configuring the provisioning app, select the respective child AD domain from the dropdown of available AD domains. * Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app. * Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
Use this topology to manage multiple independent child AD domains belonging to t
Use this topology to manage multiple child AD domains with cross-domain visibility for resolving cross-domain manager references and checking for forest-wide uniqueness when generating values for attributes like *userPrincipalName*, *samAccountName*, and *mail*. **Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover. * Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent. * Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
+* Create a separate HR2AD provisioning app for each target domain.
* When configuring each provisioning app, select the parent AD domain from the dropdown of available AD domains. * Use *parentDistinguishedName* with expression mapping to dynamically create user in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment). * Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
Use this topology to manage multiple child AD domains with cross-domain visibili
Use this topology if you want to use a single provisioning app to manage users belonging to all your child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there is no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness checks. **Salient configuration aspects** * Set up two provisioning agent nodes for high availability and failover. * Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent. * Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
+* Create a single HR2AD provisioning app for the entire forest.
* When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains. * Use *parentDistinguishedName* with expression mapping to dynamically create user in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment). * If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
Use this topology if you want to use a single provisioning app to manage users b
Use this topology if your IT infrastructure has disconnected/disjoint AD forests and you need to provision users to different forests based on business affiliation. For example, users working for subsidiary *Contoso* need to be provisioned into the *contoso.com* domain, while users working for subsidiary *Fabrikam* need to be provisioned into the *fabrikam.com* domain. **Salient configuration aspects** * Set up two different sets of provisioning agents for high availability and failover, one for each forest.
+* Create two different provisioning apps, one for each forest.
+* If you need to resolve cross domain references within the forest, enable [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
+* Create a separate HR2AD provisioning app for each disconnected forest.
* When configuring each provisioning app, select the appropriate parent AD domain from the dropdown of available AD domain names. * Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+### Deployment topology 7: Separate apps to provision distinct users from multiple Cloud HR to disconnected on-premises Active Directory forests
+
+In large organizations, it is not uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
++
+**Salient configuration aspects**
+* Set up two different sets of provisioning agents for high availability and failover, one for each forest.
+* If you need to resolve cross domain references within the forest, enable [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
+* Create a separate HR2AD provisioning app for each HR system and on-premises Active Directory combination.
+* When configuring each provisioning app, select the appropriate parent AD domain from the dropdown of available AD domain names.
+* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
## Plan scoping filters and attribute mapping
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Previously updated : 03/17/2021 Last updated : 06/04/2021
This tutorial covers how to set up and use the generic SQL connector with the Azu
![Architecture](.\media\tutorial-ecma-sql-connector\sql-1.png) -- This tutorial uses two virtual machines. One is the domain controller (DC1.contoso.com) and the second is an application server (APP1.contoso.com). - SQL Server 2019 and SQL Server Management Studio are installed on APP1. - Both VMs have connectivity to the internet. - SQL Server Agent has been started
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 | | Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact | | Excelsecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
-| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://ftsafe.us/pages/microsoft |
+| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://shop.ftsafe.us/pages/microsoft |
| Gemalto (Thales Group) | ![n] | ![y]| ![y]| ![n]| ![n] | https://safenet.gemalto.com/access-management/authenticators/fido-devices | | GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | | HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us |
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-data-residency.md
Previously updated : 03/16/2021 Last updated : 06/03/2021
Azure Active Directory (Azure AD) stores customer data in a geographical location based on the address an organization provides when subscribing to a Microsoft online service such as Microsoft 365 or Azure. For information on where your customer data is stored, see [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) in the Microsoft Trust Center.
-Cloud-based Azure AD multifactor authentication and Azure Multifactor Authentication Server process and store personal data and organizational data. This article outlines what and where data is stored.
+Cloud-based Azure AD multifactor authentication and MFA Server process and store personal data and organizational data. This article outlines what and where data is stored.
The Azure AD multifactor authentication service has datacenters in the United States, Europe, and Asia Pacific. The following activities originate from the regional datacenters except where noted:
-* Multifactor authentication phone calls originate from United States datacenters and are routed by global providers.
+* Multifactor authentication phone calls originate from datacenters in the customer's region and are routed by global providers. Phone calls using custom greetings always originate from datacenters in the United States.
* General purpose user authentication requests from other regions are currently processed based on the user's location. * Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service, might be outside the user's location.
For Microsoft Azure Government, Microsoft Azure Germany, Microsoft Azure operate
| Voice call | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported) | | Microsoft Authenticator notification | Multifactor authentication logs<br/>Multifactor authentication activity report data store<br/>Blocked users (if fraud was reported)<br/>Change requests when the Microsoft Authenticator device token changes |
-### Data stored by Azure Multifactor Authentication Server
+### Data stored by MFA Server
-If you use Azure Multifactor Authentication Server, the following personal data is stored.
+If you use MFA Server, the following personal data is stored.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers Multifactor Authentication Server for new deployments. New customers who want to require multifactor authentication from their users should use cloud-based Azure AD multifactor authentication. Existing customers who activated Multifactor Authentication Server before July 1, 2019, can download the latest version and updates, and generate activation credentials as usual.
+> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers who want to require multifactor authentication from their users should use cloud-based Azure AD multifactor authentication. Existing customers who activated Multifactor Authentication Server before July 1, 2019, can download the latest version and updates, and generate activation credentials as usual.
| Event type | Data store type | |--|--|
Organizational data is tenant-level information that can expose configuration or
* Notifications * Phone call settings
-For Azure Multifactor Authentication Server, the following Azure portal pages might contain organizational data:
+For MFA Server, the following Azure portal pages might contain organizational data:
* Server settings * One-time bypass * Caching rules * Multifactor Authentication Server status
-## Multifactor authentication logs location
+## Multifactor authentication activity reports for public cloud
-The following table shows the location for service logs for public clouds.
+Multifactor authentication activity reports store activity from on-premises components: NPS Extension, AD FS adapter, and MFA Server.
+The multifactor authentication service logs are used to operate the service.
+The following sections show where activity reports and services logs are stored for specific authentication methods for each component in different customer regions.
+Standard voice calls may fail over to a different region.
-| Public cloud| Sign-in logs | Multifactor authentication activity report | Multifactor authentication service logs |
-|-|--|-||
-| United States | United States | United States | United States |
-| Europe | Europe | United States | Europe <sup>2</sup> |
-| Australia | Australia | United States<sup>1</sup> | Australia <sup>2</sup> |
+>[!NOTE]
+>The multifactor authentication activity reports contain personal data such as User Principal Name (UPN) and complete phone number.
-<sup>1</sup>OATH Code logs are stored in Australia.
+### NPS extension and AD FS adapter
-<sup>2</sup>Voice calls multifactor authentication service logs are stored in the United States.
+| Authentication method | Customer region | Activity report location | Service log location |
+|--|--|--|-|
+| OATH software and hardware tokens | Australia and New Zealand | Australia/New Zealand | Cloud in-region |
+| OATH software and hardware tokens | Outside of Australia and New Zealand | United States | Cloud in-region |
+| Voice calls without custom greetings and all other authentication methods except OATH software and hardware tokens | Any | United States | Cloud in-region |
+| Voice calls with custom greetings | Any | United States | MFA backend in United States |
-The following table shows the location for service logs for sovereign clouds.
+### MFA server and cloud-based MFA
-| Sovereign cloud | Sign-in logs | Multifactor authentication activity report (includes personal data)| Multifactor authentication service logs |
-|--|--|-||
-| Microsoft Azure Germany | Germany | United States | United States |
-| Azure China 21Vianet | China | United States | United States |
-| Microsoft Government Cloud | United States | United States | United States |
+| Component | Authentication method | Customer region | Activity report location | Service log location |
+|--|--|--|--|--|
+| MFA server | All methods | Any | United States | MFA backend in United States |
+| Cloud MFA | Standard voice calls and all other methods | Any | Azure AD Sign-in logs in region | Cloud in-region |
+| Cloud MFA | Voice calls with custom greetings | Any | Azure AD Sign-in logs in region | MFA backend in United States |
-The multifactor authentication activity reports contain personal data such as User Principal Name (UPN) and complete phone number.
+## Multifactor authentication activity reports for sovereign clouds
-The multifactor authentication service logs are used to operate the service.
+The following table shows the location for service logs for sovereign clouds.
+
+| Sovereign cloud | Sign-in logs | Multifactor authentication activity report | Multifactor authentication service logs |
+|--|--|--|--|
+| Microsoft Azure Germany | Germany | United States | United States |
+| Azure China 21Vianet | China | United States | United States |
+| Microsoft Government Cloud | United States | United States | United States |
## Next steps
-For more information about what user information is collected by cloud-based Azure AD multifactor authentication and Azure Multifactor Authentication Server, see [Azure AD multifactor authentication user data collection](howto-mfa-reporting-datacollection.md).
+For more information about what user information is collected by cloud-based Azure AD multifactor authentication and MFA Server, see [Azure AD multifactor authentication user data collection](howto-mfa-reporting-datacollection.md).
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCre
This command outputs the properties of the Azure AD Kerberos Server. You can review the properties to verify that everything is in good order.
+> [!NOTE]
+> Running the command against another domain by supplying `-DomainCredential` connects over NTLM, and it then fails if the supplied users are part of the "Protected Users" security group in AD.
+> As a workaround, sign in to the Azure AD Connect server with a user from the other domain and don't supply `-DomainCredential`; the command then consumes the Kerberos ticket of the currently signed-in user. You can confirm that the user has the required privileges in AD to execute the command by running `whoami /groups`.
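+
+A minimal sketch of the workaround, assuming you are signed in to the Azure AD Connect server as a user from the target domain (variable names follow the earlier examples in this article; the domain name is hypothetical):
+
+```powershell
+# Omitting -DomainCredential makes the cmdlet use the Kerberos ticket of the
+# currently signed-in user instead of opening an NTLM connection.
+$domain = "contoso.corp.com"     # hypothetical target domain
+$cloudCred = Get-Credential      # Azure AD global administrator credential
+Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred
+
+# Confirm the signed-in user's group memberships and AD privileges:
+whoami /groups
+```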
+
| Property | Description | | | | | ID | The unique ID of the AD DS DC object. This ID is sometimes referred to as it's "slot" or it's "branch ID". |
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Administrator provisioning and de-provisioning of security keys is not available
### UPN changes
-We are working on supporting a feature that allows UPN change on hybrid Azure AD joined and Azure AD joined devices. If a user's UPN changes, you can no longer modify FIDO2 security keys to account for the change. The resolution is to reset the device and the user has to re-register.
+If a user's UPN changes, you can no longer modify FIDO2 security keys to account for the change. The solution for a user with a FIDO2 security key is to sign in to MySecurityInfo, delete the old key, and add a new one.
## Next steps
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
Previously updated : 3/30/2021 Last updated : 6/4/2021
The authentication system alters and adds features on an ongoing basis to improv
## Upcoming changes
-### Bug fix: Azure AD will no longer URL encode the state parameter twice.
-
-**Effective date**: May 2021
-
-**Endpoints impacted**: v1.0 and v2.0
-
-**Protocol impacted**: All flows that visit the `/authorize` endpoint (implicit flow and authorization code flow)
-
-A bug was found and fixed in the Azure AD authorization response. During the `/authorize` leg of authentication, the `state` parameter from the request is included in the response, in order to preserve app state and help prevent CSRF attacks. Azure AD incorrectly URL encoded the `state` parameter before inserting it into the response, where it was encoded once more. This would result in applications incorrectly rejecting the response from Azure AD.
-
-Azure AD will no longer double-encode this parameter, allowing apps to correctly parse the result. This change will be made for all applications.
- ### Conditional Access will only trigger for explicitly requested scopes **Effective date**: May 2021, with gradual rollout starting in April.
If the app then requests `scope=files.readwrite`, the Conditional Access require
If the app then makes one last request for any of the three scopes (say, `scope=tasks.read`), Azure AD will see that the user has already completed the Conditional access policies needed for `files.readwrite`, and again issue a token with all three permissions in it.
+### The device code flow UX will now include an app confirmation prompt
+
+**Effective date**: June 2021.
+
+**Endpoints impacted**: v2.0 and v1.0
+
+**Protocol impacted**: The [device code flow](v2-oauth2-device-code.md)
+
+As a security improvement, the device code flow has been updated to add an additional prompt, which validates that the user is signing into the app they expect. This is added to help prevent phishing attacks.
+
+The prompt that appears looks like this:
+ ## May 2020
+### Bug fix: Azure AD will no longer URL encode the state parameter twice
+
+**Effective date**: May 2021
+
+**Endpoints impacted**: v1.0 and v2.0
+
+**Protocol impacted**: All flows that visit the `/authorize` endpoint (implicit flow and authorization code flow)
+
+A bug was found and fixed in the Azure AD authorization response. During the `/authorize` leg of authentication, the `state` parameter from the request is included in the response, in order to preserve app state and help prevent CSRF attacks. Azure AD incorrectly URL encoded the `state` parameter before inserting it into the response, where it was encoded once more. This would result in applications incorrectly rejecting the response from Azure AD.
+
+Azure AD will no longer double-encode this parameter, allowing apps to correctly parse the result. This change will be made for all applications.
+ ### Azure Government endpoints are changing **Effective date**: May 5th (Finishing June 2020)
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 05/10/2021 Last updated : 06/04/2021
There are many security benefits of using Azure AD based authentication to login
- Login to Windows VMs with Azure Active Directory also works for customers that use Federation Services. - Automate and scale Azure AD join with MDM auto enrollment with Intune of Azure Windows VMs that are part of your VDI deployments. Auto MDM enrollment requires an Azure AD P1 license. Windows Server 2019 VMs do not support MDM enrollment. - > [!NOTE] > Once you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join them to another domain, like on-premises AD or Azure AD DS. If you need to do so, you will need to disconnect the VM from your Azure AD tenant by uninstalling the extension.
This feature is now available in the following Azure clouds:
- Azure Government - Azure China -- ### Network requirements To enable Azure AD authentication for your Windows VMs in Azure, you need to ensure your VMs network configuration permits outbound access to the following endpoints over TCP port 443:
For Azure Global
- `https://login.microsoftonline.com` - For authentication flows. - `https://pas.windows.net` - For Azure RBAC flows. - For Azure Government - `https://enterpriseregistration.microsoftonline.us` - For device registration. - `http://169.254.169.254` - Azure Instance Metadata Service. - `https://login.microsoftonline.us` - For authentication flows. - `https://pasff.usgovcloudapi.net` - For Azure RBAC flows. - For Azure China - `https://enterpriseregistration.partner.microsoftonline.cn` - For device registration. - `http://169.254.169.254` - Azure Instance Metadata Service endpoint. - `https://login.chinacloudapi.cn` - For authentication flows. - `https://pas.chinacloudapi.cn` - For Azure RBAC flows. - ## Enabling Azure AD login in for Windows VM in Azure To use Azure AD login in for Windows VM in Azure, you need to first enable Azure AD login option for your Windows VM and then you need to configure Azure role assignments for users who are authorized to login in to the VM.
The AADLoginForWindows extension must install successfully in order for the VM t
> [!NOTE] > If the extension restarts after the initial failure, the log with the deployment error will be saved as `CommandExecution_YYYYMMDDHHMMSSSSS.log`. "
-1. Open a PowerShell command prompt on the VM and verify these queries against the Instance Metadata Service (IMDS) Endpoint running on the Azure host returns:
+1. Open a PowerShell window on the VM and verify that these queries against the Instance Metadata Service (IMDS) endpoint running on the Azure host return the expected output:
| Command to run | Expected output | | | |
The AADLoginForWindows extension must install successfully in order for the VM t
> [!NOTE] > The access token can be decoded using a tool like [calebb.net](http://calebb.net/). Verify the `appid` in the access token matches the managed identity assigned to the VM.
-1. Ensure the required endpoints are accessible from the VM using the command line:
+1. Ensure the required endpoints are accessible from the VM using PowerShell:
- `curl https://login.microsoftonline.com/ -D -` - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
This exit code translates to `DSREG_E_MSI_TENANTID_UNAVAILABLE` because the exte
1. Verify the Azure VM can retrieve the TenantID from the Instance Metadata Service.
- - RDP to the VM as a local administrator and verify the endpoint returns valid Tenant ID by running this command from an elevated command line on the VM:
+ - RDP to the VM as a local administrator and verify that the endpoint returns a valid Tenant ID by running this command from an elevated PowerShell window on the VM:
- `curl -H Metadata:true http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
This exit code translates to `DSREG_E_MSI_TENANTID_UNAVAILABLE` because the exte
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension is not able to reach the `https://enterpriseregistration.windows.net` endpoint.
-1. Verify the required endpoints are accessible from the VM using the command line:
+1. Verify the required endpoints are accessible from the VM using PowerShell:
- `curl https://login.microsoftonline.com/ -D -` - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
active-directory Delegate Invitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/delegate-invitations.md
Previously updated : 05/19/2021 Last updated : 06/04/2021
By default, all users, including guests, can invite guest users.
- **Anyone in the organization can invite guest users including guests and non-admins (most inclusive)**: To allow guests in the organization to invite other guests including those who are not members of an organization, select this radio button. - **Member users and users assigned to specific admin roles can invite guest users including guests with member permissions**: To allow member users and users who have specific administrator roles to invite guests, select this radio button.
- - **Only users assigned to specific admin roles can invite guest users**: To allow only those users with certain administrator roles to invite guests, select this radio button.
+ - **Only users assigned to specific admin roles can invite guest users**: To allow only those users with administrator roles to invite guests, select this radio button. The administrator roles include [Global Administrator](../roles/permissions-reference.md#global-administrator), [User Administrator](../roles/permissions-reference.md#user-administrator), and [Guest Inviter](../roles/permissions-reference.md#guest-inviter).
- **No one in the organization can invite guest users including admins (most restrictive)**: To deny everyone in the organization from inviting guests, select this radio button. > [!NOTE] > If **Members can invite** is set to **No** and **Admins and users in the guest inviter role can invite** is set to **Yes**, users in the **Guest Inviter** role will still be able to invite guests.
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
There are some cases where the invitation email is recommended over a direct lin
- The user may sign in with an alias of the email address that was invited. (An alias is an additional email address associated with an email account.) In this case, the user must click the redemption URL in the invitation email. ### Just-in-time redemption limitation with conflicting Contact object
-Sometimes the invited external guest user's email may conflict with an existing [Contact object](https://docs.microsoft.com/en-us/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from signing in or redeeming an invitation through a direct link using [SAML/WS-Fed IdP](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/direct-federation), [Microsoft Accounts](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/microsoft-account), [Google Federation](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/google-federation), or [Email One-Time Passcode](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/one-time-passcode) accounts.
+Sometimes the invited external guest user's email may conflict with an existing [Contact object](/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from signing in or redeeming an invitation through a direct link using [SAML/WS-Fed IdP](/azure/active-directory/external-identities/direct-federation), [Microsoft Accounts](/azure/active-directory/external-identities/microsoft-account), [Google Federation](/azure/active-directory/external-identities/google-federation), or [Email One-Time Passcode](/azure/active-directory/external-identities/one-time-passcode) accounts.
-To unblock users who can't redeem an invitation due to a conflicting [Contact object](https://docs.microsoft.com/en-us/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), follow these steps:
+To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), follow these steps:
1. Delete the conflicting Contact object. 2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state). 3. Re-invite the guest user.
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Most IT administrators are familiar with Active Directory Domain Services concep
- [What is Azure Active Directory?](./active-directory-whatis.md) - [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../../active-directory-domain-services/compare-identity-solutions.md)-- [Frequently asked questions about Azure Active Directory](./active-directory-faq.md)
+- [Frequently asked questions about Azure Active Directory](./active-directory-faq.yml)
- [What's new in Azure Active Directory?](./whats-new.md)
active-directory Active Directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-faq.md
- Title: Frequently asked questions (FAQ) - Azure Active Directory | Microsoft Docs
-description: Common questions and answers about Azure and Azure Active Directory, password management, and application access.
-Previously updated : 11/12/2018
-# Frequently asked questions about Azure Active Directory
-Azure Active Directory (Azure AD) is a comprehensive identity as a service (IDaaS) solution that spans all aspects of identity, access management, and security.
-
-For more information, see [What is Azure Active Directory?](active-directory-whatis.md).
--
-## Access Azure and Azure Active Directory
-**Q: Why do I get "No subscriptions found" when I try to access Azure AD in the Azure portal?**
-
-**A:** To access the Azure portal, each user needs permissions with an Azure subscription. If you don't have a paid Microsoft 365 or Azure AD subscription, you will need to activate a free [Azure account](https://azure.microsoft.com/free/) or a paid subscription.
-
-For more information, see:
-
-* [How Azure subscriptions are associated with Azure Active Directory](active-directory-how-subscriptions-associated-directory.md)
--
-**Q: What's the relationship between Azure AD, Microsoft 365, and Azure?**
-
-**A:** Azure AD provides you with common identity and access capabilities to all web services. Whether you are using Microsoft 365, Microsoft Azure, Intune, or others, you're already using Azure AD to help turn on sign-on and access management for all these services.
-
-All users who are set up to use web services are defined as user accounts in one or more Azure AD instances. You can set up these accounts for free Azure AD capabilities like cloud application access.
-
-Azure AD paid services like Enterprise Mobility + Security complement other web services like Microsoft 365 and Microsoft Azure with comprehensive enterprise-scale management and security solutions.
---
-**Q: What are the differences between Owner and Global Administrator?**
-
-**A:** By default, the person who signs up for an Azure subscription is assigned the Owner role for Azure resources. An Owner can use either a Microsoft account or a work or school account from the directory that the Azure subscription is associated with. This role is authorized to manage services in the Azure portal.
-
-If others need to sign in and access services by using the same subscription, you can assign them the appropriate [built-in role](../../role-based-access-control/built-in-roles.md). For additional information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-By default, the person who signs up for an Azure subscription is assigned the Global Administrator role for the directory. The Global Administrator has access to all Azure AD directory features. Azure AD has a different set of administrator roles to manage the directory and identity-related features. These administrators will have access to various features in the Azure portal. The administrator's role determines what they can do, like create or edit users, assign administrative roles to others, reset user passwords, manage user licenses, or manage domains. For additional information on Azure AD directory admins and their roles, see [Assign a user to administrator roles in Azure Active Directory](active-directory-users-assign-role-azure-portal.md) and [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md).
-
-Additionally, Azure AD paid services like Enterprise Mobility + Security complement other web services, such as Microsoft 365 and Microsoft Azure, with comprehensive enterprise-scale management and security solutions.
--
-**Q: Is there a report that shows when my Azure AD user licenses will expire?**
-
-**A:** No. This is not currently available.
---
-## Get started with Hybrid Azure AD
--
-**Q: How do I leave a tenant when I am added as a collaborator?**
-
-**A:** When you are added to another organization's tenant as a collaborator, you can use the "tenant switcher" in the upper right to switch between tenants. Currently, there is no way to leave the inviting organization, and Microsoft is working on providing this functionality. Until this feature is available, you can ask the inviting organization to remove you from their tenant.
--
-**Q: How can I connect my on-premises directory to Azure AD?**
-
-**A:** You can connect your on-premises directory to Azure AD by using Azure AD Connect.
-
-For more information, see [Integrating your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
--
-**Q: How do I set up SSO between my on-premises directory and my cloud applications?**
-
-**A:** You only need to set up single sign-on (SSO) between your on-premises directory and Azure AD. As long as you access your cloud applications through Azure AD, the service automatically drives your users to correctly authenticate with their on-premises credentials.
-
-Implementing SSO from on-premises can be easily achieved with federation solutions such as Active Directory Federation Services (AD FS), or by configuring password hash sync. You can easily deploy both options by using the Azure AD Connect configuration wizard.
-
-For more information, see [Integrating your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
--
-**Q: Does Azure AD provide a self-service portal for users in my organization?**
-
-**A:** Yes, Azure AD provides you with the [Azure AD Access Panel](https://myapps.microsoft.com) for user self-service and application access. If you are a Microsoft 365 customer, you can find many of the same capabilities in the [Office 365 portal](https://portal.office.com).
-
-For more information, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
--
-**Q: Does Azure AD help me manage my on-premises infrastructure?**
-
-**A:** Yes. The Azure AD Premium edition provides you with Azure AD Connect Health. Azure AD Connect Health helps you monitor and gain insight into your on-premises identity infrastructure and the synchronization services.
-
-For more information, see [Monitor your on-premises identity infrastructure and synchronization services in the cloud](../hybrid/whatis-azure-ad-connect.md).
--
-## Password management
-**Q: Can I use Azure AD password write-back without password sync? (In this scenario, is it possible to use Azure AD self-service password reset (SSPR) with password write-back and not store passwords in the cloud?)**
-
-**A:** You do not need to synchronize your Active Directory passwords to Azure AD to enable write-back. In a federated environment, Azure AD single sign-on (SSO) relies on the on-premises directory to authenticate the user. This scenario does not require the on-premises password to be tracked in Azure AD.
--
-**Q: How long does it take for a password to be written back to Active Directory on-premises?**
-
-**A:** Password write-back operates in real time.
-
-For more information, see [Getting started with password management](../authentication/tutorial-enable-sspr.md).
--
-**Q: Can I use password write-back with passwords that are managed by an admin?**
-
-**A:** Yes, if you have password write-back enabled, the password operations performed by an admin are written back to your on-premises environment.
-
-For more answers to password-related questions, see [Password management frequently asked questions](../authentication/active-directory-passwords-faq.yml).
--
-**Q: What can I do if I can't remember my existing Microsoft 365/Azure AD password while trying to change my password?**
-
-**A:** For this type of situation, there are a couple of options. Use self-service password reset (SSPR) if it's available. Whether SSPR works depends on how it's configured. For more information, see [How does the password reset portal work](../authentication/howto-sspr-deployment.md).
-
-For Microsoft 365 users, your admin can reset the password by using the steps outlined in [Reset user passwords](https://support.office.com/article/Admins-Reset-user-passwords-7A5D073B-7FAE-4AA5-8F96-9ECD041ABA9C?ui=en-US&rs=en-US&ad=US).
-
-For Azure AD accounts, admins can reset passwords by using one of the following:
--- [Reset accounts in the Azure portal](active-directory-users-reset-password-azure-portal.md)-- [Using PowerShell](/powershell/module/msonline/set-msoluserpassword)---
-## Security
-**Q: Are accounts locked after a specific number of failed attempts or is there a more sophisticated strategy used?**
-
-We use a more sophisticated strategy to lock accounts. This is based on the IP of the request and the passwords entered. The duration of the lockout also increases based on the likelihood that it is an attack.
-
-**Q: Certain (common) passwords get rejected with the messages 'this password has been used to many times', does this refer to passwords used in the current active directory?**
-
-This refers to passwords that are globally common, such as any variants of "Password" and "123456".
-
-**Q: Will a sign-in request from dubious sources (botnets, tor endpoint) be blocked in a B2C tenant or does this require a Basic or Premium edition tenant?**
-
-We do have a gateway that filters requests and provides some protection from botnets, and is applied for all B2C tenants.
-
-## Application access
-
-**Q: Where can I find a list of applications that are pre-integrated with Azure AD and their capabilities?**
-
-**A:** Azure AD has more than 2,600 pre-integrated applications from Microsoft, application service providers, and partners. All pre-integrated applications support single sign-on (SSO). SSO lets you use your organizational credentials to access your apps. Some of the applications also support automated provisioning and de-provisioning.
-
-For a complete list of the pre-integrated applications, see the [Active Directory Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.AzureActiveDirectory).
--
-**Q: What if the application I need is not in the Azure AD marketplace?**
-
-**A:** With Azure AD Premium, you can add and configure any application that you want. Depending on your application's capabilities and your preferences, you can configure SSO and automated provisioning.
-
-For more information, see:
-
-* [Configuring single sign-on to applications that are not in the Azure Active Directory application gallery](../manage-apps/configure-saml-single-sign-on.md)
-* [Using SCIM to enable automatic provisioning of users and groups from Azure Active Directory to applications](../app-provisioning/use-scim-to-provision-users-and-groups.md)
--
-**Q: How do users sign in to applications by using Azure AD?**
-
-**A:** Azure AD provides several ways for users to view and access their applications, such as:
-
-* The Azure AD access panel
-* The Microsoft 365 application launcher
-* Direct sign-in to federated apps
-* Deep links to federated, password-based, or existing apps
-
-For more information, see [End user experiences for applications](../manage-apps/end-user-experiences.md).
--
-**Q: What are the different ways Azure AD enables authentication and single sign-on to applications?**
-
-**A:** Azure AD supports many standardized protocols for authentication and authorization, such as SAML 2.0, OpenID Connect, OAuth 2.0, and WS-Federation. Azure AD also supports password vaulting and automated sign-in capabilities for apps that only support forms-based authentication.
-
-For more information, see:
-
-* [Authentication Scenarios for Azure AD](../develop/authentication-vs-authorization.md)
-* [Active Directory authentication protocols](/previous-versions/azure/dn151124(v=azure.100))
-* [Single sign-on for applications in Azure AD](../manage-apps/what-is-single-sign-on.md)
--
-**Q: Can I add applications I'm running on-premises?**
-
-**A:** Azure AD Application Proxy provides you with easy and secure access to on-premises web applications that you choose. You can access these applications in the same way that you access your software as a service (SaaS) apps in Azure AD. There is no need for a VPN or to change your network infrastructure.
-
-For more information, see [How to provide secure remote access to on-premises applications](../manage-apps/application-proxy.md).
--
-**Q: How do I require multi-factor authentication for users who access a particular application?**
-
-**A:** With Azure AD Conditional Access, you can assign a unique access policy for each application. In your policy, you can require multi-factor authentication always, or when users are not connected to the local network.
-
-For more information, see [Securing access to Microsoft 365 and other apps connected to Azure Active Directory](../conditional-access/overview.md).
--
-**Q: What is automated user provisioning for SaaS apps?**
-
-**A:** Use Azure AD to automate the creation, maintenance, and removal of user identities in many popular cloud SaaS apps.
-
-For more information, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
--
-**Q: Can I set up a secure LDAP connection with Azure AD?**
-
-**A:** No. Azure AD does not support the Lightweight Directory Access Protocol (LDAP) protocol or Secure LDAP directly. However, it's possible to enable Azure AD Domain Services (Azure AD DS) instance on your Azure AD tenant with properly configured network security groups through Azure Networking to achieve LDAP connectivity. For more information, see [Configure secure LDAP for an Azure Active Directory Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md)
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/license-users-groups.md
You can view your available service plans, including the individual licenses, ch
1. Select **Azure Active Directory**, and then select **Licenses**.
- :::image type="content" source="media/license-users-groups/license-details-blade.png" alt-text="Licenses page, with number of purchased services and assigned licenses":::
- 1. Select **All products** to view the All Products page and to see the **Total**, **Assigned**, **Available**, and **Expiring soon** numbers for your license plans. :::image type="content" source="media/license-users-groups/license-products-blade-with-products.png" alt-text="services page - with service license plans - associated license info":::
Make sure that anyone needing to use a licensed Azure AD service has the appropr
1. On the **Products** page, select the name of the license plan you want to assign to the user.
- ![services page, with highlighted service license plan](media/license-users-groups/license-products-blade-with-product-highlight.png)
-
-1. On the license plan overview page, select **Assign**.
+1. After you select the license plan, select **Assign**.
- ![services page, with highlighted Assign option](media/license-users-groups/license-products-blade-with-assign-option-highlight.png)
+ ![services page, with highlighted license plan selection and Assign options](media/license-users-groups/license-products-blade-with-assign-option-highlight.png)
1. On the **Assign** page, select **Users and groups**, and then search for and select the user you're assigning the license.
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md
The following procedure walks you through converting an existing standard domain
```powershell $dom = "contoso.com"
- $BrandName - "Sample SAML 2.0 IDP"
+ $BrandName = "Sample SAML 2.0 IDP"
$LogOnUrl = "https://WS2012R2-0.contoso.com/passiveLogon" $LogOffUrl = "https://WS2012R2-0.contoso.com/passiveLogOff" $ecpUrl = "https://WS2012R2-0.contoso.com/PAOS"
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
+
+ Title: Access Azure AD logs with the Microsoft Graph API | Microsoft Docs
+description: In this quickstart, you learn how you can access the sign-ins log using the Graph API.
+++++ Last updated : 06/03/2021++++++
+# Customer intent: As an IT admin, you need to how to use the Graph API to access the log files so that you can fix issues.
+++
+# Quickstart: Access Azure AD logs with the Microsoft Graph API
+
+With the information in the Azure AD sign-ins log, you can figure out what happened if a sign-in of a user failed. This quickstart shows how to you can access the sign-ins log using the Graph API.
++
+## Prerequisites
+
+To complete the scenario in this quickstart, you need:
+
+- **Access to an Azure AD tenant** - If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
++
+## Perform a failed sign-in
+
+The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log.
+
+**To complete this step:**
+
+1. Sign in to your [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
+
+2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports).
+++
+## Find the failed sign-in
+
+This section provides you with the steps to get information about your sign-in using the Graph API.
+
+ ![Graph explorer query](./media/quickstart-access-log-with-graph-api/graph-explorer-query.png)
+
+**To review the failed sign-in:**
+
+1. Navigate to the [Microsoft Graph explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
+
+2. Sign-in to your tenant as global administrator.
+
+ ![Microsoft Graph explorer authentication](./media/quickstart-access-log-with-graph-api/graph-explorer-authentication.png)
+
+3. In the **HTTP verb drop-down list**, select **GET**.
+
+4. In the **API version drop-down list**, select **beta**.
+
+5. In the **Request query address bar**, type `https://graph.microsoft.com/beta/auditLogs/signIns?$top=100&$filter=userDisplayName eq 'Isabella Simonsen'`
+
+6. Click **Run query**.
+
+Review the outcome of your query.
+
+ ![Microsoft Graph explorer response preview](./media/quickstart-access-log-with-graph-api/response-preview.png)
++
+## Clean up resources
+
+When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users-azure-active-directory.md#delete-a-user).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What are Azure Active Directory reports?](overview-reports.md)
active-directory Quickstart Download Audit Report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-download-audit-report.md
- Title: Quickstart Download an audit report using the Azure portal | Microsoft Docs
-description: Learn how to download an audit report using the Azure portal
-------- Previously updated : 11/13/2018---
-# Customer intent: As an IT administrator, I want to learn how to download an audit report from the Azure portal so that I can understand what actions are being performed by users in my environment.
---
-# Quickstart: Download an audit report using the Azure portal
-
-In this quickstart, you learn how to download a CSV file of the audit logs for your tenant for the past 24 hours. You can download up to 250,000 records from the Azure portal. The records are sorted by most recent so by default, you get the most recent 250,000 records.
-
-## Prerequisites
-
-You need:
-
-* An Azure Active Directory tenant.
-* A user, who is in the **Security Administrator**, **Security Reader**, or **Global Administrator** role for the tenant. In addition, any user in the tenant can access their own audit logs.
-
-## Quickstart: Download an audit report
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select **Azure Active Directory** from the left navigation pane and use the **Switch directory** button to select your active directory.
-3. From the dashboard, select **Azure Active Directory** and then select **Audit logs**.
-4. Choose **last 24 hours** in the **Date range** filter drop-down and select **Apply** to view the audit logs for the past 24 hours.
-5. Select the **Download** button, select **CSV** as the file format and specify a file name to download a CSV file containing the filtered records.
-
-![Reporting](./media/quickstart-download-audit-report/download-audit-logs.png)
-
-## Next steps
-
-* [Sign-in activity reports in the Azure Active Directory portal](concept-sign-ins.md)
-* [Azure Active Directory reporting retention](reference-reports-data-retention.md)
-* [Azure Active Directory reporting latencies](reference-reports-latencies.md)
active-directory Quickstart Download Sign In Report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-download-sign-in-report.md
- Title: Quickstart Download a sign-in report using the Azure portal | Microsoft Docs
-description: Learn how to download a sign-in report using the Azure portal
-------- Previously updated : 11/13/2018---
-# Customer intent: As an IT administrator, I want to learn how to download a sign report from the Azure portal so that I can understand who is using my environment.
--
-# Quickstart: Download a sign-in report using the Azure portal
-
-In this quickstart, you learn how to download the sign-in data for your tenant for the past 24 hours. You can download up to 250,000 records from the Azure portal. The records are sorted by most recent so by default, you get the most recent 250,000 records.
-
-## Prerequisites
-
-You need:
-
-* An Azure Active Directory tenant, with a Premium license to view the sign-in activity report. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. Note that if you did not have any activities data prior to the upgrade, it will take a couple of days for the data to show up in the reports after you upgrade to a premium license.
-* A user, who is in the **Security Administrator**, **Security Reader**, **Report Reader** or **Global Administrator** role for the tenant. In addition, any user in the tenant can access their own sign-ins.
-
-## Quickstart: Download a sign-in report
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select **Azure Active Directory** from the left navigation pane and use the **Switch directory** button to select your active directory.
-3. From the dashboard, select **Azure Active Directory** and then select **Sign-ins**.
-4. Choose **last 24 hours** in the **Date** filter drop-down and select **Apply** to view the sign-ins for the past 24 hours.
-5. Select the **Download** button, select **CSV** as the file format and specify a file name to download a CSV file containing the filtered records.
-
-![Reporting](./media/quickstart-download-sign-in-report/download-sign-ins.png)
-
-## Next steps
-
-* [Sign-in activity reports in the Azure Active Directory portal](concept-sign-ins.md)
-* [Azure Active Directory reporting retention](reference-reports-data-retention.md)
-* [Azure Active Directory reporting latencies](reference-reports-latencies.md)
active-directory Agiloft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/agiloft-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Agiloft | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Agiloft.
+ Title: 'Tutorial: Azure Active Directory integration with Agiloft Contract Management Suite | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Agiloft Contract Management Suite.
Previously updated : 05/25/2021 Last updated : 06/03/2021
-# Tutorial: Azure Active Directory integration with Agiloft
+# Tutorial: Azure Active Directory integration with Agiloft Contract Management Suite
-In this tutorial, you'll learn how to integrate Agiloft with Azure Active Directory (Azure AD). When you integrate Agiloft with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Agiloft Contract Management Suite with Azure Active Directory (Azure AD). When you integrate Agiloft Contract Management Suite with Azure AD, you can:
-* Control in Azure AD who has access to Agiloft.
-* Enable your users to be automatically signed-in to Agiloft with their Azure AD accounts.
+* Control in Azure AD who has access to Agiloft Contract Management Suite.
+* Enable your users to be automatically signed-in to Agiloft Contract Management Suite with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Agiloft with Azure Active Direct
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Agiloft single sign-on (SSO) enabled subscription.
+* Agiloft Contract Management Suite single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Agiloft supports **SP and IDP** initiated SSO.
-* Agiloft supports **Just In Time** user provisioning.
+* Agiloft Contract Management Suite supports **SP and IDP** initiated SSO.
+* Agiloft Contract Management Suite supports **Just In Time** user provisioning.
-## Add Agiloft from the gallery
+## Add Agiloft Contract Management Suite from the gallery
-To configure the integration of Agiloft into Azure AD, you need to add Agiloft from the gallery to your list of managed SaaS apps.
+To configure the integration of Agiloft Contract Management Suite into Azure AD, you need to add Agiloft Contract Management Suite from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Agiloft** in the search box.
-1. Select **Agiloft** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Agiloft Contract Management Suite** in the search box.
+1. Select **Agiloft Contract Management Suite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Agiloft
+## Configure and test Azure AD SSO for Agiloft Contract Management Suite
-Configure and test Azure AD SSO with Agiloft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Agiloft.
+Configure and test Azure AD SSO with Agiloft Contract Management Suite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Agiloft Contract Management Suite.
-To configure and test Azure AD SSO with Agiloft, perform the following steps:
+To configure and test Azure AD SSO with Agiloft Contract Management Suite, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Agiloft SSO](#configure-agiloft-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Agiloft test user](#create-agiloft-test-user)** - to have a counterpart of B.Simon in Agiloft that is linked to the Azure AD representation of user.
+1. **[Configure Agiloft Contract Management Suite SSO](#configure-agiloft-contract-management-suite-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Agiloft Contract Management Suite test user](#create-agiloft-contract-management-suite-test-user)** - to have a counterpart of B.Simon in Agiloft Contract Management Suite that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Agiloft** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Agiloft Contract Management Suite** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<SUBDOMAIN>.agiloft.com/gui2/samlssologin.jsp?project=<KB_NAME>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Agiloft Client support team](https://www.agiloft.com/support-login.htm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Agiloft Contract Management Suite Client support team](https://www.agiloft.com/support-login.htm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-7. On the **Set up Agiloft** section, copy the appropriate URL(s) as per your requirement.
+7. On the **Set up Agiloft Contract Management Suite** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Agiloft.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Agiloft Contract Management Suite.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Agiloft**.
+1. In the applications list, select **Agiloft Contract Management Suite**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Agiloft SSO
+## Configure Agiloft Contract Management Suite SSO
-1. In a different web browser window, log in to your Agiloft company site as an administrator.
+1. In a different web browser window, log in to your Agiloft Contract Management Suite company site as an administrator.
2. Click on **Setup** (on the Left Pane) and then select **Access**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. A wizard dialog appears. On the dialog, click on the **Identity Provider Details** and fill in the following fields:
- ![Agiloft Configuration](./media/agiloft-tutorial/details.png)
+ ![Agiloft Contract Management Suite Configuration](./media/agiloft-tutorial/details.png)
a. In **IdP Entity Id / Issuer** textbox, paste the value of **Azure Ad Identifier**, which you have copied from Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
e. Click **Finish**.
-### Create Agiloft test user
+### Create Agiloft Contract Management Suite test user
-In this section, a user called Britta Simon is created in Agiloft. Agiloft supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Agiloft, a new one is created after authentication.
+In this section, a user called Britta Simon is created in Agiloft Contract Management Suite. Agiloft Contract Management Suite supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Agiloft Contract Management Suite, a new one is created after authentication.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Agiloft Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Agiloft Contract Management Suite Sign on URL where you can initiate the login flow.
-* Go to Agiloft Sign-on URL directly and initiate the login flow from there.
+* Go to Agiloft Contract Management Suite Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Agiloft for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Agiloft Contract Management Suite for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Agiloft tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Agiloft for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Agiloft Contract Management Suite tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Agiloft Contract Management Suite for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Agiloft you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Agiloft Contract Management Suite you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Applied Mental Health Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/applied-mental-health-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Applied Mental Health | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Applied Mental Health.
++++++++ Last updated : 06/01/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Applied Mental Health
+
+In this tutorial, you'll learn how to integrate Applied Mental Health with Azure Active Directory (Azure AD). When you integrate Applied Mental Health with Azure AD, you can:
+
+* Control in Azure AD who has access to Applied Mental Health.
+* Enable your users to be automatically signed-in to Applied Mental Health with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Applied Mental Health single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Applied Mental Health supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Adding Applied Mental Health from the gallery
+
+To configure the integration of Applied Mental Health into Azure AD, you need to add Applied Mental Health from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Applied Mental Health** in the search box.
+1. Select **Applied Mental Health** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Applied Mental Health
+
+Configure and test Azure AD SSO with Applied Mental Health using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Applied Mental Health.
+
+To configure and test Azure AD SSO with Applied Mental Health, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Applied Mental Health SSO](#configure-applied-mental-health-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Applied Mental Health test user](#create-applied-mental-health-test-user)** - to have a counterpart of B.Simon in Applied Mental Health that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Applied Mental Health** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.appliedmentalhealth.com.au/saml2/aad/login`
+
+1. Click **Save**.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Applied Mental Health** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Applied Mental Health.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Applied Mental Health**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Applied Mental Health SSO
+
+To configure single sign-on on **Applied Mental Health** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Applied Mental Health support team](mailto:support@appliedmentalhealth.com.au). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Applied Mental Health test user
+
+In this section, you create a user called Britta Simon in Applied Mental Health. Work with [Applied Mental Health support team](mailto:support@appliedmentalhealth.com.au) to add the users in the Applied Mental Health platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Applied Mental Health Sign on URL where you can initiate the login flow.
+
+* Go to Applied Mental Health Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Applied Mental Health for which you set up the SSO
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Applied Mental Health tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Applied Mental Health for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure Applied Mental Health you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
++
active-directory Bonos Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bonos-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Bonos | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Bonos.
++++++++ Last updated : 06/01/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Bonos
+
+In this tutorial, you'll learn how to integrate Bonos with Azure Active Directory (Azure AD). When you integrate Bonos with Azure AD, you can:
+
+* Control in Azure AD who has access to Bonos.
+* Enable your users to be automatically signed-in to Bonos with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Bonos single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Bonos supports **SP and IDP** initiated SSO.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
++
+## Adding Bonos from the gallery
+
+To configure the integration of Bonos into Azure AD, you need to add Bonos from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Bonos** in the search box.
+1. Select **Bonos** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Bonos
+
+Configure and test Azure AD SSO with Bonos using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Bonos.
+
+To configure and test Azure AD SSO with Bonos, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Bonos SSO](#configure-bonos-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Bonos test user](#create-bonos-test-user)** - to have a counterpart of B.Simon in Bonos that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Bonos** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.bonos.io/login`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.bonos.io/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact [Bonos Client support team](mailto:support@bonos.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Bonos** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Bonos.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Bonos**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Bonos SSO
+
+To configure single sign-on on **Bonos** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Bonos support team](mailto:support@bonos.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Bonos test user
+
+In this section, you create a user called Britta Simon in Bonos. Work with [Bonos support team](mailto:support@bonos.io) to add the users in the Bonos platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Bonos Sign on URL where you can initiate the login flow.
+
+* Go to Bonos Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Bonos for which you set up the SSO
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Bonos tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Bonos for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure Bonos you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
++
active-directory Draup Inc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/draup-inc-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Draup, Inc | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Draup, Inc.
++++++++ Last updated : 05/28/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Draup, Inc
+
+In this tutorial, you'll learn how to integrate Draup, Inc with Azure Active Directory (Azure AD). When you integrate Draup, Inc with Azure AD, you can:
+
+* Control in Azure AD who has access to Draup, Inc.
+* Enable your users to be automatically signed-in to Draup, Inc with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Draup, Inc single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Draup, Inc supports **SP** initiated SSO.
+* Draup, Inc supports **Just In Time** user provisioning.
+
+## Add Draup, Inc from the gallery
+
+To configure the integration of Draup, Inc into Azure AD, you need to add Draup, Inc from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Draup, Inc** in the search box.
+1. Select **Draup, Inc** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Draup, Inc
+
+Configure and test Azure AD SSO with Draup, Inc using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Draup, Inc.
+
+To configure and test Azure AD SSO with Draup, Inc, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Draup, Inc SSO](#configure-draup-inc-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Draup, Inc test user](#create-draup-inc-test-user)** - to have a counterpart of B.Simon in Draup, Inc that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Draup, Inc** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** box, type a URL using one of the following patterns:
+
+ | Identifier URL |
+ ||
+ |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
+ |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
+ |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ ||
+ |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
+ |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
+ |
+
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ ||
+ |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
+ |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
+ |
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier,Reply URL and Sign-On URL. Contact [Draup, Inc Client support team](mailto:support@draup.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificateraw.png)
+
+1. On the **Set up Draup, Inc** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Draup, Inc.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Draup, Inc**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Draup, Inc SSO
+
+To configure single sign-on on **Draup, Inc** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Draup, Inc support team](mailto:support@draup.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Draup, Inc test user
+
+In this section, a user called B.Simon is created in Draup, Inc. Draup, Inc supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Draup, Inc, a new one is created when you attempt to access Draup, Inc.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Draup, Inc Sign-on URL where you can initiate the login flow.
+
+* Go to Draup, Inc Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Draup, Inc tile in the My Apps, this will redirect to Draup, Inc Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Draup, Inc you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Foundu Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/foundu-tutorial.md
Previously updated : 05/20/2021 Last updated : 05/27/2021
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Perform the following steps in the **Single Sign-on Settings** page.
- ![Screenshot for foundU sso configuration](./media/foundu-tutorial/configuration.png)
+ ![Screenshot for foundU sso configuration](./media/foundu-tutorial/configuration-1.png)
a. Copy **Identifier(Entity ID)** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration section** in the Azure portal.
active-directory My Ibisworld Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/my-ibisworld-tutorial.md
Previously updated : 08/31/2020 Last updated : 03/06/2021
In this tutorial, you'll learn how to integrate My IBISWorld with Azure Active D
* Enable your users to be automatically signed-in to My IBISWorld with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * My IBISWorld single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* My IBISWorld supports **SP and IDP** initiated SSO
-* My IBISWorld supports **Just In Time** user provisioning
-* Once you configure My IBISWorld you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
-
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+* My IBISWorld supports **SP and IDP** initiated SSO.
+* My IBISWorld supports **Just In Time** user provisioning.
## Adding My IBISWorld from the gallery To configure the integration of My IBISWorld into Azure AD, you need to add My IBISWorld from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **My IBISWorld** in the search box. 1. Select **My IBISWorld** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for My IBISWorld Configure and test Azure AD SSO with My IBISWorld using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in My IBISWorld.
-To configure and test Azure AD SSO with My IBISWorld, complete the following building blocks:
+To configure and test Azure AD SSO with My IBISWorld, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with My IBISWorld, complete the following bui
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **My IBISWorld** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **My IBISWorld** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. In the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode:
+
+   In the **Relay State** text box, type the value `RPID=http://fedlogin.ibisworld.com`, and leave the **Sign-on URL** text box empty.
- * To configure the application in **SP** initiated mode, request the URL from IBISWorld, and then enter the URL in the **Sign-on URL** text box.
-
- * To configure the application in **IdP** initiated mode, in the **Relay State** text box, enter the URL `RPID=http://fedlogin.ibisworld.com`. Leave the **Sign-on URL** text box empty.
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+   Contact the [My IBISWorld support team](mailto:support@ibisworld.freshdesk.com) to get the Sign-on URL, and enter it in the **Sign-on URL** text box.
1. Click **Save**. 1. My IBISWorld application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![image](common/default-attributes.png)
1. In addition to the above, the My IBISWorld application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are pre-populated, but you can review them per your requirements.
-
- | Name | Source Attribute|
- | | |
- | department | user.department |
- | language | user.preferredlanguage |
- | phone | user.telephonenumber |
- | title | user.jobtitle |
- | userid | user.employeeid |
- | country | user.country |
+
+ | Name | Source Attribute|
+ | - | |
+ | department | user.department |
+ | language | user.preferredlanguage |
+ | phone | user.telephonenumber |
+ | title | user.jobtitle |
+ | userid | user.employeeid |
+ | country | user.country |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **My IBISWorld**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the **Default Access** role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure My IBISWorld SSO
In this section, a user called Britta Simon is created in My IBISWorld. My IBISW
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
-When you click the My IBISWorld tile in the Access Panel, you should be automatically signed in to the My IBISWorld for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This redirects to the My IBISWorld Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the My IBISWorld Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to My IBISWorld, for which you set up SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the My IBISWorld tile in My Apps, if the application is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to My IBISWorld, for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try My IBISWorld with Azure AD](https://aad.portal.azure.com/) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect My IBISWorld with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure My IBISWorld, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/monitor-app-configuration-reference.md
App Configuration uses the [AACHttpRequest Table](/azure/azure-monitor/refere
## See Also * See [Monitoring Azure App Configuration](monitor-app-configuration.md) for a description of monitoring Azure App Configuration.
-* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/monitor-app-configuration.md
The **Overview** page in the Azure portal includes a brief view of the resource
> ![Monitoring on the Overview Page](./media/monitoring-overview-page.png) ## Monitoring data 
-App Configuration collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-Azure-resources). See [Monitoring App Configuration data reference](/monitor-app-configuration-reference.md) for detailed information on the metrics and logs metrics created by App Configuration.
+App Configuration collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources). See [Monitoring App Configuration data reference](/azure/azure-app-configuration/monitor-app-configuration-reference) for detailed information on the metrics and logs created by App Configuration.
## Collection and routing Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
Resource Logs are not collected and stored until you create a diagnostic setting
For more information on creating a diagnostic setting using the Azure portal, CLI, or PowerShell, see [create a diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings).
-When you create a diagnostic setting, you specify which categories of logs to collect. For further information on the categories of logs for App Configuration, reference [App Configuration monitoring data reference](/monitor-app-configuration-reference.md#resource-logs).
+When you create a diagnostic setting, you specify which categories of logs to collect. For more information on the categories of logs for App Configuration, see the [App Configuration monitoring data reference](/azure/azure-app-configuration/monitor-app-configuration-reference#resource-logs).
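If you prefer to script this step, the same setting can be created with the Az.Monitor PowerShell module. The sketch below is illustrative only: the resource ID, setting name, and the `HttpRequest` category name are assumptions that you should verify against the data reference article.

```azurepowershell
# Placeholder resource ID for an App Configuration store; replace with your own.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myGroup/providers/Microsoft.AppConfiguration/configurationStores/myStore"

# Send the store's request logs to a storage account (category name is an assumption).
Set-AzDiagnosticSetting -Name "myDiagnosticSetting" `
    -ResourceId $resourceId `
    -StorageAccountId "<storage-account-resource-id>" `
    -Enabled $true `
    -Category "HttpRequest"
```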
## Analyzing metrics
In the portal, navigate to the **Metrics** section and select the **Metric Names
> [!div class="mx-imgBorder"] > ![How to use App Config Metrics](./media/monitoring-analyze-metrics.png)
-For a list of the platform metrics collected for App Configuration, see [Monitoring App Configuration data reference metrics](/monitor-app-configuration-reference#metrics). For reference, you can also see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+For a list of the platform metrics collected for App Configuration, see [Monitoring App Configuration data reference metrics](/azure/azure-app-configuration/monitor-app-configuration-reference#metrics). For reference, you can also see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
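As an illustration, a platform metric can also be pulled from PowerShell with `Get-AzMetric`. The metric name `HttpIncomingRequestCount` below is an assumption to check against the data reference article:

```azurepowershell
# $resourceId is the App Configuration store's full resource ID (placeholder).
# Retrieve the incoming request count for the past hour at one-minute granularity.
Get-AzMetric -ResourceId $resourceId `
    -MetricName "HttpIncomingRequestCount" `
    -StartTime (Get-Date).AddHours(-1) `
    -EndTime (Get-Date) `
    -TimeGrain 00:01:00 `
    -AggregationType Total
```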
## Analyzing logs Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/platform/diagnostic-logs-schema#top-level-resource-logs-schema). The [Activity log](/azure/azure-monitor/platform/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-For a list of the types of resource logs collected for App Configuration, see [Monitoring App Configuration data reference](/monitor-app-configuration-reference#logs). For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring App Configuration data reference](/monitor-app-configuration-reference#azuremonitorlogstables)
+For a list of the types of resource logs collected for App Configuration, see [Monitoring App Configuration data reference](/azure/azure-app-configuration/monitor-app-configuration-reference#logs). For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring App Configuration data reference](/azure/azure-app-configuration/monitor-app-configuration-reference#azuremonitorlogstables).
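If the logs are routed to a Log Analytics workspace, they can also be queried from PowerShell. A minimal sketch, assuming the `AACHttpRequest` table and a `StatusCode` column as described in the data reference:

```azurepowershell
# Count throttled (HTTP 429) requests per hour from the AACHttpRequest table.
$query = "AACHttpRequest | where StatusCode == 429 | summarize count() by bin(TimeGenerated, 1h)"
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
$result.Results
```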
>[!IMPORTANT] > When you select **Logs** from the App Configuration menu, Log Analytics is opened with the query scope set to the current app configuration resource. This means that log queries will only include data from that resource.
The following table lists common and recommended alert rules for App C
| Alert type | Condition | Description  | |:|:|:|
-|Rate Limit on Http Requests | Status Code = 429  | The configuration store has exceeded the [hourly request quota](/faq#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). Upgrade to a standard store or follow the [best practices](/howto-best-practices#reduce-requests-made-to-app-configuration) to optimize your usage. |
+|Rate Limit on Http Requests | Status Code = 429  | The configuration store has exceeded the [hourly request quota](/azure/azure-app-configuration/faq#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). Upgrade to a standard store or follow the [best practices](/azure/azure-app-configuration/howto-best-practices#reduce-requests-made-to-app-configuration) to optimize your usage. |
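A rule along these lines can be scripted as well. A sketch with Az.Monitor, where the metric name `ThrottledHttpRequestCount` and the action group ID are assumptions to verify for your store:

```azurepowershell
# Fire when any throttled (HTTP 429) requests are observed in the evaluation window.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName "ThrottledHttpRequestCount" `
    -TimeAggregation Total -Operator GreaterThan -Threshold 0

Add-AzMetricAlertRuleV2 -Name "RateLimitOnHttpRequests" -ResourceGroupName "myGroup" `
    -TargetResourceId $resourceId `
    -Condition $condition `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2 `
    -ActionGroupId "<action-group-resource-id>"
```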
## Next steps
-* See [Monitoring App Configuration data reference](/monitor-app-configuration-reference.md) for a reference of the metrics, logs, and other important values created by App Configuration.
+* See [Monitoring App Configuration data reference](/azure/azure-app-configuration/monitor-app-configuration-reference) for a reference of the metrics, logs, and other important values created by App Configuration.
* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 05/26/2021 Last updated : 06/04/2021
Service Tags:
* AzureTrafficManager * AzureResourceManager * AzureArcInfrastructure
+* Storage
URLs:
URLs:
|`dc.services.visualstudio.com`|Application Insights| |`*.guestconfiguration.azure.com` |Guest Configuration| |`*.his.arc.azure.com`|Hybrid Identity Service|
-|`www.office.com`|Office 365|
+|`*.blob.core.windows.net`|Download source for Arc enabled servers extensions|
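To spot-check outbound connectivity from a server before onboarding, you can probe a few of these endpoints over TCP 443. The concrete hostnames below are illustrative stand-ins for the wildcard entries in the table:

```powershell
# Probe required Arc endpoints on TCP 443; hostnames are example hosts for the wildcards above.
"dc.services.visualstudio.com", "gbl.his.arc.azure.com", "agentserviceapi.guestconfiguration.azure.com" |
    ForEach-Object { Test-NetConnection -ComputerName $_ -Port 443 }
```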
Preview agents (version 0.11 and lower) also require access to the following URLs:
azure-cache-for-redis Cache How To Manage Redis Cache Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-manage-redis-cache-powershell.md
# Manage Azure Cache for Redis with Azure PowerShell+ > [!div class="op_single_selector"] > * [PowerShell](cache-how-to-manage-redis-cache-powershell.md) > * [Azure CLI](cache-manage-cli.md)
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-This topic shows you how to perform common tasks such as create, update, and scale your Azure Cache for Redis instances, how to regenerate access keys, and how to view information about your caches. For a complete list of Azure Cache for Redis PowerShell cmdlets, see [Azure Cache for Redis cmdlets](/powershell/module/az.rediscache).
+This article shows you how to perform common tasks such as creating, updating, and scaling your Azure Cache for Redis instances. It also shows how to regenerate access keys and how to view information about your caches. For a complete list of Azure Cache for Redis PowerShell cmdlets, see [Azure Cache for Redis cmdlets](/powershell/module/az.rediscache).
[!INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-rm-include.md)] For more information about the classic deployment model, see [Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources](../azure-resource-manager/management/deployment-models.md). ## Prerequisites
-If you have already installed Azure PowerShell, you must have Azure PowerShell version 1.0.0 or later. You can check the version of Azure PowerShell that you have installed with this command at the Azure PowerShell command prompt.
+
+If you have already installed Azure PowerShell, you must have Azure PowerShell version 1.0.0 or later. You can check the version of Azure PowerShell you've installed with this command at the Azure PowerShell command prompt.
```azurepowershell Get-Module Az | format-table version ```
-First, you must log in to Azure with this command.
+First, you must sign in to Azure with this command.
```azurepowershell Connect-AzAccount ```
-Specify the email address of your Azure account and its password in the Microsoft Azure sign-in dialog.
+Specify the email address of your Azure account and your password in the Microsoft Azure sign-in dialog.
Next, if you have multiple Azure subscriptions, you need to set your Azure subscription. To see a list of your current subscriptions, run this command.
To specify the subscription, run the following command. In the following example
Select-AzSubscription -SubscriptionName ContosoSubscription ```
-Before you can use Windows PowerShell with Azure Resource Manager, you need the following:
+Before you can use Windows PowerShell with Azure Resource Manager, you need to verify your setup:
* Windows PowerShell, Version 3.0 or 4.0. To find the version of Windows PowerShell, type `$PSVersionTable` and verify the value of `PSVersion` is 3.0 or 4.0. To install a compatible version, see [Windows Management Framework 3.0](https://www.microsoft.com/download/details.aspx?id=34595).
For example, to get help for the `New-AzRedisCache` cmdlet, type:
``` ### How to connect to other clouds
-By default the Azure environment is `AzureCloud`, which represents the global Azure cloud instance. To connect to a different instance, use the `Connect-AzAccount` command with the `-Environment` or -`EnvironmentName` command line switch with the desired environment or environment name.
+
+By default, the Azure environment is `AzureCloud`, which represents the global Azure cloud instance. To connect to a different instance, use the `Connect-AzAccount` command with the `-Environment` or `-EnvironmentName` command-line switch and the environment or environment name you want.
To see the list of available environments, run the `Get-AzEnvironment` cmdlet. ### To connect to the Azure Government Cloud+ To connect to the Azure Government Cloud, use one of the following commands. ```azurepowershell
To create a cache in the Azure Government Cloud, use one of the following locati
For more information about the Azure Government Cloud, see [Microsoft Azure Government](https://azure.microsoft.com/features/gov/) and [Microsoft Azure Government Developer Guide](../azure-government/documentation-government-developer-guide.md).
-### To connect to the Azure China Cloud
-To connect to the Azure China Cloud, use one of the following commands.
+### To connect to the Azure China 21Vianet cloud
+
+To connect to the Azure China 21Vianet cloud, use one of the following commands.
```azurepowershell Connect-AzAccount -EnvironmentName AzureChinaCloud
To create a cache in the Azure China Cloud, use one of the following locations.
For more information about the Azure China Cloud, see [AzureChinaCloud for Azure operated by 21Vianet in China](https://www.windowsazure.cn/). ### To connect to Microsoft Azure Germany+ To connect to Microsoft Azure Germany, use one of the following commands. ```azurepowershell
To create a cache in Microsoft Azure Germany, use one of the following locations
For more information about Microsoft Azure Germany, see [Microsoft Azure Germany](https://azure.microsoft.com/overview/clouds/germany/). ### Properties used for Azure Cache for Redis PowerShell
-The following table contains properties and descriptions for commonly used parameters when creating and managing your Azure Cache for Redis instances using Azure PowerShell.
+
+The following table contains Azure PowerShell properties and descriptions for common parameters when creating and managing your Azure Cache for Redis instances.
| Parameter | Description | Default | | | | |
The following table contains properties and descriptions for commonly used param
| KeyType |Specifies which access key to regenerate when renewing access keys. Valid values are: Primary, Secondary | | ### RedisConfiguration properties+ | Property | Description | Pricing tiers | | | | | | rdb-backup-enabled |Whether [Redis data persistence](cache-how-to-premium-persistence.md) is enabled |Premium only |
The following table contains properties and descriptions for commonly used param
| databases |Configures the number of databases. This property can be configured only at cache creation. |Standard and Premium | ## To create an Azure Cache for Redis+ New Azure Cache for Redis instances are created using the [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) cmdlet. > [!IMPORTANT]
To create a cache with default parameters, run the following command.
New-AzRedisCache -ResourceGroupName myGroup -Name mycache -Location "North Central US" ```
-`ResourceGroupName`, `Name`, and `Location` are required parameters, but the rest are optional and have default values. Running the previous command creates a Standard SKU Azure Cache for Redis instance with the specified name, location, and resource group, that is 1 GB in size with the non-SSL port disabled.
+`ResourceGroupName`, `Name`, and `Location` are required parameters, but the rest are optional and have default values. Running the previous command creates a Standard SKU Azure Cache for Redis instance with the specified name, location, and resource group. The instance is 1 GB in size with the non-SSL port disabled.
-To create a premium cache, specify a size of P1 (6 GB - 60 GB), P2 (13 GB - 130 GB), P3 (26 GB - 260 GB), or P4 (53 GB - 530 GB). To enable clustering, specify a shard count using the `ShardCount` parameter. The following example creates a P1 premium cache with 3 shards. A P1 premium cache is 6 GB in size, and since we specified three shards the total size is 18 GB (3 x 6 GB).
+To create a premium cache, specify a size of P1 (6 GB - 60 GB), P2 (13 GB - 130 GB), P3 (26 GB - 260 GB), or P4 (53 GB - 530 GB). To enable clustering, specify a shard count using the `ShardCount` parameter. The following example creates a P1 premium cache with three shards. A P1 premium cache is 6 GB in size, and since we specified three shards the total size is 18 GB (3 x 6 GB).
```azurepowershell New-AzRedisCache -ResourceGroupName myGroup -Name mycache -Location "North Central US" -Sku Premium -Size P1 -ShardCount 3 ```
-To specify values for the `RedisConfiguration` parameter, enclose the values inside `{}` as key/value pairs like `@{"maxmemory-policy" = "allkeys-random", "notify-keyspace-events" = "KEA"}`. The following example creates a standard 1 GB cache with `allkeys-random` maxmemory policy and keyspace notifications configured with `KEA`. For more information, see [Keyspace notifications (advanced settings)](cache-configure.md#keyspace-notifications-advanced-settings) and [Memory policies](cache-configure.md#memory-policies).
+To specify values for the `RedisConfiguration` parameter, enclose the values inside `{}` as key/value pairs like `@{"maxmemory-policy" = "allkeys-random"; "notify-keyspace-events" = "KEA"}`. The following example creates a standard 1-GB cache with the `allkeys-random` maxmemory policy and keyspace notifications configured with `KEA`. For more information, see [Keyspace notifications (advanced settings)](cache-configure.md#keyspace-notifications-advanced-settings) and [Memory policies](cache-configure.md#memory-policies).
```azurepowershell New-AzRedisCache -ResourceGroupName myGroup -Name mycache -Location "North Central US" -RedisConfiguration @{"maxmemory-policy" = "allkeys-random"; "notify-keyspace-events" = "KEA"}
To specify values for the `RedisConfiguration` parameter, enclose the values ins
<a name="databases"></a> ## To configure the databases setting during cache creation+ The `databases` setting can be configured only during cache creation. The following example creates a premium P3 (26 GB) cache with 48 databases using the [New-AzRedisCache](/powershell/module/az.rediscache/New-azRedisCache) cmdlet. ```azurepowershell
The `databases` setting can be configured only during cache creation. The follow
For more information on the `databases` property, see [Default Azure Cache for Redis server configuration](cache-configure.md#default-redis-server-configuration). For more information on creating a cache using the [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) cmdlet, see the previous To create an Azure Cache for Redis section. ## To update an Azure Cache for Redis+ Azure Cache for Redis instances are updated using the [Set-AzRedisCache](/powershell/module/az.rediscache/Set-azRedisCache) cmdlet. To see a list of available parameters and their descriptions for `Set-AzRedisCache`, run the following command.
The following command updates the maxmemory-policy for the Azure Cache for Redis
<a name="scale"></a> ## To scale an Azure Cache for Redis+ `Set-AzRedisCache` can be used to scale an Azure Cache for Redis instance when the `Size`, `Sku`, or `ShardCount` properties are modified. > [!NOTE]
The following command updates the maxmemory-policy for the Azure Cache for Redis
> >
-The following example shows how to scale a cache named `myCache` to a 2.5 GB cache. Note that this command works for both a Basic or a Standard cache.
+The following example shows how to scale a cache named `myCache` to a 2.5-GB cache. This command works for either a Basic or a Standard cache.
```azurepowershell Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Size 2.5GB ```
-After this command is issued, the status of the cache is returned (similar to calling `Get-AzRedisCache`). Note that the `ProvisioningState` is `Scaling`.
+After this command is issued, the status of the cache is returned, similar to calling `Get-AzRedisCache`. The `ProvisioningState` is set to `Scaling`.
```azurepowershell PS C:\> Set-AzRedisCache -Name myCache -ResourceGroupName myGroup -Size 2.5GB
After this command is issued, the status of the cache is returned (similar to ca
ShardCount : ```
-When the scaling operation is complete, the `ProvisioningState` changes to `Succeeded`. If you need to make a subsequent scaling operation, such as changing from Basic to Standard and then changing the size, you must wait until the previous operation is complete or you receive an error similar to the following.
+When the scaling operation is complete, the `ProvisioningState` changes to `Succeeded`. If you need to make another scaling operation (for example, changing from Basic to Standard and then changing the size), you must wait until the previous operation is complete; otherwise, you receive an error similar to the following.
```azurepowershell Set-AzRedisCache : Conflict: The resource '...' is not in a stable state, and is currently unable to accept the update request. ``` ## To get information about an Azure Cache for Redis+ You can retrieve information about a cache using the [Get-AzRedisCache](/powershell/module/az.rediscache/get-azrediscache) cmdlet. To see a list of available parameters and their descriptions for `Get-AzRedisCache`, run the following command.
To return information about a specific cache, run `Get-AzRedisCache` with the `N
``` ## To retrieve the access keys for an Azure Cache for Redis+ To retrieve the access keys for your cache, you can use the [Get-AzRedisCacheKey](/powershell/module/az.rediscache/Get-azRedisCacheKey) cmdlet. To see a list of available parameters and their descriptions for `Get-AzRedisCacheKey`, run the following command.
To retrieve the keys for your cache, call the `Get-AzRedisCacheKey` cmdlet and p
``` ## To regenerate access keys for your Azure Cache for Redis+ To regenerate the access keys for your cache, you can use the [New-AzRedisCacheKey](/powershell/module/az.rediscache/New-azRedisCacheKey) cmdlet. To see a list of available parameters and their descriptions for `New-AzRedisCacheKey`, run the following command.
To regenerate the primary or secondary key for your cache, call the `New-AzRedis
``` ## To delete an Azure Cache for Redis+ To delete an Azure Cache for Redis, use the [Remove-AzRedisCache](/powershell/module/az.rediscache/remove-azrediscache) cmdlet. To see a list of available parameters and their descriptions for `Remove-AzRedisCache`, run the following command.
In the following example, the cache named `myCache` is removed.
## To import an Azure Cache for Redis+ You can import data into an Azure Cache for Redis instance using the `Import-AzRedisCache` cmdlet. > [!IMPORTANT]
The following command imports data from the blob specified by the SAS uri into A
``` ## To export an Azure Cache for Redis+ You can export data from an Azure Cache for Redis instance using the `Export-AzRedisCache` cmdlet. > [!IMPORTANT]
The following command exports data from an Azure Cache for Redis instance into t
``` ## To reboot an Azure Cache for Redis+ You can reboot your Azure Cache for Redis instance using the `Reset-AzRedisCache` cmdlet. > [!IMPORTANT]
The following command reboots both nodes of the specified cache.
## Next steps+ To learn more about using Windows PowerShell with Azure, see the following resources: * [Azure Cache for Redis cmdlet documentation on MSDN](/powershell/module/az.rediscache)
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-monitor.md
Last updated 02/08/2021
# Monitor Azure Cache for Redis
-Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. You can view metrics, pin metrics charts to the Startboard, customize the date and time range of monitoring charts, add and remove metrics from the charts, and set alerts when certain conditions are met. These tools enable you to monitor the health of your Azure Cache for Redis instances and help you manage your caching applications.
+Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. These tools enable you to monitor the health of your Azure Cache for Redis instances and help you manage your caching applications.
-Metrics for Azure Cache for Redis instances are collected using the Redis [INFO](https://redis.io/commands/info) command approximately twice per minute and automatically stored for 30 days (see [Export cache metrics](#export-cache-metrics) to configure a different retention policy) so they can be displayed in the metrics charts and evaluated by alert rules. For more information about the different INFO values used for each cache metric, see [Available metrics and reporting intervals](#available-metrics-and-reporting-intervals).
+Use Azure Monitor to:
+
+- view metrics
+- pin metrics charts to the Startboard
+- customize the date and time range of monitoring charts
+- add and remove metrics from the charts
+- set alerts when certain conditions are met
+
+Metrics for Azure Cache for Redis instances are collected using the Redis [INFO](https://redis.io/commands/info) command. Metrics are collected approximately twice per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
+
+To configure a different retention policy, see [Export cache metrics](#export-cache-metrics).
+
+For more information about the different INFO values used for each cache metric, see [Available metrics and reporting intervals](#available-metrics-and-reporting-intervals).
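Because these metrics surface through Azure Monitor, they can also be listed and queried from PowerShell. A minimal sketch, assuming a placeholder resource ID; metric names should be checked against the table later in this article:

```azurepowershell
# Placeholder resource ID for the cache; replace with your own.
$cacheId = "/subscriptions/<subscription-id>/resourceGroups/myGroup/providers/Microsoft.Cache/Redis/myCache"

# List the metric definitions available for the cache.
Get-AzMetricDefinition -ResourceId $cacheId | Select-Object -ExpandProperty Name

# Read one metric (name assumed) for the past hour at one-minute granularity.
Get-AzMetric -ResourceId $cacheId -MetricName "connectedclients" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:01:00 -AggregationType Maximum
```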
<a name="view-cache-metrics"></a>
-To view cache metrics, [browse](cache-configure.md#configure-azure-cache-for-redis-settings) to your cache instance in the [Azure portal](https://portal.azure.com). Azure Cache for Redis provides some built-in charts on the **Overview** blade and the **Redis metrics** blade. Each chart can be customized by adding or removing metrics and changing the reporting interval.
+To view cache metrics, [browse](cache-configure.md#configure-azure-cache-for-redis-settings) to your cache instance in the [Azure portal](https://portal.azure.com). Azure Cache for Redis provides built-in charts in **Overview** and **Redis metrics** on the left. Each chart can be customized by adding or removing metrics and changing the reporting interval.
![Six graphs are shown. One of them is Cache Hits and Cache Misses past hour.](./media/cache-how-to-monitor/redis-cache-redis-metrics-blade.png) ## View pre-configured metrics charts
-The **Overview** blade has the following pre-configured monitoring charts.
+On the left, **Overview** has the following pre-configured monitoring charts.
-* [Monitoring charts](#monitoring-charts)
-* [Usage charts](#usage-charts)
+- [Monitoring charts](#monitoring-charts)
+- [Usage charts](#usage-charts)
### Monitoring charts
-The **Monitoring** section in the **Overview** blade has **Hits and Misses**, **Gets and Sets**, **Connections**, and **Total Commands** charts.
+The **Monitoring** section, in **Overview** on the left, has **Hits and Misses**, **Gets and Sets**, **Connections**, and **Total Commands** charts.
![Monitoring charts](./media/cache-how-to-monitor/redis-cache-monitoring-part.png) ### Usage charts
-The **Usage** section in the **Overview** blade has **Redis Server Load**, **Memory Usage**, **Network Bandwidth**, and **CPU Usage** charts, and also displays the **Pricing tier** for the cache instance.
+The **Usage** section, in **Overview** on the left, has **Redis Server Load**, **Memory Usage**, **Network Bandwidth**, and **CPU Usage** charts, and also displays the **Pricing tier** for the cache instance.
![Usage charts](./media/cache-how-to-monitor/redis-cache-usage-part.png)
The **Pricing tier** displays the cache pricing tier, and can be used to [scale]
## View metrics charts for all your caches with Azure Monitor for Azure Cache for Redis
-Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) (preview) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources in a customizable unified interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
+Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) (preview) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
## View metrics with Azure Monitor metrics explorer
-For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using the Azure Monitor metrics explorer. Click **Metrics** from the **Resource menu**, and customize your chart using the desired metrics, reporting interval, chart type, and more.
+For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using the Azure Monitor metrics explorer. Select **Metrics** from the **Resource menu**, and customize your chart using your preferred metrics, reporting interval, chart type, and more.
![In the left navigation pane of contoso55, Metrics is an option under Monitoring and is highlighted. On Metrics there is a list of metrics. Cache hits and Cache misses are selected.](./media/cache-how-to-monitor/redis-cache-monitor.png) For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md). <a name="enable-cache-diagnostics"></a>+ ## Export cache metrics
-By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can [designate a storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy for your cache metrics.
+By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can [designate a storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy for your cache metrics.
To configure a storage account for your cache metrics: 1. In the **Azure Cache for Redis** page, under the **Monitoring** heading, select **Diagnostics**.
-2. Select **+ Add diagnostic setting**.
-3. Name the settings.
-4. Check **Archive to a storage account**. YouΓÇÖll be charged normal data rates for storage and transactions when you send diagnostics to a storage account.
-4. Select **Configure** to choose the storage account in which to store the cache metrics.
-5. Under the table heading **metric**, check box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum days retention you can specify is **365 days**. However, if you want to retain the metrics data forever, set **Retention (days)** to **0**.
-6. Click **Save**.
-
+1. Select **+ Add diagnostic setting**.
+1. Name the setting.
+1. Check **Archive to a storage account**. You'll be charged normal data rates for storage and transactions when you send diagnostics to a storage account.
+1. Select **Configure** to choose the storage account in which to store the cache metrics.
+1. Under the table heading **metric**, check the box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum retention you can specify is **365 days**. However, if you want to keep the metrics data forever, set **Retention (days)** to **0**.
+1. Select **Save**.
![Redis diagnostics](./media/cache-how-to-monitor/redis-cache-diagnostics.png)
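The same archival setting can be scripted with Az.Monitor. A sketch, reusing the placeholder cache resource ID from earlier and showing the retention flags for illustration (parameter support can vary by module version):

```azurepowershell
# Archive all cache metrics to a storage account and keep them for 90 days.
Set-AzDiagnosticSetting -Name "CacheMetricsToStorage" `
    -ResourceId $cacheId `
    -StorageAccountId "<storage-account-resource-id>" `
    -Enabled $true `
    -MetricCategory "AllMetrics" `
    -RetentionEnabled $true -RetentionInDays 90
```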
To configure a storage account for your cache metrics:
>In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-values). >
-To access your metrics, you can view them in the Azure portal as previously described in this article, and you can also access them using the [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
+To access your metrics, you can view them in the Azure portal as previously described in this article. You can also access them using the [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
> [!NOTE] > If you change storage accounts, the data in the previously configured storage account remains available for download, but it is not displayed in the Azure portal.
->
+>
## Available metrics and reporting intervals
-Cache metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. The **Metric** blade for each metrics chart displays the average, minimum, and maximum values for each metric in the chart, and some metrics display a total for the reporting interval.
+Cache metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. On the left, the **Metric** selection for each metrics chart displays the average, minimum, and maximum values for each metric in the chart; some metrics also display a total for the reporting interval.
Each metric includes two versions. One metric measures performance for the entire cache, and for caches that use [clustering](cache-how-to-premium-clustering.md), a second version of the metric that includes `(Shard 0-9)` in the name measures performance for a single shard in a cache. For example if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` is just the hits for that shard of the cache. > [!NOTE]
-> When you're seeing the aggregation type :
+> When you're viewing the aggregation type:
>
-> - CountΓÇ¥ show 2, it indicates the metric received 2 data points for your time granularity (1 minute).
-> - ΓÇ£MaxΓÇ¥ shows the maximum value of a data point in the time granularity,
-> - ΓÇ£MinΓÇ¥ shows the minimum value of a data point in the time granularity,
-> - ΓÇ£AverageΓÇ¥ shows the average value of all data points in the time granularity.
-> - ΓÇ£SumΓÇ¥ shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
+> - "Count" shows 2, which indicates the metric received 2 data points for your time granularity (1 minute).
+> - "Max" shows the maximum value of a data point in the time granularity.
+> - "Min" shows the minimum value of a data point in the time granularity.
+> - "Average" shows the average value of all data points in the time granularity.
+> - "Sum" shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
> Under normal conditions, "Average" and "Max" will be very similar because only one node emits these metrics (the master node). In a scenario where the number of connected clients changes rapidly, "Max," "Average," and "Min" would show very different values and this is also expected behavior.
->
+>
> Generally, "Average" will show you a smooth chart of your desired metric and reacts well to changes in time granularity. "Max" and "Min" may hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric.
-> ΓÇ£CountΓÇ¥ and ΓÇ£SumΓÇ¥ may be misleading for certain metrics (connected clients included).
+> "Count" and "Sum" may be misleading for certain metrics (connected clients included).
> > Hence, we suggest you look at the Average metrics rather than the Sum metrics. > [!NOTE] > Even when the cache is idle with no connected active client applications, you may see some cache activity, such as connected clients, memory usage, and operations being performed. This activity is normal during the operation of an Azure Cache for Redis instance.
->
->
+>
+>
| Metric | Description | | | | | Cache Hits |The number of successful key lookups during the specified reporting interval. This number maps to `keyspace_hits` from the Redis [INFO](https://redis.io/commands/info) command. |
-| Cache Latency (Preview) | The latency of the cache calculated based off the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`, which represent the average, minimum, and maximum latency of the cache respectively during the specified reporting interval. |
-| Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses do not necessarily mean there is an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item is not there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache due to memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. |
-| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and is not Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](cache-planning-faq.md#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
-| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and is not Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
-| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, subsequent connection attempts to the cache will fail. Even if there are no active client applications, there may still be a few instances of connected clients due to internal processes and connections. |
+| Cache Latency (Preview) | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. |
+| Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. |
+| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](cache-planning-faq.md#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
+| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
+| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
-| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** ΓÇô when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** ΓÇô when there is data loss on the cache</li><li>**UnresponsiveClients** ΓÇô when the clients are not reading data from the server fast enough</li><li>**AOF** ΓÇô when there is an issue related to AOF persistence</li><li>**RDB** ΓÇô when there is an issue related to RDB persistence</li><li>**Import** ΓÇô when there is an issue related to Import RDB</li><li>**Export** ΓÇô when there is an issue related to Export RDB</li></ul> |
-| Evicted Keys |The number of items evicted from the cache during the specified reporting interval due to the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. |
+| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** - when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** - when there's data loss on the cache</li><li>**UnresponsiveClients** - when the clients aren't reading data from the server fast enough</li><li>**AOF** - when there's an issue related to AOF persistence</li><li>**RDB** - when there's an issue related to RDB persistence</li><li>**Import** - when there's an issue related to Import RDB</li><li>**Export** - when there's an issue related to Export RDB</li></ul> |
+| Evicted Keys |The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. |
| Expired Keys |The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.| | Gets |The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval. | | Operations per Second | The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command. |
-| Redis Server Load |The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, it means the Redis server has hit a performance ceiling and the CPU can't process work any faster. If you are seeing high Redis Server Load, then you will see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches. |
+| Redis Server Load |The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, it means the Redis server has hit a performance ceiling and the CPU can't process work any faster. If you're seeing high Redis Server Load, you'll see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches. |
| Sets |The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. |
-| Total Keys | The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Due to a limitation of the underlying metrics system, for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. |
+| Total Keys | The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. |
| Total Operations |The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations. |
-| Used Memory |The amount of cache memory used for key/value pairs in the cache in MB during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value does not include metadata or fragmentation. |
+| Used Memory |The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation. |
| Used Memory Percentage | The % of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. | | Used Memory RSS |The amount of cache memory used in MB during the specified reporting interval, including fragmentation and metadata. This value maps to `used_memory_rss` from the Redis INFO command. | <a name="operations-and-alerts"></a>+ ## Alerts You can configure to receive alerts based on metrics and activity logs. Azure Monitor allows you to configure an alert to do the following when it triggers:
-* Send an email notification
-* Call a webhook
-* Invoke an Azure Logic App
+- Send an email notification
+- Call a webhook
+- Invoke an Azure Logic App
-To configure Alert rules for your cache, click **Alert rules** from the **Resource menu**.
+To configure Alert rules for your cache, select **Alert rules** from the **Resource menu**.
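+
+If you prefer scripting over the portal, a minimal Azure CLI sketch for an alert on high server load follows (the resource names are placeholders, and the metric name `serverLoad` is an assumption about how the Server Load metric is exposed):
+
+```azurecli
+# Placeholder names; alerts when average server load exceeds 80 percent.
+az monitor metrics alert create --name redis-high-server-load \
+    --resource-group <resource-group> \
+    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cache/Redis/<cache-name>" \
+    --condition "avg serverLoad > 80" \
+    --action <action-group-resource-id>
+```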
![Monitoring](./media/cache-how-to-monitor/redis-cache-monitoring.png) For more information about configuring and using Alerts, see [Overview of Alerts](../azure-monitor/alerts/alerts-classic-portal.md). ## Activity Logs
-Activity logs provide insight into the operations that were performed on your Azure Cache for Redis instances. It was previously known as "audit logs" or "operational logs". Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your Azure Cache for Redis instances.
+
+Activity logs provide insight into the operations that completed on your Azure Cache for Redis instances. Activity logs were previously known as "audit logs" or "operational logs". Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your Azure Cache for Redis instances.
> [!NOTE] > Activity logs do not include read (GET) operations. >
-To view activity logs for your cache, click **Activity logs** from the **Resource menu**.
+To view activity logs for your cache, select **Activity logs** from the **Resource menu**.
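+
+From the command line, a hedged sketch for listing recent entries with the Azure CLI (placeholder names):
+
+```azurecli
+# List activity log entries for the cache from the last seven days.
+az monitor activity-log list \
+    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cache/Redis/<cache-name>" \
+    --offset 7d \
+    --output table
+```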
For more information about Activity logs, see [Overview of the Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md).
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Last updated 02/08/2021 + # Configure virtual network support for a Premium Azure Cache for Redis instance
-[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation along with subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable and can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance.
+[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation, along with subnets, access control policies, and other features to restrict access further. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable. Instead, the instance can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance.
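+
+For scripted deployments, a minimal Azure CLI sketch might look like the following (placeholder names; assumes an existing virtual network with a subnet dedicated to the cache):
+
+```azurecli
+# Create a Premium cache inside an existing, dedicated subnet.
+az redis create --name contosocache --resource-group <resource-group> \
+    --location <region> --sku Premium --vm-size p1 \
+    --subnet-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
+```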
> [!NOTE] > Azure Cache for Redis supports both classic deployment model and Azure Resource Manager virtual networks.
->
+>
## Set up virtual network support Virtual network support is configured on the **New Azure Cache for Redis** pane during cache creation.
-1. To create a Premium-tier cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. In addition to creating caches in the Azure portal, you can also create them by using Resource Manager templates, PowerShell, or the Azure CLI. For more information about how to create an Azure Cache for Redis instance, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
+1. To create a Premium-tier cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can also create caches by using Resource Manager templates, PowerShell, or the Azure CLI. For more information about how to create an Azure Cache for Redis instance, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
:::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Screenshot that shows Create a resource.":::
-
+ 1. On the **New** page, select **Databases**. Then select **Azure Cache for Redis**. :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Screenshot that shows selecting Azure Cache for Redis."::: 1. On the **New Redis Cache** page, configure the settings for your new Premium-tier cache.
-
+ | Setting | Suggested value | Description | | | - | -- | | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and it can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
Virtual network support is configured on the **New Azure Cache for Redis** pane
> [!IMPORTANT] > When you deploy Azure Cache for Redis to a Resource Manager virtual network, the cache must be in a dedicated subnet that contains no other resources except for Azure Cache for Redis instances. If you attempt to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, or has a NAT Gateway assigned, the deployment fails.
- >
- >
+ >
+ >
| Setting | Suggested value | Description | | | - | -- |
Virtual network support is configured on the **New Azure Cache for Redis** pane
> [!IMPORTANT] > Azure reserves some IP addresses within each subnet, and these addresses can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance, along with three more addresses used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets)
- >
+ >
> In addition to the IP addresses used by the Azure virtual network infrastructure, each Azure Cache for Redis instance in the subnet uses two IP addresses per shard and one additional IP address for the load balancer. A nonclustered cache is considered to have one shard.
- >
+ >
1. Select the **Next: Advanced** tab, or select the **Next: Advanced** button at the bottom of the page.
When Azure Cache for Redis is hosted in a virtual network, the ports in the foll
>[!IMPORTANT] >If the ports in the following tables are blocked, the cache might not function correctly. Having one or more of these ports blocked is the most common misconfiguration issue when you use Azure Cache for Redis in a virtual network.
->
+>
-- [Outbound port requirements](#outbound-port-requirements)-- [Inbound port requirements](#inbound-port-requirements)
+* [Outbound port requirements](#outbound-port-requirements)
+* [Inbound port requirements](#inbound-port-requirements)
#### Outbound port requirements
-There are nine outbound port requirements. Outbound requests in these ranges are either outbound to other services necessary for the cache to function or internal to the Redis subnet for internode communication. For geo-replication, additional outbound requirements exist for communication between subnets of the primary and replica cache.
+There are nine outbound port requirements. Outbound requests in these ranges are either outbound to other services necessary for the cache to function, or internal to the Redis subnet for internode communication. For geo-replication, other outbound requirements exist for communication between subnets of the primary and replica cache.
| Ports | Direction | Transport protocol | Purpose | Local IP | Remote IP | | | | | | | |
There are nine outbound port requirements. Outbound requests in these ranges are
#### Geo-replication peer port requirements
-If you're using geo-replication between caches in Azure virtual networks, unblock ports 15000-15999 for the whole subnet in both inbound *and* outbound directions to both caches. With this configuration, all the replica components in the subnet can communicate directly with each other even if there's a future geo-failover.
+If you're using geo-replication between caches in Azure virtual networks, unblock ports 15000-15999 for the whole subnet, in both inbound *and* outbound directions, to both caches. With this configuration, all the replica components in the subnet can communicate directly with each other even if there's a future geo-failover.
#### Inbound port requirements
-There are eight inbound port range requirements. Inbound requests in these ranges are either inbound from other services hosted in the same virtual network or internal to the Redis subnet communications.
+There are eight inbound port range requirements. Inbound requests in these ranges are either inbound from other services hosted in the same virtual network, or internal to the Redis subnet communications.
| Ports | Direction | Transport protocol | Purpose | Local IP | Remote IP | | | | | | | |
There are network connectivity requirements for Azure Cache for Redis that might
* Outbound network connectivity to Azure Storage endpoints worldwide. Endpoints located in the same region as the Azure Cache for Redis instance and storage endpoints located in *other* Azure regions are included. Azure Storage endpoints resolve under the following DNS domains: *table.core.windows.net*, *blob.core.windows.net*, *queue.core.windows.net*, and *file.core.windows.net*. * Outbound network connectivity to *ocsp.digicert.com*, *crl4.digicert.com*, *ocsp.msocsp.com*, *mscrl.microsoft.com*, *crl3.digicert.com*, *cacerts.digicert.com*, *oneocsp.microsoft.com*, and *crl.microsoft.com*. This connectivity is needed to support TLS/SSL functionality.
-* The DNS configuration for the virtual network must be capable of resolving all of the endpoints and domains mentioned in the earlier points. These DNS requirements can be met by ensuring a valid DNS infrastructure is configured and maintained for the virtual network.
+* The DNS configuration for the virtual network must be able to resolve all of the endpoints and domains mentioned in the earlier points. These DNS requirements can be met by ensuring a valid DNS infrastructure is configured and maintained for the virtual network.
* Outbound network connectivity to the following Azure Monitor endpoints, which resolve under the following DNS domains: *shoebox2-black.shoebox2.metrics.nsatc.net*, *north-prod2.prod2.metrics.nsatc.net*, *azglobal-black.azglobal.metrics.nsatc.net*, *shoebox2-red.shoebox2.metrics.nsatc.net*, *east-prod2.prod2.metrics.nsatc.net*, *azglobal-red.azglobal.metrics.nsatc.net*, *shoebox3.prod.microsoftmetrics.com*, *shoebox3-red.prod.microsoftmetrics.com*, *shoebox3-black.prod.microsoftmetrics.com*, *azredis-red.prod.microsoftmetrics.com* and *azredis-black.prod.microsoftmetrics.com*. ### How can I verify that my cache is working in a virtual network?
There are network connectivity requirements for Azure Cache for Redis that might
After the port requirements are configured as described in the previous section, you can verify that your cache is working by following these steps: -- [Reboot](cache-administration.md#reboot) all of the cache nodes. If all of the required cache dependencies can't be reached, as documented in [Inbound port requirements](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound port requirements](cache-how-to-premium-vnet.md#outbound-port-requirements), the cache won't be able to restart successfully.-- After the cache nodes have restarted, as reported by the cache status in the Azure portal, you can do the following tests:
- - Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [tcping](https://www.elifulkerson.com/projects/tcping.php). For example:
-
+* [Reboot](cache-administration.md#reboot) all of the cache nodes. The cache won't be able to restart successfully if it can't reach all of the required cache dependencies, as documented in [Inbound port requirements](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound port requirements](cache-how-to-premium-vnet.md#outbound-port-requirements).
+* After the cache nodes have restarted, as reported by the cache status in the Azure portal, you can do the following tests:
+ + Ping the cache endpoint by using port 6380 from a machine that's within the same virtual network as the cache, using [tcping](https://www.elifulkerson.com/projects/tcping.php). For example:
+ `tcping.exe contosocache.redis.cache.windows.net 6380`
-
- If the `tcping` tool reports that the port is open, the cache is available for connection from clients in the virtual network.
- - Another way to test is to create a test cache client (which could be a simple console application using StackExchange.Redis) that connects to the cache and adds and retrieves some items from the cache. Install the sample client application onto a VM that's in the same virtual network as the cache. Then run it to verify connectivity to the cache.
+ If the `tcping` tool reports that the port is open, the cache is available for connection from clients in the virtual network.
+ + Another way to test: create a test cache client that connects to the cache, then adds and retrieves some items from the cache. The test cache client could be a console application using StackExchange.Redis. Install the sample client application onto a VM that's in the same virtual network as the cache. Then, run it to verify connectivity to the cache.
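+
+    A lighter-weight variant of the same test (assuming `redis-cli` version 6.0 or later for its TLS support, and a placeholder access key) is to issue a `PING` from a VM in the virtual network:
+
+    ```console
+    redis-cli -h contosocache.redis.cache.windows.net -p 6380 --tls -a <access-key> PING
+    ```
+
+    A `PONG` reply confirms that the cache accepts authenticated TLS connections.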
### When I try to connect to my Azure Cache for Redis instance in a virtual network, why do I get an error stating the remote certificate is invalid?
Virtual networks can only be used with Premium-tier caches.
### Why does creating an Azure Cache for Redis instance fail in some subnets but not others?
-If you're deploying an Azure Cache for Redis instance to a virtual network, the cache must be in a dedicated subnet that contains no other resource type. If an attempt is made to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, such as Azure Application Gateway instances and Outbound NAT, the deployment will usually fail. You must delete existing resources of other types before you can create a new Azure Cache for Redis instance.
+If you're deploying an Azure Cache for Redis instance to a virtual network, the cache must be in a dedicated subnet that contains no other resource type. If an attempt is made to deploy an Azure Cache for Redis instance to a Resource Manager virtual network subnet that contains other resources, such as Azure Application Gateway instances and Outbound NAT, the deployment usually fails. Delete the existing resources of other types before you create a new Azure Cache for Redis instance.
You must also have enough IP addresses available in the subnet.
You must also have enough IP addresses available in the subnet.
Azure reserves some IP addresses within each subnet, and these addresses can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance, along with three more addresses used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets)
-In addition to the IP addresses used by the Azure virtual network infrastructure, each Azure Cache for Redis instance in the subnet uses two IP addresses per cluster shard, plus additional IP addresses for additional replicas, if any. One additional IP address is used for the load balancer. A nonclustered cache is considered to have one shard.
+In addition to the IP addresses used by the Azure virtual network infrastructure, each Azure Cache for Redis instance in the subnet uses two IP addresses per cluster shard, plus IP addresses for additional replicas, if any. One more IP address is used for the load balancer. A non-clustered cache is considered to have one shard.
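+For example, a clustered cache with three shards and no extra replicas uses 3 x 2 + 1 = 7 IP addresses, on top of the five addresses that Azure reserves in every subnet.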
### Can I connect to my cache from a peered virtual network? If the virtual networks are in the same region, you can connect them using virtual network peering or a VPN Gateway VNET-to-VNET connection.
-If the peered Azure virtual networks are in *different* regions, a client VM in region 1 will not be able to access the cache in region 2 via its load balanced IP address because of a constraint with basic load balancers, unless it is a cache with a standard load balancer, which is currently only a cache that was created with *availability zones*. For more information about virtual network peering constraints, see Virtual Network - Peering - Requirements and constraints. One solution is to use a VPN Gateway VNET-to-VNET connection instead of virtual network peering.
+If the peered Azure virtual networks are in *different* regions, a client VM in region 1 can't access the cache in region 2 via its load balanced IP address, because of a constraint with basic load balancers. The exception is a cache with a standard load balancer, which is currently only a cache that was created with *availability zones*.
+
+For more information about virtual network peering constraints, see Virtual Network - Peering - Requirements and constraints. One solution is to use a VPN Gateway VNET-to-VNET connection instead of virtual network peering.
### Do all cache features work when a cache is hosted in a virtual network? When your cache is part of a virtual network, only clients in the virtual network can access the cache. As a result, the following cache management features don't work at this time:
-* **Redis Console**: Because Redis Console runs in your local browser, which is usually on a developer machine that isn't connected to the virtual network, it can't then connect to your cache.
+* **Redis Console**: Because Redis Console runs in your local browser, usually on a developer machine that isn't connected to the virtual network, it can't connect to your cache.
## Use ExpressRoute with Azure Cache for Redis Customers can connect an [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) circuit to their virtual network infrastructure. In this way, they extend their on-premises network to Azure.
-By default, a newly created ExpressRoute circuit doesn't perform forced tunneling (advertisement of a default route, 0.0.0.0/0) on a virtual network. As a result, outbound internet connectivity is allowed directly from the virtual network. Client applications can connect to other Azure endpoints, which includes an Azure Cache for Redis instance.
+By default, a newly created ExpressRoute circuit doesn't do forced tunneling (advertisement of a default route, 0.0.0.0/0) on a virtual network. As a result, outbound internet connectivity is allowed directly from the virtual network. Client applications can connect to other Azure endpoints, which include an Azure Cache for Redis instance.
A common customer configuration is to use forced tunneling (advertise a default route), which forces outbound internet traffic to instead flow on-premises. This traffic flow breaks connectivity with Azure Cache for Redis if the outbound traffic is then blocked on-premises such that the Azure Cache for Redis instance isn't able to communicate with its dependencies.
If possible, use the following configuration:
* The ExpressRoute configuration advertises 0.0.0.0/0 and, by default, force tunnels all outbound traffic on-premises. * The UDR applied to the subnet that contains the Azure Cache for Redis instance defines 0.0.0.0/0 with a working route for TCP/IP traffic to the public internet. For example, it sets the next hop type to *internet*.
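
A hedged sketch of creating that UDR with the Azure CLI (placeholder names):

```azurecli
# Create a route table with a default route to the internet, then attach it to the cache subnet.
az network route-table create --name redis-udr --resource-group <resource-group>
az network route-table route create --resource-group <resource-group> --route-table-name redis-udr \
    --name default-internet --address-prefix 0.0.0.0/0 --next-hop-type Internet
az network vnet subnet update --resource-group <resource-group> --vnet-name <vnet-name> \
    --name <cache-subnet> --route-table redis-udr
```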
-The combined effect of these steps is that the subnet-level UDR takes precedence over the ExpressRoute forced tunneling, which ensures outbound internet access from the Azure Cache for Redis instance.
+The combined effect of these steps is that the subnet-level UDR takes precedence over the ExpressRoute forced tunneling, ensuring outbound internet access from the Azure Cache for Redis instance.
Connecting to an Azure Cache for Redis instance from an on-premises application by using ExpressRoute isn't a typical usage scenario because of performance reasons. For best performance, Azure Cache for Redis clients should be in the same region as the Azure Cache for Redis instance.
azure-functions Create First Function Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-arc-cli.md
On your local computer:
# [C\#](#tab/csharp) + [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. # [JavaScript](#tab/nodejs) + [Node.js](https://nodejs.org/) version 12. Node.js version 10 is also supported.
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. # [Python](#tab/python) + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Now that you have your function app running in a container an Arc-enabled App Se
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
azure-functions Create First Function Arc Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-arc-custom-container.md
On your local computer:
# [C\#](#tab/csharp) + [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup)
On your local computer:
# [JavaScript](#tab/nodejs) + [Node.js](https://nodejs.org/) version 12. Node.js version 10 is also supported.
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup)
On your local computer:
# [Python](#tab/python) + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup)
In Azure Functions, a function project is the context for one or more individual
The `--docker` option generates a `Dockerfile` for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.
-> [!NOTE]
-> The generated `Dockerfile` references the 3.0 tag for the base image. Deploying a custom Functions image in Arc requires the base image to have a set of changes not yet assigned the 3.0 tag. For now, it is recommended that the base image reference the **3.0.15885** tag. For example, in a JavaScript application, the Docker file should be modified have `FROM mcr.microsoft.com/azure-functions/node:3.0.15885`.
- 1. Navigate into the project folder: ```console cd LocalFunctionProj ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+ This folder contains the Dockerfile and other files for the project, including configuration files named [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+
+1. Open the generated `Dockerfile` and locate the `3.0` tag for the base image. If there's a `3.0` tag, replace it with a `3.0.15885` tag. For example, in a JavaScript application, the Dockerfile should be modified to have `FROM mcr.microsoft.com/azure-functions/node:3.0.15885`. This version of the base image supports deployment to an Azure Arc-enabled Kubernetes cluster.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
Now that you have your function app running in a container an Arc-enabled App Se
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
An identity-based connection for an Azure service accepts the following properti
| Property | Required for Extensions | Environment variable | Description | |||||
-| Service URI | Azure Blob, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. |
+| Service URI | Azure Blob<sup>1</sup>, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. |
| Fully Qualified Namespace | Event Hubs, Service Bus | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs and Service Bus namespace. |
+<sup>1</sup> Both blob and queue service URIs are required for Azure Blob.
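+
+For example, a minimal sketch of setting such a value with the Azure CLI (the connection prefix `MyConnection` is hypothetical, shown here for an Azure Queue binding):
+
+```azurecli
+az functionapp config appsettings set \
+    --name <function-app-name> \
+    --resource-group <resource-group> \
+    --settings "MyConnection__serviceUri=https://<storage-account>.queue.core.windows.net"
+```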
+ Additional options may be supported for a given connection type. Please refer to the documentation for the component making the connection. ##### Local development
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Follow the steps below to create a data collection rule and association
You can create an association between an Azure virtual machine or Azure Arc enabled server using a Resource Manager template. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
+## Manage rules and association using PowerShell
+
+> [!NOTE]
+> If you want to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated with machines in other supported region(s).
+
+**Data collection rules**
+
+| Action | Command |
+|:|:|
+| Get rule(s) | [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule?view=azps-5.4.0&preserve-view=true) |
+| Create a rule | [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Update a rule | [Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Delete a rule | [Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Update 'Tags' for a rule | [Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+
+**Data collection rule associations**
+
+| Action | Command |
+|:|:|
+| Get association(s) | [Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+++
+## Manage rules and association using Azure CLI
+
+> [!NOTE]
+> If you want to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated with machines in other supported region(s).
+
+These capabilities are enabled as part of the Azure CLI **monitor-control-service** extension. [View all commands](/cli/azure/monitor/data-collection/rule?view=azure-cli-latest&preserve-view=true)
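+
+As a hedged sketch (placeholder names; the command group comes from the extension linked above):
+
+```azurecli
+# Install the extension, list rules, and associate a rule with a target machine.
+az extension add --name monitor-control-service
+az monitor data-collection rule list --resource-group <resource-group> --output table
+az monitor data-collection rule association create \
+    --name <association-name> \
+    --rule-id <data-collection-rule-resource-id> \
+    --resource <target-machine-resource-id>
+```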
+ ## Next steps
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
+
+ Title: Upgrade Azure Monitor Application Insights smart detection to alerts (Preview) | Microsoft Docs
+description: Learn about the steps required to upgrade your Azure Monitor Application Insights smart detection to alert rules
+ Last updated : 05/30/2021++
+# Migrate Azure Monitor Application Insights smart detection to alerts (Preview)
+
+This article describes the process of migrating Application Insights smart detection to alerts. The migration creates alert rules for the different smart detection modules. You can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, giving you multiple ways to act on, or be notified of, new detections.
+
+## Benefits of migration to alerts
+
+With the migration, smart detection now allows you to take advantage of the full capabilities of Azure Monitor alerts, including:
+
+- **Rich Notification options for all detectors** - [Action groups](../alerts/action-groups.md) allow you to configure multiple types of notifications and actions that are triggered when an alert is fired. You can configure notification by email, SMS, voice call or push notifications, and actions such as calling a secure webhook, Logic App, automation runbook, and more. Action groups support management at scale by allowing you to configure actions once and use them across multiple alert rules.
+- **At-scale management** of smart detection alerts using the Azure Monitor alerts experience and API.
+- **Rule based suppression of notifications** - [Action Rules](../alerts/alerts-action-rules.md) help you define or suppress actions at any Azure Resource Manager scope (Azure subscription, resource group, or target resource). They have various filters that help you narrow down the specific subset of alert instances that you want to act on.
+
+## Migrated smart detection capabilities
+
+A new set of alert rules is created when migrating an Application Insights resource. One rule is created for each of the migrated smart detection capabilities. The following table maps the pre-migration smart detection capabilities to post-migration alert rules:
+
+| Smart detection rule name <sup>(1)</sup> | Alert rule name <sup>(2)</sup> |
+| - | |
+| Degradation in server response time | Response latency degradation - *\<Application Insights resource name\>* |
+| Degradation in dependency duration | Dependency latency degradation - *\<Application Insights resource name\>*|
+| Degradation in trace severity ratio (preview) | Trace severity degradation - *\<Application Insights resource name\>*|
+| Abnormal rise in exception volume (preview) | Exception anomalies - *\<Application Insights resource name\>*|
+| Potential memory leak detected (preview) | Potential memory leak - *\<Application Insights resource name\>*|
+| Slow page load time | *discontinued* <sup>(3)</sup> |
+| Slow server response time | *discontinued* <sup>(3)</sup> |
+| Long dependency duration | *discontinued* <sup>(3)</sup> |
+| Potential security issue detected (preview) | *discontinued* <sup>(3)</sup> |
+| Abnormal rise in daily data volume (preview) | *discontinued* <sup>(3)</sup> |
+
+<sup>(1)</sup> Name of the rule as it appears in the smart detection Settings blade
+<sup>(2)</sup> Name of new alert rule after migration
+<sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed.
+
+The migration doesn't change the algorithmic design and behavior of smart detection. The same detection performance is expected before and after the change.
+
+You need to apply the migration to each Application Insights resource separately. For resources that aren't explicitly migrated, smart detection will continue to work as before.
+
+### Action group configuration for the new smart detection alert rules
+
+As part of migration, each new alert rule is automatically configured with an action group. The migration can assign a default action group for each rule. The default action group is configured according to the rule notification before the migration:
+
+- If the **smart detection rule had the default email or no notifications configured**, then the new alert rule is configured with an action group named "Application Insights Smart Detection".
+ - If the migration tool finds an existing action group with that name, it links the new alert rule to that action group.
+ - Otherwise, it creates a new action group with that name. The new group is configured for "Email Azure Resource Manager Role" actions and sends notification to your Azure Resource Manager Monitoring Contributor and Monitoring Reader users.
+
+- If the **default email notification was changed** before migration, then an action group called "Application Insights Smart Detection \<n\>" is created, with an email action sending notifications to the previously configured email addresses.
+
+Instead of using the default action group, you can select an existing action group to be configured for all the new alert rules.
+
+## Executing smart detection migration process
+
+### Migrate your smart detection using the Azure portal
+
+Apply the migration to one specific Application Insights resource at a time.
+
+To migrate smart detection in your resource, take the following steps:
+
+1. Select **Smart detection** under the **Investigate** heading in your Application Insights resource left-side menu.
+
+2. Select the banner that reads **Migrate smart detection to alerts (Preview)**. The migration dialog opens.
+
+ ![Smart detection feed banner](media/alerts-smart-detections-migration/smart-detection-feed-banner.png)
+
+3. Select an action group to be configured for the new alert rules. You can choose between using the default action group (as explained above) or using one of your existing action groups.
+
+4. Select **Migrate** to start the migration process.
+
+ ![Smart detection migration dialog](media/alerts-smart-detections-migration/smart-detection-migration-dialog.png)
+
+After the migration, new alert rules are created for your Application Insights resource, as explained above.
+
+### Migrate your smart detection using Azure CLI
+
+You can start the smart detection migration using the following Azure CLI command. The command triggers the pre-configured migration process as described previously.
+
+```azurecli
+az rest --method POST --uri /subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/migrateFromSmartDetection?api-version=2021-01-01-preview --body @body.txt
+```
+
+Where body.txt should include:
+
+```json
+{
+ "scope": [
+"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} /providers/microsoft.insights/components/{resourceName} "
+ ],
+ "actionGroupCreationPolicy" : "{Auto/Custom}",
+ "customActionGroupName" : "{actionGroupName}"
+}
+```
+
+**ActionGroupCreationPolicy** selects the policy for migrating the email settings in the smart detection rules into action groups. Allowed values are:
+
+- **'Auto'**, which uses the default action groups as described in this document
+- **'Custom'**, which creates all alert rules with the action group specified in **'customActionGroupName'**.
+- *\<blank\>* - If **ActionGroupCreationPolicy** isn't specified, the 'Auto' policy is used.
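+
+For example, a filled-in call that uses the 'Auto' policy might look like this (the subscription ID and resource names are placeholders):
+
+```azurecli
+az rest --method POST \
+    --uri "/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.AlertsManagement/migrateFromSmartDetection?api-version=2021-01-01-preview" \
+    --body '{"scope": ["/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/MyResourceGroup/providers/microsoft.insights/components/my-app"], "actionGroupCreationPolicy": "Auto"}'
+```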
+
+### Migrate your smart detection using Azure Resource Manager templates
+
+You can trigger the smart detection migration to alerts for a specific Application Insights resource, using Azure Resource Manager templates. Using this method you would need to:
+
+- Create a smart detection alert rule for each of the supported detectors
+- Modify the Application Insights properties to indicate that the migration was completed
+
+This method allows you to control which alert rules to create, define your own alert rule name and description, and select any action group you desire for each rule.
+
+The following template should be used for this purpose. Edit it as needed to provide your subscription ID and Application Insights resource name.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "applicationInsightsResourceName": {
+ "type": "string"
+ },
+ "actionGroupName": {
+ "type": "string",
+ "defaultValue": "Application Insights Smart Detection"
+ },
+ "actionGroupResourceGroup": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().Name]"
+ }
+ },
+ "variables": {
+ "applicationInsightsResourceId": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/',resourceGroup().Name,'/providers/microsoft.insights/components/',parameters('applicationInsightsResourceName'))]",
+ "actionGroupId": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/',parameters('actionGroupResourceGroup'),'/providers/microsoft.insights/ActionGroups/',parameters('actionGroupName'))]",
+ "requestPerformanceDegradationDetectorRuleName": "[concat('Response Latency Degradation - ', parameters('applicationInsightsResourceName'))]",
+ "dependencyPerformanceDegradationDetectorRuleName": "[concat('Dependency Latency Degradation - ', parameters('applicationInsightsResourceName'))]",
+ "traceSeverityDetectorRuleName": "[concat('Trace Severity Degradation - ', parameters('applicationInsightsResourceName'))]",
+ "exceptionVolumeChangedDetectorRuleName": "[concat('Exception Anomalies - ', parameters('applicationInsightsResourceName'))]",
+ "memoryLeakRuleName": "[concat('Potential Memory Leak - ', parameters('applicationInsightsResourceName'))]"
+ },
+ "resources": [
+ {
+ "name": "[variables('requestPerformanceDegradationDetectorRuleName')]",
+ "type": "Microsoft.AlertsManagement/smartdetectoralertrules",
+ "location": "global",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "description": "Response Latency Degradation notifies you of an unusual increase in latency in your app response to requests.",
+ "state": "Enabled",
+ "severity": "Sev3",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "RequestPerformanceDegradationDetector"
+ },
+ "scope": [
+ "[variables('applicationInsightsResourceId')]"
+ ],
+ "actionGroups": {
+ "groupIds": [
+ "[variables('actionGroupId')]"
+ ]
+ }
+ }
+ },
+ {
+ "name": "[variables('dependencyPerformanceDegradationDetectorRuleName')]",
+ "type": "Microsoft.AlertsManagement/smartdetectoralertrules",
+ "location": "global",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "description": "Dependency Latency Degradation notifies you of an unusual increase in response by a dependency your app is calling (e.g. REST API or database)",
+ "state": "Enabled",
+ "severity": "Sev3",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "DependencyPerformanceDegradationDetector"
+ },
+ "scope": [
+ "[variables('applicationInsightsResourceId')]"
+ ],
+ "actionGroups": {
+ "groupIds": [
+ "[variables('actionGroupId')]"
+ ]
+ }
+ }
+ },
+ {
+ "name": "[variables('traceSeverityDetectorRuleName')]",
+ "type": "Microsoft.AlertsManagement/smartdetectoralertrules",
+ "location": "global",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "description": "Trace Severity Degradation notifies you of an unusual increase in the severity of the traces generated by your app.",
+ "state": "Enabled",
+ "severity": "Sev3",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "TraceSeverityDetector"
+ },
+ "scope": [
+ "[variables('applicationInsightsResourceId')]"
+ ],
+ "actionGroups": {
+ "groupIds": [
+ "[variables('actionGroupId')]"
+ ]
+ }
+ }
+ },
+ {
+ "name": "[variables('exceptionVolumeChangedDetectorRuleName')]",
+ "type": "Microsoft.AlertsManagement/smartdetectoralertrules",
+ "location": "global",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "description": "Exception Anomalies notifies you of an unusual rise in the rate of exceptions thrown by your app.",
+ "state": "Enabled",
+ "severity": "Sev3",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "ExceptionVolumeChangedDetector"
+ },
+ "scope": [
+ "[variables('applicationInsightsResourceId')]"
+ ],
+ "actionGroups": {
+ "groupIds": [
+ "[variables('actionGroupId')]"
+ ]
+ }
+ }
+ },
+ {
+ "name": "[variables('memoryLeakRuleName')]",
+ "type": "Microsoft.AlertsManagement/smartdetectoralertrules",
+ "location": "global",
+ "apiVersion": "2019-03-01",
+ "properties": {
+ "description": "Potential Memory Leak notifies you of increased memory consumption pattern by your app which may indicate a potential memory leak.",
+ "state": "Enabled",
+ "severity": "Sev3",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "MemoryLeakDetector"
+ },
+ "scope": [
+ "[variables('applicationInsightsResourceId')]"
+ ],
+ "actionGroups": {
+ "groupIds": [
+ "[variables('actionGroupId')]"
+ ]
+ }
+ }
+ },
+ {
+ "name": "[concat(parameters('applicationInsightsResourceName'),'/migrationToAlertRulesCompleted')]",
+ "type": "Microsoft.Insights/components/ProactiveDetectionConfigs",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2018-05-01-preview",
+ "properties": {
+ "name": "migrationToAlertRulesCompleted",
+ "sendEmailsToSubscriptionOwners": false,
+ "customEmails": [],
+ "enabled": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.AlertsManagement/smartdetectoralertrules', variables('requestPerformanceDegradationDetectorRuleName'))]",
+ "[resourceId('Microsoft.AlertsManagement/smartdetectoralertrules', variables('dependencyPerformanceDegradationDetectorRuleName'))]",
+ "[resourceId('Microsoft.AlertsManagement/smartdetectoralertrules', variables('traceSeverityDetectorRuleName'))]",
+ "[resourceId('Microsoft.AlertsManagement/smartdetectoralertrules', variables('exceptionVolumeChangedDetectorRuleName'))]",
+ "[resourceId('Microsoft.AlertsManagement/smartdetectoralertrules', variables('memoryLeakRuleName'))]"
+ ]
+ }
+ ]
+}
+```
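+
+To deploy the template, you might use a command like the following (assuming the template is saved as `migrate-smart-detection.json`, a hypothetical file name):
+
+```azurecli
+az deployment group create \
+    --resource-group <resource-group> \
+    --template-file migrate-smart-detection.json \
+    --parameters applicationInsightsResourceName=<application-insights-name>
+```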
+
+## Viewing your alerts after the migration
+
+Following the migration process, you can view your smart detection alerts by selecting the Alerts entry in your Application Insights resource left-side menu. Set **Signal Type** to **Smart Detector** to filter and present only the smart detection alerts. You can select an alert to see its detection details.
+
+![Smart detection alerts](media/alerts-smart-detections-migration/smart-detector-alerts.png)
+
+You can also still see the available detections in the smart detection feed of your Application Insights resource.
+
+![Smart detection feed](media/alerts-smart-detections-migration/smart-detection-feed.png)
+
+## Managing smart detection alert rules settings after the migration
+
+### Managing alert rules settings using the Azure portal
+
+After the migration is complete, you access the new smart detection alert rules in a similar way to other alert rules defined for the resource:
+
+1. Select **Alerts** under the **Monitoring** heading in your Application Insights resource left-side menu.
+
+ ![Alerts menu](media/alerts-smart-detections-migration/application-insights-alerts.png)
+
+2. Select **Manage Alert Rules**
+
+ ![Manage alert rules](media/alerts-smart-detections-migration/manage-alert-rules.png)
+
+3. Set **Signal Type** to **Smart Detector** to filter and present the smart detection alert rules.
+
+ ![Smart Detector rules](media/alerts-smart-detections-migration/smart-detector-rules.png)
+
+### Enabling / disabling smart detection alert rules
+
+Smart detection alert rules can be enabled or disabled through the portal UI or programmatically, just like any other alert rule.
+
+If a specific smart detection rule was disabled before the migration, the new alert rule will be disabled as well.
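+
+As a hedged sketch, you could also toggle a rule's state with a generic `az rest` call against the resource type shown in the templates above (placeholder names; PATCH support is an assumption about the `2019-03-01` API):
+
+```azurecli
+az rest --method PATCH \
+    --uri "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AlertsManagement/smartDetectorAlertRules/<rule-name>?api-version=2019-03-01" \
+    --body '{"properties": {"state": "Disabled"}}'
+```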
+
+### Configuring action group for your alert rules
+
+You can create and manage action groups for the new smart detection alert rules just like for any other Azure Monitor alert rule.
+
+### Managing alert rule settings using Azure Resource Manager templates
+
+After completing the migration, you can use Azure Resource Manager templates to configure the settings of smart detection alert rules.
+
+> [!NOTE]
+> After completion of migration, smart detection settings must be configured using smart detection alert rule templates, and can no longer be configured using the [Application Insights Resource Manager template](../app/proactive-arm-config.md#smart-detection-rule-configuration).
+
+This Azure Resource Manager template example demonstrates configuring a **Response Latency Degradation** alert rule in an **Enabled** state with a severity of 2.
+* Smart detection is a global service, so the rule is created in the **global** location.
+* Set the "id" property according to the specific detector being configured. The value must be one of:
+
+ - **FailureAnomaliesDetector**
+ - **RequestPerformanceDegradationDetector**
+ - **DependencyPerformanceDegradationDetector**
+ - **ExceptionVolumeChangedDetector**
+ - **TraceSeverityDetector**
+ - **MemoryLeakDetector**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "microsoft.alertsmanagement/smartdetectoralertrules",
+ "apiVersion": "2019-03-01",
+ "name": "Response Latency Degradation - my-app",
+ "location": "global",
+ "properties": {
+ "description": "Response Latency Degradation notifies you of an unusual increase in latency in your app response to requests.",
+ "state": "Enabled",
+ "severity": "2",
+ "frequency": "PT24H",
+ "detector": {
+ "id": "RequestPerformanceDegradationDetector"
+ },
+ "scope": ["/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/MyResourceGroup/providers/microsoft.insights/components/my-app"],
+ "actionGroups": {
+ "groupIds": ["/subscriptions/00000000-1111-2222-3333-444444444444/resourcegroups/MyResourceGroup/providers/microsoft.insights/actiongroups/MyActionGroup"]
+ }
+ }
+ }
+ ]
+}
+```
+++
+## Next Steps
+
+- [Learn more about alerts in Azure](./alerts-overview.md)
+- [Learn more about smart detection in Application Insights](../app/proactive-diagnostics.md)
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-application-security-detection-pack.md
Title: Security Detection Pack with Azure Application Insights
-description: Monitor application with Azure Application Insights and Smart Detection for potential security issues.
+ Title: Security detection pack with Azure Application Insights
+description: Monitor application with Azure Application Insights and smart detection for potential security issues.
Last updated 12/12/2017
Last updated 12/12/2017
# Application security detection pack (preview)
-Application Insights automatically analyzes the telemetry generated by your application and detects potential security issues. This capability enables you to identify potential security problems, and handle them by fixing the application or by taking the necessary security measures.
+Smart detection automatically analyzes the telemetry generated by your application, and detects potential security issues. You can mitigate these issues by fixing the application, or by taking the necessary security measures.
This feature requires no special setup, other than [configuring your app to send telemetry](./usage-overview.md). ## When would I get this type of smart detection notification? There are three types of security issues that are detected:
-1. Insecure URL access: a URL in the application is being accessed via both HTTP and HTTPS. Typically, a URL that accepts HTTPS requests should not accept HTTP requests. This may indicate a bug or security issue in your application.
+1. Insecure URL access: a URL in the application is accessible via both HTTP and HTTPS. Typically, a URL that accepts HTTPS requests shouldn't accept HTTP requests. This detection may indicate a bug or security issue in your application.
2. Insecure form: a form (or other "POST" request) in the application uses HTTP instead of HTTPS. Using HTTP can compromise the user data that is sent by the form.
-3. Suspicious user activity: the application is being accessed from multiple countries/regions by the same user at approximately the same time. For example, the same user accessed the application from Spain and the United States within the same hour. This detection indicates a potentially malicious access attempt to your application.
+3. Suspicious user activity: the same user accesses the application from multiple countries or regions, around the same time. For example, the same user accessed the application from Spain and the United States within the same hour. This detection indicates a potentially malicious access attempt to your application.
## Does my app definitely have a security issue?
-No, a notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios above can, in many cases, indicate a security issue. However, the detection may have a natural business justification, and can be ignored.
+A notification doesn't mean that your app definitely has a security issue. A detection of any of the scenarios above can, in many cases, indicate a security issue. In other cases, the detection may have a natural business justification, and can be ignored.
## How do I fix the "Insecure URL access" detection?
-1. **Triage.** The notification provides the number of users who accessed insecure URLs, and the URL that was most affected by insecure access. This can help you assign a priority to the problem.
-2. **Scope.** What percentage of the users accessed insecure URLs? How many URLs were affected? This information can be obtained from the notification.
-3. **Diagnose.** The detection provides the list of insecure requests, and the lists of URLs and users that were affected, to help you further diagnose the issue.
+1. **Triage.** The notification provides the number of users who accessed insecure URLs, and the URL that was most affected by insecure access. This information can help you assign a priority to the problem.
+2. **Scope.** What percentage of the users accessed insecure URLs? How many URLs were affected? This information can be obtained from the notification.
+3. **Diagnose.** The detection provides the list of insecure requests, and the lists of URLs and users that were affected, to help you further diagnose the issue.
## How do I fix the "Insecure form" detection?
-1. **Triage.** The notification provides the number of insecure forms and number of users whose data was potentially compromised. This can help you assign a priority to the problem.
+1. **Triage.** The notification provides the number of insecure forms, and the number of users whose data was potentially compromised. This information can help you assign a priority to the problem.
2. **Scope.** Which form was involved in the largest number of insecure transmissions, and what is the distribution of insecure transmissions over time? This information can be obtained from the notification.
-3. **Diagnose.** The detection provides the list of insecure forms and a breakdown of the number of insecure transmissions for each form, to help you further diagnose the issue.
+3. **Diagnose.** The detection provides the list of insecure forms, and a breakdown of insecure transmissions for each form, to help you further diagnose the issue.
## How do I fix the "Suspicious user activity" detection?
-1. **Triage.** The notification provides the number of different users that exhibited the suspicious behavior. This can help you assign a priority to the problem.
-2. **Scope.** From which countries/regions did the suspicious requests originate? Which user was the most suspicious? This information can be obtained from the notification.
-3. **Diagnose.** The detection provides the list of suspicious users and the list of countries/regions for each user, to help you further diagnose the issue.
+1. **Triage.** The notification provides the number of different users that showed the suspicious behavior. This information can help you assign a priority to the problem.
+2. **Scope.** From which countries or regions did the suspicious requests originate? Which user was the most suspicious? This information can be obtained from the notification.
+3. **Diagnose.** The detection provides the list of suspicious users and the list of countries or regions for each user, to help you further diagnose the issue.
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-arm-config.md
Last updated 02/14/2021
- # Manage Application Insights smart detection rules using Azure Resource Manager templates
+>[!NOTE]
+>You can migrate your Application Insights resources to alert-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, enabling multiple ways to take action or trigger notifications on new detections.
+>
+> See [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
+>
+ Smart detection rules in Application Insights can be managed and configured using [Azure Resource Manager templates](../../azure-resource-manager/templates/template-syntax.md). This method can be used when deploying new Application Insights resources with Azure Resource Manager automation, or for modifying the settings of existing resources.
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-diagnostics.md
Title: Smart Detection in Azure Application Insights | Microsoft Docs
+ Title: Smart detection in Azure Application Insights | Microsoft Docs
description: Application Insights performs automatic deep analysis of your app telemetry and warns you of potential problems. Last updated 02/07/2019
-# Smart Detection in Application Insights
- Smart Detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](./app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
+# Smart detection in Application Insights
-You can access the detections issued by Smart Detection both from the emails you receive, and from the Smart Detection blade.
+>[!NOTE]
+>You can migrate smart detection on your Application Insights resource to be based on alerts. The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
-## Review your Smart Detections
+Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](./app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
+
+You can access the detections issued by smart detection both from the emails you receive, and from the smart detection blade.
+
+## Review your smart detections
You can discover detections in two ways: * **You receive an email** from Application Insights. Here's a typical example: ![Email alert](./media/proactive-diagnostics/03.png)
- Click the big button to open more detail in the portal.
-* **The Smart Detection blade** in Application Insights. Select **Smart Detection** under the **Investigate** menu to see a list of recent detections.
+ Click the large button to open more detail in the portal.
+* **The smart detection blade** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
![View recent detections](./media/proactive-diagnostics/04.png)
-Select a detection to see its details.
+Select a detection to view its details.
## What problems are detected?
-Smart Detection detects and notifies about a variety of issues, such as:
-* [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md). We use machine learning to set the expected rate of failed requests for your app, correlating with load and other factors. If the failure rate goes outside the expected envelope, we send an alert.
-* [Smart detection - Performance Anomalies](./proactive-performance-diagnostics.md). You get notifications if response time of an operation or dependency duration is slowing down compared to historical baseline or if we identify an anomalous pattern in response time or page load time.
+Smart detection detects and notifies about various issues, such as:
+
+* [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md). We use machine learning to set the expected rate of failed requests for your app, correlating with load and other factors. You're notified if the failure rate goes outside the expected envelope.
+* [Smart detection - Performance Anomalies](./proactive-performance-diagnostics.md). You're notified if the response time of an operation or the duration of a dependency is slowing down compared to the historical baseline, or if we identify an anomalous pattern in response time or page load time.
* General degradations and issues, like [Trace degradation](./proactive-trace-severity.md), [Memory leak](./proactive-potential-memory-leak.md), [Abnormal rise in Exception volume](./proactive-exception-volume.md) and [Security anti-patterns](./proactive-application-security-detection-pack.md). (The help links in each notification take you to the relevant articles.)
-## Smart Detection email notifications
+## Smart detection email notifications
-All Smart Detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
+All smart detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
-Configuring email notifications for a specific Smart Detection rule can be done by opening the Smart Detection **Settings** blade and selecting the rule, which will open the **Edit rule** blade.
+To configure email notifications for a specific smart detection rule, open the smart detection **Settings** blade and select the rule. The **Edit rule** blade opens.
-Alternatively, you can change the configuration using Azure Resource Manager templates. [See Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md) for more details.
+Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
## Video > [!VIDEO https://channel9.msdn.com/events/Connect/2016/112/player] ## Next steps
These diagnostic tools help you inspect the telemetry from your app:
* [Search explorer](./diagnostic-search.md) * [Analytics - powerful query language](../logs/log-analytics-tutorial.md)
-Smart Detection is completely automatic. But maybe you'd like to set up some more alerts?
+Smart detection is automatic. But maybe you'd like to set up some more alerts?
* [Manually configured metric alerts](../alerts/alerts-log.md)
-* [Availability web tests](./monitor-web-app-availability.md)
+* [Availability web tests](./monitor-web-app-availability.md)
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-email-notification.md
Last updated 02/14/2021
- # Smart Detection e-mail notification change
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> See [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
+ Based on customer feedback, on April 1, 2019, we're changing the default roles that receive email notifications from Smart Detection. ## What is changing?
azure-monitor Proactive Exception Volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-exception-volume.md
Title: Abnormal rise in exception volume - Azure Application Insights
-description: Monitor application exceptions with Smart Detection in Azure Application Insights for unusual patterns in exception volume.
+description: Monitor application exceptions with smart detection in Azure Application Insights for unusual patterns in exception volume.
Last updated 12/08/2017 - # Abnormal rise in exception volume (preview)
-Application Insights automatically analyzes the exceptions thrown in your application, and can warn you about unusual patterns in your exception telemetry.
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
+
+Smart detection automatically analyzes the exceptions thrown in your application, and can warn you about unusual patterns in your exception telemetry.
-This feature requires no special setup, other than [configuring exception reporting](./asp-net-exceptions.md#set-up-exception-reporting) for your app. It is active when your app generates enough exception telemetry.
+This feature requires no special setup, other than [configuring exception reporting](./asp-net-exceptions.md#set-up-exception-reporting) for your app. It's active when your app generates enough exception telemetry.
## When would I get this type of smart detection notification?
-You might get this type of notification if your app is exhibiting an abnormal rise in the number of exceptions of a specific type during a day, compared to a baseline calculated over the previous seven days.
-Machine learning algorithms are being used for detecting the rise in exception count, while taking into account a natural growth in your application usage.
+You get this type of notification if your app shows an abnormal rise in the number of exceptions of a specific type during a day. This number is compared to a baseline calculated over the previous seven days.
+Machine learning algorithms are used for detecting the rise in exception count, while taking into account a natural growth in your application usage.
## Does my app definitely have a problem? No, a notification doesn't mean that your app definitely has a problem. Although an excessive number of exceptions usually indicates an application issue, these exceptions might be benign and handled correctly by your application. ## How do I fix it? The notifications include diagnostic information to support in the diagnostics process:
-1. **Triage.** The notification shows you how many users or how many requests are affected. This can help you assign a priority to the problem.
+1. **Triage.** The notification shows you how many users or how many requests are affected. This information can help you assign a priority to the problem.
2. **Scope.** Is the problem affecting all traffic, or just some operation? This information can be obtained from the notification.
-3. **Diagnose.** The detection contains information about the method from which the exception was thrown, as well as the exception type. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue.
+3. **Diagnose.** The detection contains information about the method from which the exception was thrown, and the exception type. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-performance-diagnostics.md
Title: Smart Detection - performance anomalies | Microsoft Docs
-description: Application Insights performs smart analysis of your app telemetry and warns you of potential problems. This feature needs no setup.
+ Title: Smart detection - performance anomalies | Microsoft Docs
+description: Smart detection analyzes your app telemetry and warns you of potential problems. This feature needs no setup.
Last updated 05/04/2017
-# Smart Detection - Performance Anomalies
+# Smart detection - Performance Anomalies
-[Application Insights](./app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems. You might be reading this because you received one of our smart detection notifications.
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> For more information on the migration process, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
-This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](./platforms.md). It is active when your app generates enough telemetry.
+[Application Insights](./app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems.
+
+This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](./platforms.md). It's active when your app generates enough telemetry.
## When would I get a smart detection notification?
Application Insights has detected that the performance of your application has d
* **Dependency duration degradation** - Your app makes calls to a REST API, database, or other dependency. The dependency is responding more slowly than it used to. * **Slow performance pattern** - Your app appears to have a performance issue that is affecting only some requests. For example, pages are loading more slowly on one type of browser than others; or requests are being served more slowly from one particular server. Currently, our algorithms look at page load times, request response times, and dependency response times.
-Smart Detection requires at least 8 days of telemetry at a workable volume in order to establish a baseline of normal performance. So, after your application has been running for that period, any significant issue will result in a notification.
+To establish a baseline of normal performance, smart detection requires at least eight days of sufficient telemetry volume. After your application has been running for that period, significant anomalies will result in a notification.
## Does my app definitely have a problem?
The notifications include diagnostic information. Here's an example:
![Here is an example of Server Response Time Degradation detection](media/proactive-performance-diagnostics/server_response_time_degradation.png)
-1. **Triage**. The notification shows you how many users or how many operations are affected. This can help you assign a priority to the problem.
+1. **Triage**. The notification shows you how many users or how many operations are affected. This information can help you assign a priority to the problem.
2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification.
-3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, that suggests your server or dependencies are overloaded.
-
- Otherwise, open the Performance blade in Application Insights. There, you will find [Profiler](profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](./snapshot-debugger.md).
-
+3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, it may indicate that your server or dependencies are beyond their capacity.
+ Otherwise, open the Performance blade in Application Insights. There, you'll find [Profiler](profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](./snapshot-debugger.md).
## Configure Email Notifications
-Smart Detection notifications are enabled by default and sent to those who have [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) access to the subscription in which the Application Insights resource resides. To change this, either click **Configure** in the email notification, or open Smart Detection settings in Application Insights.
+Smart detection notifications are enabled by default. They are sent to users who have [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) access to the subscription in which the Application Insights resource resides. To change the default notification, either click **Configure** in the email notification, or open **Smart detection settings** in Application Insights.
![Smart Detection Settings](media/proactive-performance-diagnostics/smart_detection_configuration.png)
- * You can use the **unsubscribe** link in the Smart Detection email to stop receiving the email notifications.
+ * You can use the **unsubscribe** link in the smart detection email to stop receiving the email notifications.
-Emails about Smart Detections performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
+Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
## FAQ * *So, Microsoft staff look at my data?* * No. The service is entirely automatic. Only you get the notifications. Your data is [private](./data-retention-privacy.md). * *Do you analyze all the data collected by Application Insights?*
- * Not at present. Currently, we analyze request response time, dependency response time and page load time. Analysis of additional metrics is on our backlog looking forward.
+ * Currently, we analyze request response time, dependency response time, and page load time. Analysis of other metrics is on our backlog.
-* What types of application does this work for?
- * These degradations are detected in any application that generates the appropriate telemetry. If you installed Application Insights in your web app, then requests and dependencies are automatically tracked. But in backend services or other apps, if you inserted calls to [TrackRequest()](./api-custom-events-metrics.md#trackrequest) or [TrackDependency](./api-custom-events-metrics.md#trackdependency), then Smart Detection will work in the same way.
+* What types of application does this detection work for?
+ * These degradations are detected in any application that generates the appropriate telemetry. If you installed Application Insights in your web app, then requests and dependencies are automatically tracked. But in backend services or other apps, if you inserted calls to [TrackRequest()](./api-custom-events-metrics.md#trackrequest) or [TrackDependency](./api-custom-events-metrics.md#trackdependency), then smart detection will work in the same way.
* *Can I create my own anomaly detection rules or customize existing rules?* * Not yet, but you can: * [Set up alerts](../alerts/alerts-log.md) that tell you when a metric crosses a threshold. * [Export telemetry](./export-telemetry.md) to a [database](./code-sample-export-sql-stream-analytics.md) or [to Power BI](./export-power-bi.md), where you can analyze it yourself.
-* *How often is the analysis performed?*
+* *How often is the analysis done?*
* We run the analysis daily on the telemetry from the previous day (full day in UTC timezone).
-* *So does this replace [metric alerts](../alerts/alerts-log.md)?*
- * No. We don't commit to detecting every behavior that you might consider abnormal.
+* *Does this replace [metric alerts](../alerts/alerts-log.md)?*
+ * No. We don't commit to detecting every behavior that you might consider abnormal.
* *If I don't do anything in response to a notification, will I get a reminder?*
- * No, you get a message about each issue only once. If the issue persists it will be updated in the Smart Detection feed blade.
+ * No, you get a message about each issue only once. If the issue persists, it will be updated in the smart detection feed blade.
* *I lost the email. Where can I find the notifications in the portal?*
- * In the Application Insights overview of your app, click the **Smart Detection** tile. There you'll be able to find all notifications up to 90 days back.
+ * In the Application Insights overview of your app, click the **Smart detection** tile. There you'll find all notifications up to 90 days back.
## How can I improve performance? Slow and failed responses are one of the biggest frustrations for web site users, as you know from your own experience. So, it's important to address the issues. ### Triage
-First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look at it, maybe you have more important things to think about. On the other hand, if only 1% of users open it, but it throws exceptions every time, that might be worth investigating.
+First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look at it, maybe you have more important things to think about. However, if only 1% of users open it, but it throws exceptions every time, that might be worth investigating.
-Use the impact statement (affected users or % of traffic) as a general guide, but be aware that it isn't the whole story. Gather other evidence to confirm.
+Use the impact statement, such as affected users or % of traffic, as a general guide. Be aware that it may not be telling the whole story. Gather other evidence to confirm.
-Consider the parameters of the issue. If it's geography-dependent, set up [availability tests](./monitor-web-app-availability.md) including that region: there might simply be network issues in that area.
+Consider the parameters of the issue. If it's geography-dependent, set up [availability tests](./monitor-web-app-availability.md) including that region: there might be network issues in that area.
### Diagnose slow page loads
-Where is the problem? Is the server slow to respond, is the page very long, or does the browser have to do a lot of work to display it?
+Where is the problem? Is the server slow to respond, is the page very long, or does the browser have to do too much work to display it?
Open the Browsers metric blade. The segmented display of browser page load time shows where the time is going.
-* If **Send Request Time** is high, either the server is responding slowly, or the request is a post with a lot of data. Look at the [performance metrics](./performance-counters.md) to investigate response times.
-* Set up [dependency tracking](./asp-net-dependencies.md) to see whether the slowness is due to external services or your database.
-* If **Receiving Response** is predominant, your page and its dependent parts - JavaScript, CSS, images and so on (but not asynchronously loaded data) are long. Set up an [availability test](./monitor-web-app-availability.md), and be sure to set the option to load dependent parts. When you get some results, open the detail of a result and expand it to see the load times of different files.
+* If **Send Request Time** is high, either the server is responding slowly, or the request is a post with a large amount of data. Look at the [performance metrics](./performance-counters.md) to investigate response times.
+* Set up [dependency tracking](./asp-net-dependencies.md) to see whether the slowness is because of external services or your database.
+* If **Receiving Response** is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on (but not asynchronously loaded data) are long. Set up an [availability test](./monitor-web-app-availability.md), and be sure to set the option to load dependent parts. When you get some results, open the detail of a result and expand it to see the load times of different files.
* High **Client Processing time** suggests scripts are running slowly. If the reason isn't obvious, consider adding some timing code and sending the times in trackMetric calls. ### Improve slow pages There's a web full of advice on improving your server responses and page load times, so we won't try to repeat it all here. Here are a few tips that you probably already know about, just to get you thinking:
-* Slow loading because of big files: Load the scripts and other parts asynchronously. Use script bundling. Break the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a script to request the data as JSON or other compact format, then fill the table in place. There are great frameworks to help with all this. (They also entail big scripts, of course.)
+* Slow loading because of large files: Load the scripts and other parts asynchronously. Use script bundling. Break the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a script to request the data as JSON or other compact format, then fill the table in place. There are great frameworks to help with such tasks. (They also include large scripts, of course.)
* Slow server dependencies: Consider the geographical locations of your components. For example, if you're using Azure, make sure the web server and the database are in the same region. Do queries retrieve more information than they need? Would caching or batching help? * Capacity issues: Look at the server metrics of response times and request counts. If response times peak disproportionately with peaks in request counts, it's likely that your servers are stretched.
The response time degradation notification tells you:
* The response time compared to normal response time for this operation. * How many users are affected.
-* Average response time and 90th percentile response time for this operation on the day of the detection and 7 days before.
-* Count of this operation requests on the day of the detection and 7 days before.
+* Average response time and 90th percentile response time for this operation on the day of the detection and seven days before.
+* Count of this operation requests on the day of the detection and seven days before.
* Correlation between degradation in this operation and degradations in related dependencies. * Links to help you diagnose the problem.
- * Profiler traces to help you view where operation time is spent (the link is available if Profiler trace examples were collected for this operation during the detection period).
+ * Profiler traces can help you view where operation time is spent. The link is available if Profiler trace examples exist for this operation.
* Performance reports in Metric Explorer, where you can slice and dice time range/filters for this operation. * Search for this call to view specific call properties.
- * Failure reports - If count > 1 this means that there were failures in this operation that might have contributed to performance degradation.
+ * Failure reports - If count > 1, it means that there were failures in this operation that might have contributed to performance degradation.
## Dependency Duration Degradation
-Modern applications more and more adopt a micro services design approach, which in many cases leads to heavy reliability on external services. For example, if your application relies on some data platform or even if you build your own bot service you will probably relay on some cognitive services provider to enable your bots to interact in more human ways and some data store service for bot to pull the answers from.
+Modern applications often adopt a microservices design approach, which in many cases relies heavily on external services. For example, your application might rely on a data platform, or on a critical service provider such as Cognitive Services.
-Example dependency degradation notification:
+Here's an example of a dependency degradation notification:
![Here is an example of Dependency Duration Degradation detection](media/proactive-performance-diagnostics/dependency_duration_degradation.png)
Notice that it tells you:
* The duration compared to normal response time for this operation * How many users are affected
-* Average duration and 90th percentile duration for this dependency on the day of the detection and 7 days before
-* Number of dependency calls on the day of the detection and 7 days before
+* Average duration and 90th percentile duration for this dependency on the day of the detection and seven days before
+* Number of dependency calls on the day of the detection and seven days before
* Links to help you diagnose the problem * Performance reports in Metric Explorer for this dependency * Search for calls to this dependency to view call properties
- * Failure reports - If count > 1 this means that there were failed dependency calls during the detection period that might have contributed to duration degradation.
+ * Failure reports - If count > 1, it means that there were failed dependency calls during the detection period that might have contributed to duration degradation.
* Open Analytics with queries that calculate this dependency duration and count
-## Smart Detection of slow performing patterns
+## Smart detection of slow performing patterns
-Application Insights finds performance issues that might only affect some portion of your users, or only affect users in some cases. For example, notification about pages load is slower on one type of browser than on other types of browsers, or if requests are served more slowly from a particular server. It can also discover problems associated with combinations of properties, such as slow page loads in one geographical area for clients using particular operating system.
+Application Insights finds performance issues that might only affect some portion of your users, or only affect users in some cases. For example, a page might load more slowly on a specific browser type than on others, or a particular server might handle requests more slowly than other servers. It can also discover problems that are associated with combinations of properties, such as slow page loads in one geographical area for clients using a particular operating system.
-Anomalies like these are very hard to detect just by inspecting the data, but are more common than you might think. Often they only surface when your customers complain. By that time, it's too late: the affected users are already switching to your competitors!
+Anomalies like these are hard to detect just by inspecting the data, but are more common than you might think. Often they only surface when your customers complain. By that time, it's too late: the affected users are already switching to your competitors!
Currently, our algorithms look at page load times, request response times at the server, and dependency response times.
You don't have to set any thresholds or configure rules. Machine learning and da
![From the email alert, click the link to open the diagnostic report in Azure](./media/proactive-performance-diagnostics/03.png) * **When** shows the time the issue was detected.
-* **What** describes:
-
- * The problem that was detected;
- * The characteristics of the set of events that we found displayed the problem behavior.
-* The table compares the poorly-performing set with the average behavior of all other events.
+* **What** describes the problem that was detected, and the characteristics of the set of events that displayed the problem behavior.
+* The table compares the poorly performing set with the average behavior of all other events.
-Click the links to open Metric Explorer and Search on relevant reports, filtered on the time and properties of the slow performing set.
+Click the links to open Metric Explorer to view reports, filtered by the time and properties of the slow performing set.
Modify the time range and filters to explore the telemetry.
These diagnostic tools help you inspect the telemetry from your app:
* [Analytics](../logs/log-analytics-tutorial.md) * [Analytics smart diagnostics](../logs/log-query-overview.md)
-Smart detections are completely automatic. But maybe you'd like to set up some more alerts?
+Smart detection is automatic. But maybe you'd like to set up some more alerts?
* [Manually configured metric alerts](../alerts/alerts-log.md)
-* [Availability web tests](./monitor-web-app-availability.md)
+* [Availability web tests](./monitor-web-app-availability.md)
azure-monitor Proactive Potential Memory Leak https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-potential-memory-leak.md
Title: Detect memory leak - Azure Application Insights Smart Detection
+ Title: Detect memory leak - Azure Application Insights smart detection
description: Monitor applications with Azure Application Insights for potential memory leaks. Last updated 12/12/2017 - # Memory leak detection (preview)
-Application Insights automatically analyzes the memory consumption of each process in your application, and can warn you about potential memory leaks or increased memory consumption.
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
+
+Smart detection automatically analyzes the memory consumption of each process in your application, and can warn you about potential memory leaks or increased memory consumption.
This feature requires no special setup, other than [configuring performance counters](./performance-counters.md) for your app. It's active when your app generates enough memory performance counters telemetry (for example, Private Bytes). ## When would I get this type of smart detection notification?
-A typical notification will follow a consistent increase in memory consumption over a long period of time, in one or more processes and/or one or more machines, which are part of your application. Machine learning algorithms are used for detecting increased memory consumption that matches the pattern of a memory leak.
+A typical notification follows a consistent increase in memory consumption, over a long period of time, in one or more processes or machines that are part of your application. Machine learning algorithms are used for detecting increased memory consumption that matches the pattern of a memory leak.
## Does my app really have a problem?
-No, a notification doesn't mean that your app definitely has a problem. Although memory leak patterns usually indicate an application issue, these patterns could be typical to your specific process, or could have a natural business justification, and can be ignored.
+A notification doesn't mean that your app definitely has a problem. Although memory leak patterns often indicate an application issue, these patterns could be typical of your specific process, or could have a natural business justification. In such cases, the notification can be ignored.
## How do I fix it? The notifications include diagnostic information to support in the diagnostic analysis process:
-1. **Triage.** The notification shows you the amount of memory increase (in GB), and the time range in which the memory has increased. This can help you assign a priority to the problem.
+1. **Triage.** The notification shows you the amount of memory increase (in GB), and the time range in which the memory has increased. This information can help you assign a priority to the problem.
2. **Scope.** How many machines exhibited the memory leak pattern? How many exceptions were triggered during the potential memory leak? This information can be obtained from the notification. 3. **Diagnose.** The detection contains the memory leak pattern, showing memory consumption of the process over time. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-monitor Proactive Trace Severity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-trace-severity.md
Title: Degradation in trace severity ratio - Azure Application Insights
-description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with Smart Detection .
+description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with smart detection.
Last updated 11/27/2017 - # Degradation in trace severity ratio (preview)
-Traces are widely used in applications, as they help tell the story of what happens behind the scenes. When things go wrong, traces provide crucial visibility into the sequence of events leading to the undesired state. While traces are generally unstructured, there is one thing that can concretely be learned from them ΓÇô their severity level. In an applicationΓÇÖs steady state, we would expect the ratio between ΓÇ£goodΓÇ¥ traces (*Info* and *Verbose*) and ΓÇ£badΓÇ¥ traces (*Warning*, *Error*, and *Critical*) to remain stable. The assumption is that ΓÇ£badΓÇ¥ traces may happen on a regular basis to a certain extent due to any number of reasons (transient network issues for instance). But when a real problem begins growing, it usually manifests as an increase in the relative proportion of ΓÇ£badΓÇ¥ traces vs ΓÇ£goodΓÇ¥ traces. Application Insights Smart Detection automatically analyzes the traces logged by your application, and can warn you about unusual patterns in the severity of your trace telemetry.
+>[!NOTE]
+>You can migrate your Application Insights resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notifications on new detections.
+>
+> For more information, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
+>
+
+Traces are widely used in applications, and they help tell the story of what happens behind the scenes. When things go wrong, traces provide crucial visibility into the sequence of events leading to the undesired state. While traces are mostly unstructured, their severity level can still provide valuable information. In an application's steady state, we would expect the ratio between "good" traces (*Info* and *Verbose*) and "bad" traces (*Warning*, *Error*, and *Critical*) to remain stable.
+
+It's normal to expect some level of "bad" traces for any number of reasons, such as transient network issues. But when a real problem begins growing, it usually manifests as an increase in the relative proportion of "bad" traces vs "good" traces. Smart detection automatically analyzes the trace telemetry that your application logs, and can warn you about unusual patterns in their severity.
-This feature requires no special setup, other than configuring trace logging for your app (see how to configure a trace log listener for [.NET](./asp-net-trace-logs.md) or [Java](java-2x-trace-logs.md)). It is active when your app generates enough exception telemetry.
+This feature requires no special setup, other than configuring trace logging for your app. See how to configure a trace log listener for [.NET](./asp-net-trace-logs.md) or [Java](./java-trace-logs.md). It's active when your app generates enough trace telemetry.
## When would I get this type of smart detection notification?
-You might get this type of notification if the ratio between ΓÇ£goodΓÇ¥ traces (traces logged with a level of *Info* or *Verbose*) and ΓÇ£badΓÇ¥ traces (traces logged with a level of *Warning*, *Error*, or *Fatal*) is degrading in a specific day, compared to a baseline calculated over the previous seven days.
+You get this type of notification if the ratio between "good" traces (traces logged with a level of *Info* or *Verbose*) and "bad" traces (traces logged with a level of *Warning*, *Error*, or *Fatal*) is degrading on a specific day, compared to a baseline calculated over the previous seven days.
## Does my app definitely have a problem?
-No, a notification doesn't mean that your app definitely has a problem. Although a degradation in the ratio between ΓÇ£goodΓÇ¥ and ΓÇ£badΓÇ¥ traces might indicate an application issue, this change in ratio might be benign. For example, the increase might be due to a new flow in the application emitting more ΓÇ£badΓÇ¥ traces than existing flows).
+A notification doesn't mean that your app definitely has a problem. Although a degradation in the ratio between "good" and "bad" traces might indicate an application issue, it can also be benign. For example, the increase might be caused by a new flow in the application emitting more "bad" traces than existing flows.
## How do I fix it? The notifications include diagnostic information to support in the diagnostics process:
-1. **Triage.** The notification shows you how many operations are affected. This can help you assign a priority to the problem.
+1. **Triage.** The notification shows you how many operations are affected. This information can help you assign a priority to the problem.
2. **Scope.** Is the problem affecting all traffic, or just some operation? This information can be obtained from the notification. 3. **Diagnose.** You can use the related items and reports linking to supporting information, to help you further diagnose the issue.
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
na ms.devlang: na Previously updated : 01/20/2021 Last updated : 06/04/2021
Note the following requirements and considerations about [using the volume cross
* Configuring volume replication for source volumes created from snapshot is not supported at this time. * After you set up cross-region replication, the replication process creates *snapmirror snapshots* to provide references between the source volume and the destination volume. Snapmirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete snapmirror snapshots until replication relationship and volume is deleted. * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken.
-* You cannot revert to a snapshot that was taken before the replication destination volume was created.
+* You cannot revert a source or destination volume of cross-region replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship.
## Next steps * [Create volume replication](cross-region-replication-create-peering.md)
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
na ms.devlang: na Previously updated : 06/01/2021 Last updated : 06/03/2021 # Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries
By default, modern Linux kernels define the per-connection `sunrpc` slot table e
|-|-| | 128 | 65,536 |
-These slot table entries define the limits of concurrency. Values this high are unnecessary. For example, using a queueing theory *Littles Law*, you will find that the I/O rate is determined by concurrency (that is, outstanding I/O) and latency. As such, the algorithm proves that 65,536 slots are orders of magnitude higher than what is needed to drive even extremely demanding workloads.
+These slot table entries define the limits of concurrency. Values this high are unnecessary. For example, using the queueing theory result known as *Little's Law*, you will find that the I/O rate is determined by concurrency (that is, outstanding I/O) and latency. As such, the formula shows that 65,536 slots are orders of magnitude higher than what is needed to drive even extremely demanding workloads.
`Little's Law: (concurrency = operation rate × latency in seconds)`
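As a quick worked example, rearrange the formula and assume a generous 1-millisecond round-trip latency (the latency value here is illustrative only):

```
operation rate = concurrency ÷ latency in seconds
               = 65,536 ÷ 0.001
               ≈ 65.5 million operations per second
```

That rate is far beyond what even extremely demanding workloads drive, which is why such high slot table values are unnecessary.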
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files
-description: Describes how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use a PowerShell script, or copy files to a staging location and deploy from there. (Bicep)
+description: Describes how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use a PowerShell script, or copy files to a staging location and deploy from there.
Last updated 06/01/2021
# Integrate Bicep with Azure Pipelines
-You can integrate Bicep file with Azure Pipelines for continuous integration and continuous deployment (CI/CD). In this article, you learn how to build a Bicep file into a JSON template and then use two advanced ways to deploy templates with Azure Pipelines.
+You can integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD). In this article, you learn how to build a Bicep file into an Azure Resource Manager template (ARM template) and then use two advanced ways to deploy templates with Azure Pipelines.
## Select your option Before proceeding with this article, let's consider the different options for deploying an ARM template from a pipeline.
-* **Use Azure CLI task**. Use this task to run `az bicep build` to build your Bicep files before deploying the JSON templates.
+* **Use Azure CLI task**. Use this task to run `az bicep build` to build your Bicep files before deploying the ARM templates.
-* **Use ARM template deployment task**. This option is the easiest option. This approach works when you want to deploy a template directly from a repository. This option isn't covered in this article but instead is covered in the tutorial [Continuous integration of ARM templates with Azure Pipelines](../templates/deployment-tutorial-pipeline.md). It shows how to use the [ARM template deployment task](https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceManagerTemplateDeploymentV3/README.md) to deploy a template from your GitHub repo.
+* **Use ARM template deployment task**. This option is the easiest option. This approach works when you want to deploy an ARM template directly from a repository. This option isn't covered in this article but instead is covered in the tutorial [Continuous integration of ARM templates with Azure Pipelines](../templates/deployment-tutorial-pipeline.md). It shows how to use the [ARM template deployment task](https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceManagerTemplateDeploymentV3/README.md) to deploy a template from your GitHub repository.
* **Add task that runs an Azure PowerShell script**. This option has the advantage of providing consistency throughout the development life cycle because you can use the same script that you used when running local tests. Your script deploys the template but can also perform other operations such as getting values to use as parameters. This option is shown in this article. See [Azure PowerShell task](#azure-powershell-task).
You're ready to either add an Azure PowerShell task or the copy file and deploy
## Azure CLI task
-This section shows how to build a Bicep file into a JSON template before the JSON template is deployed.
+This section shows how to build a Bicep file into an ARM template before the template is deployed.
-The following YML file builds a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
+The following YAML file builds a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
```yml trigger:
For `scriptType`, use **bash**.
For `scriptLocation`, use **inlineScript**, or **scriptPath**. If you specify **scriptPath**, you will also need to specify a `scriptPath` parameter.
-In `inlineScript`, specify your script lines. The script provided in the sample builds a bicep file called *azuredeploy.bicep* and exists in the root of the repo.
+In `inlineScript`, specify your script lines. The script provided in the sample builds a Bicep file named *azuredeploy.bicep* that exists in the root of the repository.
## Azure PowerShell task
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/best-practices.md
This article recommends practices to follow when developing your Bicep files. Th
* Use the `@allowed` decorator sparingly. If you use this decorator too broadly, you might block valid deployments. As Azure services add SKUs and sizes, your allowed list might not be up to date. For example, allowing only Premium v3 SKUs might make sense in production, but it prevents you from using the same template in non-production environments.
-* It's a good practice to provide descriptions for your parameters. Try to make the descriptions helpful, and provide any important information about what the template needs the parameter values to be.
+* It's a good practice to provide descriptions for your parameters. Try to make the descriptions helpful, and provide any important information about what the template needs the parameter values to be.
You can also use `//` comments for some information.
This article recommends practices to follow when developing your Bicep files. Th
* It's a good practice to specify the minimum and maximum character length for parameters that control naming. These limitations help avoid errors later during deployment.
+For more information about Bicep parameters, see [Parameters in Bicep](parameters.md).
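A short sketch that combines these parameter practices (the names and values are illustrative only):

```bicep
@description('Prefix for the storage account name. Lowercase letters and numbers only.')
@minLength(3)
@maxLength(11)
param storagePrefix string

// Use @allowed sparingly, and only where the restriction is truly required.
@allowed([
  'nonprod'
  'prod'
])
param environmentType string = 'nonprod'
```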
+
+## Variables
+
+* Use camel case for variable names, such as `myVariableName`.
+
+* When you define a variable, the [data type](data-types.md) isn't needed. Variables infer the type from the resolved value.
+
+* You can use Bicep functions to create a variable.
+
+* After a variable is defined in your Bicep file, you reference the value using the variable's name.
+
+For more information about Bicep variables, see [Variables in Bicep](variables.md).
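A brief sketch of these points about variables (the names are illustrative only):

```bicep
// No data type is declared: the type is inferred from the resolved value.
var siteNamePrefix = 'contoso'

// Variables can be built with Bicep functions and string interpolation.
var siteName = '${siteNamePrefix}-${uniqueString(resourceGroup().id)}'

// Reference the variable by name wherever the value is needed.
output deployedSiteName string = siteName
```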
+ ## Naming * The [uniqueString() function](bicep-functions-string.md#uniquestring) is useful for creating globally unique resource names. When you provide the same parameters, it returns the same string every time. Passing in the resource group ID means the string is the same on every deployment to the same resource group, but different when you deploy to different resource groups or subscriptions.
-* Sometimes the uniqueString() function creates strings that start with a number. Some Azure resources, like storage accounts, don't allow their names to start with numbers. This requirement means it's a good idea to use string interpolation to create resource names. You can add a prefix to the unique string.
+* Sometimes the `uniqueString()` function creates strings that start with a number. Some Azure resources, like storage accounts, don't allow their names to start with numbers. This requirement means it's a good idea to use string interpolation to create resource names. You can add a prefix to the unique string.
* It's often a good idea to use template expressions to create resource names. Many Azure resource types have rules about the allowed characters and length of their names. Embedding the creation of resource names in the template means that anyone who uses the template doesn't have to remember to follow these rules themselves.
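For example, a hypothetical storage account name can combine a prefix with the unique string:

```bicep
// The 'stg' prefix keeps the name from starting with a number, which
// storage accounts don't allow. Passing the resource group ID makes the
// suffix stable across deployments to the same resource group.
var storageAccountName = 'stg${uniqueString(resourceGroup().id)}'
```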
This article recommends practices to follow when developing your Bicep files. Th
* When possible, avoid using the [reference](./bicep-functions-resource.md#reference) and [resourceId](./bicep-functions-resource.md#resourceid) functions in your Bicep file. You can access any resource in Bicep by using the symbolic name. For example, if you define a storage account with the symbolic name toyDesignDocumentsStorageAccount, you can access its resource ID by using the expression `toyDesignDocumentsStorageAccount.id`. By using the symbolic name, you create an implicit dependency between resources.
-* If the resource isn't deployed in the Bicep file, you can still get a symbolic reference to the resource using the **existing** keyword.
+* If the resource isn't deployed in the Bicep file, you can still get a symbolic reference to the resource using the `existing` keyword.
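As a sketch of both points, using the article's toyDesignDocumentsStorageAccount example (the account name and API version are illustrative):

```bicep
// Reference a resource that isn't deployed in this Bicep file.
resource toyDesignDocumentsStorageAccount 'Microsoft.Storage/storageAccounts@2021-02-01' existing = {
  name: 'toydesigndocs'
}

// Using the symbolic name avoids resourceId() and creates a checked reference.
output storageAccountId string = toyDesignDocumentsStorageAccount.id
```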
## Child resources
This article recommends practices to follow when developing your Bicep files. Th
* Make sure you don't create outputs for sensitive data. Output values can be accessed by anyone who has access to the deployment history. They're not appropriate for handling secrets.
-* Instead of passing property values around through outputs, use the `existing` keyword to look up properties of resources that already exist. It's a best practice to look up keys from other resources in this way instead of passing them around through outputs. You'll always get the most up-to-date data.
+* Instead of passing property values around through outputs, use the `existing` keyword to look up properties of resources that already exist. It's a best practice to look up keys from other resources in this way instead of passing them around through outputs. You'll always get the most up-to-date data.
+
+For more information about Bicep outputs, see [Outputs in Bicep](outputs.md).
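A sketch of that lookup pattern, with a hypothetical storage account name:

```bicep
resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' existing = {
  name: 'mystorageacct' // hypothetical existing account
}

// Look up the key where it's needed, instead of exposing it as an output.
// Use this value directly in a resource property, such as an app setting.
var storageConnectionString = 'DefaultEndpointsProtocol=https;AccountName=${stg.name};AccountKey=${stg.listKeys().keys[0].value}'
```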
## Tenant scopes
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-cli.md
Last updated 06/01/2021
# Bicep CLI commands
-This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands.
+This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands.
This article shows how to run the commands in Azure CLI. If you're not using Azure CLI, run the commands without `az` at the start of the command. For example, `az bicep version` becomes ``bicep version``. ## build
-The **build** command converts a Bicep file to an Azure Resource Manager template (ARM template). Typically, you don't need to run this command because it runs automatically when you deploy a Bicep file. Run it manually when you want to see the ARM template JSON that is created from your Bicep file.
+The `build` command converts a Bicep file to an Azure Resource Manager template (ARM template). Typically, you don't need to run this command because it runs automatically when you deploy a Bicep file. Run it manually when you want to see the ARM template JSON that is created from your Bicep file.
-The following example converts a Bicep file named **main.bicep** to an ARM template named **main.json**. The new file is created in the same directory as the Bicep file.
+The following example converts a Bicep file named _main.bicep_ to an ARM template named _main.json_. The new file is created in the same directory as the Bicep file.
```azurecli az bicep build --file main.bicep ```
-The next example saves **main.json** to a different directory.
+The next example saves _main.json_ to a different directory.
```azurecli az bicep build --file main.bicep --outdir c:\jsontemplates
The next example specifies the name and location of the file to create.
az bicep build --file main.bicep --outfile c:\jsontemplates\azuredeploy.json ```
-To print the file to **stdout**, use:
+To print the file to `stdout`, use:
```azurecli az bicep build --file main.bicep --stdout
az bicep build --file main.bicep --stdout
## decompile
-The **decompile** command converts an ARM template to a Bicep file.
+The `decompile` command converts ARM template JSON to a Bicep file.
```azurecli az bicep decompile --file main.json
For more information about using this command, see [Decompiling ARM template JSO
## install
-The **install** command adds the Bicep CLI to your local environment. For more information, see [Install Bicep tools](install.md).
+The `install` command adds the Bicep CLI to your local environment. For more information, see [Install Bicep tools](install.md).
To install the latest version, use `az bicep install`. To install a specific version:
az bicep install --version v0.3.255
## list-versions
-The **list-vesions** command returns all available versions of the Bicep CLI. Use this command to see if you want to [upgrade](#upgrade) or [install](#install) a new version.
-
+The `list-versions` command returns all available versions of the Bicep CLI. Use this command to see if you want to [upgrade](#upgrade) or [install](#install) a new version.
+
```azurecli
az bicep list-versions
```
The command returns an array of available versions.
```azurecli
[
+  "v0.4.1",
  "v0.3.539",
  "v0.3.255",
  "v0.3.126",
  ...
]
```
The command returns an array of available versions.
## upgrade
-The **upgrade** command updates your installed version with the latest version.
+The `upgrade` command updates your installed version with the latest version.
```azurecli
az bicep upgrade
```
## version
-The **version** command returns your installed version.
+The `version` command returns your installed version.
```azurecli
az bicep version
```

The command shows the version number.

```azurecli
-Bicep CLI version 0.3.539 (c8b397dbdd)
+Bicep CLI version 0.4.1 (e2387595c9)
```
-If you haven't installed Bicep CLI, you see an error indicating Bicep CLI wasn't found.
+If you haven't installed Bicep CLI, you see an error indicating Bicep CLI wasn't found.
## Next steps
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/compare-template-syntax.md
resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = if(deployVM) {
]
```
-To set resource property:
+To set a resource property:
```bicep
sku: '2016-Datacenter'
```

```json
"sku": "2016-Datacenter",
```
-To get resource ID of resource in the template:
+To get the resource ID of a resource in the template:
```bicep
nic1.id
```
To iterate over items in an array or count:
## Resource dependencies
-For Bicep, you can set an explicit dependency but this approach is not recommended. Instead, rely on implicit dependencies. An implicit dependency is created when one resource declaration references the identifier of another resource.
+For Bicep, you can set an explicit dependency but this approach isn't recommended. Instead, rely on implicit dependencies. An implicit dependency is created when one resource declaration references the identifier of another resource.
The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `nsg.id`.
output hostname string = publicIP.properties.dnsSettings.fqdn
To separate a solution into multiple files:

* For Bicep, use [modules](modules.md).
-* For JSON, use [linked templates](../templates/linked-templates.md).
+* For ARM templates, use [linked templates](../templates/linked-templates.md).
## Next steps
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
You set a [resource as dependent](./resource-declaration.md#set-resource-depende
## Next steps
-* For a Microsoft Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
-* For recommendations about creating templates, see [ARM template best practices](../templates/template-best-practices.md).
-* To create multiple instances of a resource, see [Resource iteration in Bicep](./loop-resources.md).
+* For a Microsoft Learn module about conditions and loops, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+* For recommendations about creating Bicep files, see [Best practices for Bicep](best-practices.md).
+* To create multiple instances of a resource, see [Resource iteration in Bicep](loop-resources.md).
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/decompile.md
Title: Decompile ARM template JSON to Bicep
+ Title: Decompile ARM template JSON to Bicep
description: Describes commands for decompiling Azure Resource Manager templates to Bicep files. Last updated 06/01/2021
This article describes how to decompile Azure Resource Manager templates (ARM te
Decompiling an ARM template helps you get started with Bicep development. If you have a library of ARM templates and want to use Bicep for future development, you can decompile them to Bicep. However, the Bicep file might need revisions to implement best practices for Bicep.
-This article shows how to run the decompile command in Azure CLI. If you're not using Azure CLI, run the command without `az` at the start of the command. For example, `az bicep decompile` becomes ``bicep decompile``.
+This article shows how to run the `decompile` command in Azure CLI. If you're not using Azure CLI, run the command without `az` at the start of the command. For example, `az bicep decompile` becomes ``bicep decompile``.
## Decompile from JSON to Bicep
To decompile ARM template JSON to Bicep, use:
az bicep decompile --file main.json
```
-The command creates a file named **main.bicep** in the same directory as the ARM template.
+The command creates a file named _main.bicep_ in the same directory as the ARM template.
> [!CAUTION] > Decompilation attempts to convert the file, but there is no guaranteed mapping from ARM template JSON to Bicep. You may need to fix warnings and errors in the generated Bicep file. Or, decompilation can fail if an accurate conversion isn't possible. To report any issues or inaccurate conversions, [create an issue](https://github.com/Azure/bicep/issues).
Since you changed the name of the variable for the storage account name, you nee
```bicep
resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
- name: uniqueStorageName
+ name: uniqueStorageName
```

And in the output, use:
output storageAccountName string = uniqueStorageName
## Export template and convert
-You can export the template for a resource group, and then pass it directly to the decompile command. The following example shows how to decompile an exported template.
+You can export the template for a resource group, and then pass it directly to the `decompile` command. The following example shows how to decompile an exported template.
# [Azure CLI](#tab/azure-cli)
Use `bicep decompile <filename>` on the downloaded file.
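As a minimal sketch, assuming a resource group named `exampleGroup` (a placeholder), the export-and-decompile flow from Azure CLI might look like:

```azurecli
# Export the current state of a resource group as an ARM template.
az group export --name exampleGroup > main.json

# Decompile the exported template into a Bicep file (creates main.bicep).
az bicep decompile --file main.json
```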
## Side-by-side view
-The [Bicep playground](https://aka.ms/bicepdemo) enables you to view equivalent JSON and Bicep files side by side. You can select a sample template to see both versions. Or, select `Decompile` to upload your own JSON template and view the equivalent Bicep file.
+The [Bicep playground](https://aka.ms/bicepdemo) enables you to view equivalent ARM template and Bicep files side by side. You can select a sample template to see both versions. Or, select `Decompile` to upload your own ARM template and view the equivalent Bicep file.
## Next steps
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 06/01/2021 Last updated : 06/04/2021
Let's make sure your environment is set up for developing and deploying Bicep fi
To create Bicep files, you need a good Bicep editor. We recommend:

- **Visual Studio Code** - If you don't already have Visual Studio Code, [install it](https://code.visualstudio.com/).
-- **Bicep extension for Visual Studio Code**. Visual Studio Code with the Bicep extension provides language support and resource autocompletion. The extension helps you create and validate Bicep files.
+- **Bicep extension for Visual Studio Code**. Visual Studio Code with the Bicep extension provides language support and resource autocompletion. The extension helps you create and validate Bicep files.
- To install the extension, search for *bicep* in the **Extensions** tab or in the [Visual Studio marketplace](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+ To install the extension, search for *bicep* in the **Extensions** tab or in the [Visual Studio marketplace](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
Select **Install**.
For more commands, see [Bicep CLI](bicep-cli.md).
You must have Azure PowerShell version 5.6.0 or later installed. To update or install, see [Install Azure PowerShell](/powershell/azure/install-az-ps). Azure PowerShell doesn't automatically install the Bicep CLI. Instead, you must [manually install the Bicep CLI](#install-manually).
-
+ > [!IMPORTANT]
-> The self-contained instance of the Bicep CLI installed by Azure CLI isn't available to PowerShell commands. Azure PowerShell deployments fail if you haven't manually installed the Bicep CLI.
+> The self-contained instance of the Bicep CLI installed by Azure CLI isn't available to PowerShell commands. Azure PowerShell deployments fail if you haven't manually installed the Bicep CLI.
-When you manually install the Bicep CLI, run the Bicep commands with the `bicep` syntax, instead of the `az bicep` syntax for Azure CLI.
+When you manually install the Bicep CLI, run the Bicep commands with the `bicep` syntax, instead of the `az bicep` syntax for Azure CLI.
To deploy Bicep files, use Bicep CLI version 0.3.1 or later. To check your Bicep CLI version, run:
bicep --help
# Done!
```
+> [!NOTE]
+> For lightweight Linux distributions like [Alpine](https://alpinelinux.org/), use **bicep-linux-musl-x64** instead of **bicep-linux-x64** in the preceding script.
+
#### macOS

##### via homebrew
azure-resource-manager Test Createuidefinition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/test-createuidefinition.md
Title: Test the UI definition file
description: Describes how to test the user experience for creating your Azure Managed Application through the portal. Previously updated : 08/06/2019 Last updated : 06/04/2021 # Test your portal interface for Azure Managed Applications
-After [creating the createUiDefinition.json file](create-uidefinition-overview.md) for your managed application, you need to test the user experience. To simplify testing, use a sandbox environment that loads your file in the portal. You don't need to actually deploy your managed application. The sandbox presents your user interface in the current, full-screen portal experience. Or, you can use a script for testing the interface. Both approaches are shown in this article. The sandbox is the recommended way to preview the interface.
+After [creating the createUiDefinition.json file](create-uidefinition-overview.md) for your managed application, you need to test the user experience. To simplify testing, use a sandbox environment that loads your file in the portal. You don't need to actually deploy your managed application. The sandbox presents your user interface in the current, full-screen portal experience. The sandbox is the recommended way to preview the interface.
## Prerequisites
If your form doesn't display, and instead you see an icon of a cloud with tear d
![Show error](./media/test-createuidefinition/show-error.png)
-## Use test script
-
-To test your interface in the portal, copy one of the following scripts to your local machine:
-
-* [PowerShell side-load script - Az Module](https://github.com/Azure/azure-quickstart-templates/blob/master/SideLoad-AzCreateUIDefinition.ps1)
-* [PowerShell side-load script - Azure Module](https://github.com/Azure/azure-quickstart-templates/blob/master/SideLoad-CreateUIDefinition.ps1)
-* [Azure CLI side-load script](https://github.com/Azure/azure-quickstart-templates/blob/master/sideload-createuidef.sh)
-
-To see your interface file in the portal, run your downloaded script. The script creates a storage account in your Azure subscription, and uploads your createUiDefinition.json file to the storage account. The storage account is created the first time you run the script or if the storage account has been deleted. If the storage account already exists in your Azure subscription, the script reuses it. The script opens the portal and loads your file from the storage account.
-
-Provide a location for the storage account, and specify the folder that has your createUiDefinition.json file.
-
-For PowerShell, use:
-
-```powershell
-.\SideLoad-AzCreateUIDefinition.ps1 `
- -StorageResourceGroupLocation southcentralus `
- -ArtifactsStagingDirectory .\100-Marketplace-Sample
-```
-
-For Azure CLI, use:
-
-```bash
-./sideload-createuidef.sh \
- -l southcentralus \
- -a .\100-Marketplace-Sample
-```
-
-If your createUiDefinition.json file is in the same folder as the script, and you've already created the storage account, you don't need to provide those parameters.
-
-For PowerShell, use:
-
-```powershell
-.\SideLoad-AzCreateUIDefinition.ps1
-```
-
-For Azure CLI, use:
-
-```bash
-./sideload-createuidef.sh
-```
-
-The script opens a new tab in your browser. It displays the portal with your interface for creating the managed application.
-
-Provide values for the fields. When finished, you see the values that are passed to the template which can be found in your browser's developer tools console.
-
-![Show values](./media/test-createuidefinition/show-json.png)
-
-You can use these values as the parameter file for testing your deployment template.
-
-If the portal hangs at the summary screen, there might be a bug in the output section. For example, you may have referenced a control that doesn't exist. If a parameter in the output is empty, the parameter might be referencing a property that doesn't exist. For example, the reference to the control is valid, but the property reference isn't valid.
-
## Test your solution files

Now that you've verified your portal interface is working as expected, it's time to validate that your createUiDefinition file is properly integrated with your mainTemplate.json file. You can run a validation script to test the content of your solution files, including the createUiDefinition file. The script validates the JSON syntax, checks for regex expressions on text fields, and makes sure the output values of the portal interface match the parameters of your template. For information on running this script, see [Run static validation checks for templates](https://aka.ms/arm-ttk).
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 06/02/2021 Last updated : 06/03/2021
For illustration purposes, we have only one dependent resource.
To move resources, select the resource group that contains those resources.
-When you view the resource group, the move option is disabled.
+Select the resources you want to move. To move all of the resources, select the checkbox at the top of the list. Or, select resources individually.
-
-To enable the move option, select the resources you want to move. To select all of the resources, select the checkbox at the top of list. Or, select resources individually. After selecting resources, the move option is enabled.
- Select the **Move** button. This button gives you three options:
This button gives you three options:
Select whether you're moving the resources to a new resource group or a new subscription.
-Select the destination resource group. Acknowledge that you need to update scripts for these resources and select **OK**. If you selected to move to a new subscription, you must also select the destination subscription.
+The source resource group is automatically set. Specify the destination resource group. If you're moving to a new subscription, also specify the subscription. Select **Next**.
++
+The portal validates that the resources can be moved. Wait for validation to complete.
++
+When validation completes successfully, select **Next**.
+Acknowledge that you need to update tools and scripts for these resources. To start moving the resources, select **Move**.
-After validating that the resources can be moved, you see a notification that the move operation is running.
+When the move has completed, you're notified of the result.
-When it has completed, you're notified of the result.
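The portal flow above can also be scripted. A minimal Azure CLI sketch, where the resource ID and group names are placeholders, might look like:

```azurecli
# Move a resource (identified by its full resource ID) to a different resource group.
az resource move \
  --destination-group demoDestinationGroup \
  --ids "/subscriptions/{subscriptionId}/resourceGroups/demoSourceGroup/providers/Microsoft.Storage/storageAccounts/demoaccount"
```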
## Use Azure PowerShell
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-public-internet-access.md
Public IP is a feature in Azure VMware Solution connectivity. It makes resources
You can enable public internet access in two ways:

- Host and publish applications under the Application Gateway load balancer for HTTP/HTTPS traffic.
- Publish through public IP features in Azure Virtual WAN.

As a part of Azure VMware Solution private cloud deployment, when you enable public IP functionality, the required components are automatically created and enabled:
This article details how you can use the public IP functionality in Virtual WAN.
## Prerequisites

- An Azure VMware Solution environment
- A webserver running in the Azure VMware Solution environment
- A new, non-overlapping IP range for the Virtual WAN hub deployment, typically a `/24`

## Reference architecture
certification Program Requirements Azure Certified Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-azure-certified-device.md
Title: Azure Certified Device Requirements
-description: Azure Certified Device program requirements
+ Title: Azure Certified Device Certification Requirements
+description: Azure Certified Device Certification Requirements
-# Azure Certified Device Requirements
+# Azure Certified Device Certification Requirements
(previously known as IoT Hub) This document outlines the device-specific capabilities that will be represented in the Azure Certified Device catalog. A capability is a singular device attribute that may be a software implementation or a combination of software and hardware implementations.
certification Program Requirements Edge Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-managed.md
Title: Edge Managed Certification Requirements
-description: Edge Managed Certification program requirements
+description: Edge Managed Certification Requirements
-# Azure Certification Edge Managed
+# Edge Managed Certification Requirements
This document outlines the device-specific capabilities that will be represented in the Azure Certified Device catalog. A capability is a singular device attribute that may describe the device.
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-secured-core.md
Title: Edge Secured-core Certification Requirements
-description: Edge Secured-core Certification program requirements
+description: Edge Secured-core Certification Requirements
-# Azure Certified Device - Edge Secured-core (Preview) #
-
-## Edge Secured-Core Certification Requirements ##
+# Edge Secured-Core Certification Requirements (Preview) #
This document outlines the device-specific capabilities and requirements that must be met to complete certification and list a device in the Azure IoT Device catalog with the Edge Secured-core label.
-### Program Purpose ###
+## Program Purpose ##
Edge Secured-core is an incremental certification in the Azure Certified Device program for IoT devices running a full operating system, such as Linux or Windows 10 IoT. This program enables device partners to differentiate their devices by meeting an additional set of security criteria. Devices meeting these criteria enable these promises:

1. Hardware-based device identity
2. Capable of enforcing system integrity
Edge Secured-core is an incremental certification in the Azure Certified Device
4. Provides data at-rest protection
5. Provides data in-transit protection
6. Built-in security agent and hardening
-### Requirements ###
+## Requirements ##
|Name|SecuredCore.Built-in.Security|
certification Program Requirements Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-pnp.md
Title: IoT Plug and Play Certification Requirements
-description: IoT Plug and Play Certification program requirements
+description: IoT Plug and Play Certification Requirements
cloud-services-extended-support Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/support-help.md
+
+ Title: Azure Cloud Services (extended support) support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure Cloud Services (extended support).
++++ Last updated : 4/28/2021+++
+# Support and troubleshooting for Azure Cloud Services (extended support)
+
+Here are suggestions for where you can get help when developing your Azure Cloud Services (extended support) solutions.
+
+## Self help troubleshooting
+<div class='icon is-large'>
+ <img alt='Self help content' src='./media/logos/doc-logo.png'>
+</div>
+
+For common issues and workarounds, see [Troubleshoot Azure Cloud Services (extended support) role start failures](role-startup-failure.md) and [Frequently asked questions](faq.md).
+++
+## Post a question on Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
+</div>
+
+Get answers to Azure Cloud Services (extended support) questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
+
+[Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) is Azure's recommended source of community support.
+
+If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Be sure to post your question using the [**azure-cloud-services-extended-support**](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) tag. Here are some Microsoft Q&A tips for writing [high-quality questions](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html).
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/azure-logo.png'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
++
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='./media/logos/updates-logo.png'>
+</div>
+
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
+
+News and information about Azure Cloud Services (extended support) is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
++
+## Next steps
+
+Learn more about [Azure Cloud Services (extended support)](overview.md)
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/overview.md
vm-linux ms.devlang: na Previously updated : 07/20/2020 Last updated : 06/4/2021
Read more to learn how to mount a [new or existing storage account](persisting-s
Learn more about features in [Bash in Cloud Shell](features.md) and [PowerShell in Cloud Shell](./features.md).
+## Compliance
+### Encryption at rest
+All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is required by users.
+
## Pricing

The machine hosting Cloud Shell is free, with a prerequisite of a mounted Azure Files share. Regular storage costs apply.
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
The core operations of Spatial Analysis are all built on a pipeline that ingests
## Get started
-### Public preview gating
-
-To ensure Spatial Analysis is used for scenarios it was designed for, we are making this technology available to customers through an application process. To get access to Spatial Analysis, you'll need to start by filling out our online intake form. [Begin your application here](https://aka.ms/csgate).
-
-Access to the Spatial Analysis public preview is subject to Microsoft's sole discretion based on our eligibility criteria, vetting process, and availability to support a limited number of customers during this gated preview. In public preview, we are looking for customers who have a significant relationship with Microsoft, are interested in working with us on the recommended use cases, and additional scenarios that keep with our responsible AI commitments.
-
### Follow a quickstart

Once you're granted access to Spatial Analysis, follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video.
To learn how to use Spatial Analysis technology responsibly, see the [transparen
## Next steps

> [!div class="nextstepaction"]
-> [Quickstart: Spatial Analysis container](spatial-analysis-container.md)''''''''''''
+> [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
The Spatial Analysis container enables you to analyze real-time streaming video
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
* You will need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
-
### Spatial Analysis container requirements

To run the Spatial Analysis container, you need a compute device with an [NVIDIA Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We will refer to this device as the host computer.
In our example, we will utilize an [NC series VM](../../virtual-machines/nc-seri
| Camera | The Spatial Analysis container is not tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. |
-
-## Request approval to run the container
-
-Fill out and submit the [request form](https://aka.ms/csgate) to request approval to run the container.
-
-The form requests information about you, your company, and the user scenario for which you'll use the container. After you submit the form, the Azure Cognitive Services team will review it and email you with a decision.
-
-> [!IMPORTANT]
-> * On the form, you must use an email address associated with an Azure subscription ID.
-> * The Computer Vision resource you use to run the container must have been created with the approved Azure subscription ID.
-
-After you're approved, you will be able to run the container after downloading it from the Microsoft Container Registry (MCR), described later in the article.
-
-You won't be able to run the container if your Azure subscription has not been approved.
-
## Set up the host computer

It is recommended that you use an Azure Stack Edge device for your host computer. Click **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
Check fetch log's lines, times, and sizes, if those settings look good replace *
You can export logs from Azure Blob Storage when troubleshooting issues.
-## Common issues
-
-If you see the following message in the module logs, it might mean your Azure subscription needs to be approved:
-
-"Container is not in a valid state. Subscription validation failed with status 'Mismatch'. Api Key is not intended for the given container type."
-
-For more information, see [Request approval to run the container](spatial-analysis-container.md#request-approval-to-run-the-container).
-
## Troubleshooting the Azure Stack Edge device

The following section is provided for help with debugging and verification of the status of your Azure Stack Edge device.
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-detection-model.md
The different face detection models are optimized for different tasks. See the f
||||
|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations. |
|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. |
-|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns "mask" attribute if it's specified in the detect call.
-|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Does not return face landmarks.
+|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
+|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
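As a sketch of such a comparison call, the following uses `az rest` with placeholder endpoint, key, and image URL; rerun it with `detection_01`, `detection_02`, and `detection_03` and compare the faces returned:

```azurecli
# Call Face - Detect with a specific detection model; all bracketed values are placeholders.
az rest --method post --skip-authorization-header \
  --url "https://{endpoint}/face/v1.0/detect?detectionModel=detection_03" \
  --headers "Ocp-Apim-Subscription-Key={subscription-key}" "Content-Type=application/json" \
  --body '{"url": "https://example.com/sample-faces.jpg"}'
```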
cognitive-services Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-detection.md
Attributes are a set of features that can optionally be detected by the [Face -
* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and Swimming Goggles.
* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
-* **Head pose**. The face's orientation in 3D space. This attribute is described by the pitch, roll, and yaw angles in degrees. The value ranges are -90 degrees to 90 degrees, -90 degrees to 90 degrees, and -90 degrees to 90 degrees, respectively. See the following diagram for angle mappings:
+* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The angles are given in roll-yaw-pitch order, and each value ranges from -180 degrees to 180 degrees. See the following diagram for angle mappings:
![A head with the pitch, roll, and yaw axes labeled](../Images/headpose.1.jpg)

* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
To set up the client, checkout [Windows Voice Assistant Client](https://github.c
> [!div class="mx-imgBorder"] > ![WVAC Create profile](media/custom-commands/conversation.png)
+## Test programmatically with the Cognitive Services Voice Assistant Test Tool
+The Voice Assistant Test (VST) tool is a configurable .NET Core C# console application for end-to-end functional regression tests for your Microsoft Voice Assistant.
+
+The tool can be run manually as a console command or automated as part of an Azure DevOps CI/CD pipeline to prevent regressions in your bot.
+
+To set up the tool, see [Voice Assistant Test Tool](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/clients/csharp-dotnet-core/voice-assistant-test).
+ ## Test with Speech SDK-enabled client applications
-The Speech software development kit (SDK) exposes many of the Speech service capabilities, which allows you to develop speech-enabled applications. It's also available in many programming languages and across all platforms.
+The Speech software development kit (SDK) exposes many of the Speech service capabilities, which allows you to develop speech-enabled applications. It's available in many programming languages on most platforms.
To set up a Universal Windows Platform (UWP) client application with Speech SDK, and integrate it with your custom command application:

- [How to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
For other programming languages and platforms:
## Next steps

> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
Last updated 06/10/2020
-# Language and region support for text and speech translation
+# Language support for text and speech translation
Use Translator to translate to and from any of the 90 text translation languages and dialects. Neural Machine Translation (NMT) is the new standard for high-quality AI-powered machine translations and is available as the default using V3 of Translator when a neural system is available.
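As a quick sketch, a V3 translate call to one of these languages might look like the following; the subscription key is a placeholder, and `az rest` with `--skip-authorization-header` is one way to issue the raw request:

```azurecli
# Translate a short string to German (to=de); the subscription key is a placeholder.
az rest --method post --skip-authorization-header \
  --url "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de" \
  --headers "Ocp-Apim-Subscription-Key={subscription-key}" "Content-Type=application/json" \
  --body "[{'Text':'Hello, what language do you speak?'}]"
```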
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/quickstart-translator.md
When using the `/detect` endpoint, the response will include alternate detection
```json
[
-    {
-        "alternatives": [
-            {
-                "isTranslationSupported": true,
-                "isTransliterationSupported": false,
-                "language": "nl",
-                "score": 0.92
-            },
-            {
-                "isTranslationSupported": true,
-                "isTransliterationSupported": false,
-                "language": "sk",
-                "score": 0.77
-            }
-        ],
-        "isTranslationSupported": true,
-        "isTransliterationSupported": false,
-        "score": 1.0
-    }
+    {
+        "language": "de",
+        "score": 1.0,
+        "isTranslationSupported": true,
+        "isTransliterationSupported": false
+    }
]
```
cognitive-services V3 0 Detect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-detect.md
An example JSON response is:
```json
[
-    {
-        "language": "de",
-        "score": 0.92,
-        "isTranslationSupported": true,
-        "isTransliterationSupported": false,
-        "alternatives": [
-            {
-                "language": "pt",
-                "score": 0.23,
-                "isTranslationSupported": true,
-                "isTransliterationSupported": false
-            },
-            {
-                "language": "sk",
-                "score": 0.23,
-                "isTranslationSupported": true,
-                "isTransliterationSupported": false
-            }
-        ]
-    }
+    {
+        "language": "de",
+        "score": 1.0,
+        "isTranslationSupported": true,
+        "isTransliterationSupported": false
+    }
]
```
The following example shows how to retrieve languages supported for text transla
```curl
curl -X POST "https://api.cognitive.microsofttranslator.com/detect?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'What language is this text written in?'}]"
```
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 3/22/2021 Last updated : 06/04/2021
To remove the resource group and its associated resources, use the az group dele
az group delete --name cognitive-services-resource-group
```
+If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
+
## See also

* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
cognitive-services Cognitive Services Apis Create Account Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account-client-library.md
Previously updated : 03/15/2021 Last updated : 06/04/2021 zone_pivot_groups: programming-languages-set-ten
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 03/15/2021 Last updated : 06/04/2021
If you want to clean up and remove a Cognitive Services subscription, you can de
2. Locate the resource group containing the resource to be deleted
3. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
+If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
+
## See also

* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
cognitive-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/create-account-resource-manager-template.md
Previously updated : 04/28/2021 Last updated : 06/04/2021
az group delete --name $resourceGroupName
+If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
+
## See also

* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
The IDs API also returns the following information:
> > Currently supported ID types include worldwide passports and U.S. Driver's Licenses. We are actively seeking to expand our ID support to other identity documents around the world.
-## POST Analyze ID Document
+## The Analyze ID Document operation
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7daad1f2612c46f5822) operation takes an image or PDF of an ID as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-r
|:--|:-|
|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1/prebuilt/idDocument/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
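A rough sketch of this first call using `az rest`, where the endpoint, key, and source image URL are placeholders, might be:

```azurecli
# Submit an ID document for analysis; the Operation-Location response header contains the result URL.
az rest --method post --skip-authorization-header \
  --url "https://{endpoint}/formrecognizer/v2.1/prebuilt/idDocument/analyze" \
  --headers "Ocp-Apim-Subscription-Key={subscription-key}" "Content-Type=application/json" \
  --body '{"source": "https://example.com/id-document.jpg"}'
```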
-## GET Analyze ID Document Result
+## The Get Analyze ID Document Result operation
<!-- Need to update this with updated APIM links when available -->
-The second step is to call the [**Get Analyze idDocument Result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7738978e467c5fb8707) operation. This operation takes as input the Result ID that was created by the Analyze ID operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+The second step is to call the [**Get Analyze ID Document Result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7738978e467c5fb8707) operation. This operation takes as input the Result ID that was created by the Analyze ID operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
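A matching polling sketch, again with placeholder values, might look like the following; the status values it can return are listed in the table below:

```azurecli
# Poll the URL from the Operation-Location header until "status" is "succeeded";
# wait 3 to 5 seconds between calls to stay under the RPS limit.
az rest --method get --skip-authorization-header \
  --url "https://{endpoint}/formrecognizer/v2.1/prebuilt/idDocument/analyzeResults/{resultId}" \
  --headers "Ocp-Apim-Subscription-Key={subscription-key}"
```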
|Field| Type | Possible values | |:--|:-:|:-|
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/manage-resources.md
+
+ Title: Recover deleted Cognitive Services resource
+
+description: This article provides instructions on how to recover an already-deleted Cognitive Services resource.
+++++ Last updated : 06/04/2021+++
+# Recover deleted Cognitive Services resources
+
+This article provides instructions on how to recover a Cognitive Services resource that is already deleted. The article also provides instructions on how to purge a deleted resource.
+
+> [!NOTE]
+> The instructions in this article are applicable to both a multi-service resource and a single-service resource. A multi-service resource enables access to multiple cognitive services using a single key and endpoint. On the other hand, a single-service resource enables access to just that specific cognitive service for which the resource was created.
+
+## Prerequisites
+
+* The resource to be recovered must have been deleted within the past 48 hours.
+* The resource to be recovered must not have been purged already. A purged resource cannot be recovered.
+* Before you attempt to recover a deleted resource, make sure that the resource group for that account exists. If the resource group was deleted, you must recreate it. Recovering a resource group is not possible. For more information, see [Manage resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md).
+* If the deleted resource used customer-managed keys with Azure Key Vault and the key vault has also been deleted, then you must restore the key vault before you restore the Cognitive Services resource. For more information, see [Azure Key Vault recovery management](../key-vault/general/key-vault-recovery.md).
+* If the deleted resource used customer-managed storage and the storage account has also been deleted, you must restore the storage account before you restore the Cognitive Services resource. For instructions, see [Recover a deleted storage account](../storage/common/storage-account-recover.md).
+
+## Recover a deleted resource
+
+To recover a deleted cognitive service resource, use the following commands. Where applicable, replace:
+
+* `{subscriptionID}` with your Azure subscription ID
+* `{resourceGroup}` with your resource group
+* `{resourceName}` with your resource name
+* `{location}` with the location of your resource
+
+### Using the REST API
+
+Use the following `PUT` command:
+
+```rest-api
+https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?Api-Version=2021-04-30
+```
+
+In the request body, use the following JSON format:
+
+```json
+{
+ "location": "{location}",
+ "properties": {
+ "restore": true
+ }
+}
+```
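If you work from Azure CLI instead, a minimal sketch of the same `PUT` call with `az rest`, using the same placeholders, might be:

```azurecli
# Restore the deleted account by issuing the PUT request through az rest.
az rest --method put \
  --url "https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?api-version=2021-04-30" \
  --body '{"location": "{location}", "properties": {"restore": true}}'
```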
+
+### Using PowerShell
+
+Use the following command to restore the resource:
+
+```powershell
+New-AzResource -Location {location} -Properties @{restore=$true} -ResourceId /subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName} -ApiVersion 2021-04-30
+```
+
+If you need to find the name of your deleted resources, you can get a list of deleted resource names with the following command:
+
+```powershell
+Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30
+```
++
+## Purge a deleted resource
+
+To purge a deleted cognitive service resource, use the following commands. Where applicable, replace:
+
+* `{subscriptionID}` with your Azure subscription ID
+* `{resourceGroup}` with your resource group
+* `{resourceName}` with your resource name
+* `{location}` with the location of your resource
+
+> [!NOTE]
+> After a resource is purged, you will not be able to create another resource with the same name for 48 hours.
+
+### Using the REST API
+
+Use the following `DELETE` command:
+
+```rest-api
+https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}?Api-Version=2021-04-30
+```
+
+### Using PowerShell
+
+```powershell
+Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30
+```
+
+### Using the Azure CLI
+
+```azurecli-interactive
+az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
+```
+
+## See also
+* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md)
+* [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md)
+* [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md)
+* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
cognitive-services Model Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
Previously updated : 02/17/2021 Last updated : 06/03/2021
Use the table below to find which model versions are supported by each hosted en
| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15` | `2021-01-15` | | `/entities/recognition/pii` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2020-07-01`, `2021-01-15` | `2021-01-15` | | `/entities/health` | `2021-05-15` | `2021-05-15` |
-| `/keyphrases` | `2019-10-01`, `2020-07-01` | `2020-07-01` |
+| `/keyphrases` | `2019-10-01`, `2020-07-01`, `2021-06-01` | `2021-06-01` |
You can find details about the updates for these models in [What's new](../whats-new.md).
cognitive-services Text Analytics For Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-for-health.md
Previously updated : 05/12/2021 Last updated : 06/03/2021
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
Previously updated : 05/18/2021 Last updated : 06/04/2021 # Text Analytics API v3 language support
| Language | Language code | v3 support | Available starting with v3 model version: | Notes | |:-|:-:|:-:|:--:|::|
-| Danish | `da` | ✓ | 2019-10-01 | |
+| Afrikaans | `af` | ✓ | 2020-07-01 | |
+| Bulgarian | `bg` | ✓ | 2020-07-01 | |
+| Catalan | `ca` | ✓ | 2020-07-01 | |
+| Chinese-Simplified | `zh-hans` | ✓ | 2021-06-01 | |
+| Croatian | `hr` | ✓ | 2020-07-01 | |
+| Danish | `da` | ✓ | 2019-10-01 | |
| Dutch | `nl` | ✓ | 2019-10-01 | |
| English | `en` | ✓ | 2019-10-01 | |
+| Estonian | `et` | ✓ | 2020-07-01 | |
| Finnish | `fi` | ✓ | 2019-10-01 | |
| French | `fr` | ✓ | 2019-10-01 | |
| German | `de` | ✓ | 2019-10-01 | |
+| Greek | `el` | ✓ | 2020-07-01 | |
+| Hungarian | `hu` | ✓ | 2020-07-01 | |
| Italian | `it` | ✓ | 2019-10-01 | |
+| Indonesian | `id` | ✓ | 2020-07-01 | |
| Japanese | `ja` | ✓ | 2019-10-01 | |
| Korean | `ko` | ✓ | 2019-10-01 | |
+| Latvian | `lv` | ✓ | 2020-07-01 | |
| Norwegian (Bokmål) | `no` | ✓ | 2020-07-01 | `nb` also accepted |
| Polish | `pl` | ✓ | 2019-10-01 | |
| Portuguese (Brazil) | `pt-BR` | ✓ | 2019-10-01 | |
| Portuguese (Portugal) | `pt-PT` | ✓ | 2019-10-01 | `pt` also accepted |
+| Romanian | `ro` | ✓ | 2020-07-01 | |
| Russian | `ru` | ✓ | 2019-10-01 | |
| Spanish | `es` | ✓ | 2019-10-01 | |
+| Slovak | `sk` | ✓ | 2020-07-01 | |
+| Slovenian | `sl` | ✓ | 2020-07-01 | |
| Swedish | `sv` | ✓ | 2019-10-01 | |
+| Turkish | `tr` | ✓ | 2020-07-01 | |
#### [Entity linking](#tab/entity-linking)
cognitive-services Named Entity Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/named-entity-types.md
Previously updated : 03/11/2021 Last updated : 06/03/2021
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 05/17/2021 Last updated : 06/03/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
## June 2021
+### General API updates
+
+* New model-version `2021-06-01` for key phrase extraction, which adds support for simplified Chinese.
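As a sketch, assuming a v3.0 endpoint, key, and input text (all placeholders), you can pin this model version with the `model-version` query parameter:

```azurecli
# Request key phrase extraction with the 2021-06-01 model for a simplified Chinese document.
az rest --method post --skip-authorization-header \
  --url "https://{endpoint}/text/analytics/v3.0/keyPhrases?model-version=2021-06-01" \
  --headers "Ocp-Apim-Subscription-Key={subscription-key}" "Content-Type=application/json" \
  --body '{"documents": [{"id": "1", "language": "zh-hans", "text": "{your-text}"}]}'
```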
+
### Text Analytics for health updates

* A new model version `2021-05-15` for the `/health` endpoint and on-premise container which provides
container-instances Container Instances Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-support-help.md
+
+ Title: Azure Container Instances support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure Container Instances.
++++ Last updated : 6/4/2021+++
+# Support and troubleshooting for Azure Container Instances
+
+Here are suggestions for where you can get help when developing your Azure Container Instances solutions.
+
+## Self help troubleshooting
+<div class='icon is-large'>
+ <img alt='Self help content' src='./media/logos/doc-logo.png'>
+</div>
+
+See a list of [common issues in Azure Container Instances](container-instances-troubleshooting.md) and how to resolve them.
+
+## Post a question on Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
+</div>
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
+
+If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A using the tag [**azure-container-instances**](/answers/topics/azure-container-instances.html).
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/azure-logo.png'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
+
+## Check the Stack Overflow forum
+<div class='icon is-large'>
+ <img alt='Stack Overflow' src='./media/logos/stack-overflow-logo.png'>
+</div>
+
+The [**azure-container-instances**](https://stackoverflow.com/questions/tagged/azure-container-instances) tag on [Stack Overflow](https://stackoverflow.com/) is used for asking general questions about how the platform works and how you may use it to accomplish certain tasks.
+
+## Create a GitHub issue
+
+<div class='icon is-large'>
+ <img alt='GitHub-image' src='./media/logos/github-logo.png'>
+</div>
+
+If you need help with the language and tools used to develop and manage Azure Container Instances, open an issue in its repository on GitHub.
+
+| Library | GitHub issues URL|
+| | |
+| Azure PowerShell | https://github.com/Azure/azure-powershell/issues |
+| Azure CLI | https://github.com/Azure/azure-cli/issues |
+| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues |
+| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues |
+| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues |
+| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues |
+| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues |
+| Jenkins | https://github.com/Azure/jenkins/issues |
+| Terraform | https://github.com/Azure/terraform/issues |
+| Ansible | https://github.com/Azure/Ansible/issues |
+++
+## Submit feature requests on Azure Feedback
+
+<div class='icon is-large'>
+ <img alt='Azure Feedback' src='./media/logos/azure-feedback-logo.png'>
+</div>
+
+To request new features, post them on Azure Feedback. Share your ideas for improving Azure Container Instances.
+
+| Service | Azure Feedback URL |
+|-||
+| Azure Container Instances | https://feedback.azure.com/forums/602224-azure-container-instances |
+
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='./media/logos/updates-logo.png'>
+</div>
+
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=containers).
+
+News and information about Azure Container Instances is shared on the [Azure blog](https://azure.microsoft.com/blog).
+
+## Next steps
+
+Learn more about [Azure Container Instances](https://docs.microsoft.com/azure/container-instances/).
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-authentication.md
Using `az acr login` with Azure identities provides [Azure role-based access con
### az acr login with --expose-token
-In some cases, you need need to authenticate with `az acr login` when the Docker daemon isn't running in your environment. For example, you might need to run `az acr login` in a script in Azure Cloud Shell, which provides the Docker CLI but doesn't run the Docker daemon.
+In some cases, you need to authenticate with `az acr login` when the Docker daemon isn't running in your environment. For example, you might need to run `az acr login` in a script in Azure Cloud Shell, which provides the Docker CLI but doesn't run the Docker daemon.
For this scenario, run `az acr login` first with the `--expose-token` parameter. This option exposes an access token instead of logging in through the Docker CLI.
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/linux-emulator.md
Previously updated : 05/25/2021 Last updated : 06/04/2021 # Run the emulator on Docker for Linux (Preview)
This section provides tips to troubleshoot errors when using the Linux emulator.
- Make sure that the emulator self-signed certificate has been properly added to [KeyChain](#consume-endpoint-ui). -- Ensure that the emulator self-signed certificate has been properly imported into the expected location:
- - .NET: See the [certificates section](#run-on-linux)
- - Java: See the [Java Certificates Store section](#run-on-linux)
+- For Java applications, make sure you imported the emulator certificate into the Java certificate store, as described in the [Java Certificates Store section](#run-on-linux).
+
+- For .NET applications, you can disable SSL validation:
+
+# [.NET Standard 2.1+](#tab/ssl-netstd21)
+
+For any application running in a framework compatible with .NET Standard 2.1 or later, you can leverage `CosmosClientOptions.HttpClientFactory`:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard21)]
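+
+For reference, here is a minimal sketch of such a factory (assuming the .NET SDK v3 `CosmosClient`; the endpoint shown is the emulator default, and the key is a placeholder):
+
+```csharp
+using System.Net.Http;
+using Microsoft.Azure.Cosmos;
+
+CosmosClientOptions options = new CosmosClientOptions
+{
+    // Gateway mode routes all traffic through the HTTP client built below.
+    ConnectionMode = ConnectionMode.Gateway,
+    HttpClientFactory = () =>
+    {
+        HttpClientHandler handler = new HttpClientHandler
+        {
+            // Accept the emulator's self-signed certificate (dev/test only).
+            ServerCertificateCustomValidationCallback =
+                HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
+        };
+        return new HttpClient(handler);
+    }
+};
+
+CosmosClient client = new CosmosClient("https://localhost:8081", "<emulator-key>", options);
+```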
+
+# [.NET Standard 2.0](#tab/ssl-netstd20)
+
+For any application running in a framework compatible with .NET Standard 2.0, you can leverage `CosmosClientOptions.HttpClientFactory`:
+
+[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard20)]
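+
+A minimal sketch for this case, with the validation callback written out explicitly (`DangerousAcceptAnyServerCertificateValidator` may not be available on every .NET Standard 2.0 target):
+
+```csharp
+using System.Net.Http;
+using Microsoft.Azure.Cosmos;
+
+CosmosClientOptions options = new CosmosClientOptions
+{
+    ConnectionMode = ConnectionMode.Gateway,
+    HttpClientFactory = () => new HttpClient(new HttpClientHandler
+    {
+        // Trust any server certificate, including the emulator's self-signed one (dev/test only).
+        ServerCertificateCustomValidationCallback =
+            (request, certificate, chain, errors) => true
+    })
+};
+
+CosmosClient client = new CosmosClient("https://localhost:8081", "<emulator-key>", options);
+```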
++ #### My Node.js app is reporting a self-signed certificate error
data-catalog Data Catalog Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-catalog/data-catalog-frequently-asked-questions.md
- Title: Azure Data Catalog frequently asked questions
-description: Frequently asked questions about Azure Data Catalog, including capabilities for data source discovery, annotation, and management.
---- Previously updated : 08/01/2019-
-# Azure Data Catalog frequently asked questions
--
-This article provides answers to frequently asked questions related to the Azure Data Catalog service.
-
-## What is Azure Data Catalog?
-Data Catalog is a fully managed service, hosted in Microsoft Azure, that serves as a system of registration and discovery for enterprise data sources. With Data Catalog, any user, from analysts to data scientists and developers, can register, discover, understand, and consume data sources.
-
-## What customer challenges does it solve?
-Data Catalog addresses the challenges of data-source discovery and "dark data" so that users can discover and understand enterprise data sources.
-
-## What are its target audiences?
-Data Catalog is designed for technical and non-technical users, including:
-
-* Data developers and BI and analytics professionals: People who are responsible for producing data and analytics content for others to consume.
-* Data stewards: People who have the knowledge about the data, what it means, and how it is intended to be used.
-* Data consumers: People who need to be able to easily discover, understand, and connect to the data they need to do their job, by using the tool of their choice.
-* Central IT: People who need to make hundreds of data sources discoverable by business users, and who need to maintain oversight over how data is being used and by whom.
-
-## What is its availability by region?
-Data Catalog services are currently available in the following data centers:
-
-* West US
-* East US
-* West Europe
-* North Europe
-* Australia East
-* Southeast Asia
-
-## What are its limits on the number of data assets?
-The Free Edition of Data Catalog is limited to 5,000 registered data assets.
-
-The Standard Edition of Data Catalog supports up to 100,000 registered data assets.
-
-Any object registered in Data Catalog, such as tables, views, files, and reports, counts as a data asset.
-
-## What are its supported data source and asset types?
-For a list of currently supported data sources, see [Data Catalog DSR](data-catalog-dsr.md).
-
-## How do I request support for another data source?
-To submit feature requests and other feedback, go to the [Data Catalog on the Azure Feedback Forums](https://feedback.azure.com/forums/906052-data-catalog/category/320788-data-sources).
-
-## Why do I get an error *Catalog already exists* when I try to create a new catalog?
-
-When you purchase Office 365 E5 with Power BI Pro License, Microsoft creates a default catalog in the subscription's region automatically. This catalog uses the free SKU. The Office 365 / Power BI user license is managed in the administration page.
-
-However, this type of data catalog does not have an **Administrator Option** and is not visible in the **Azure portal**. You cannot delete this type of data catalog. Similarly, you are not allowed to rename the data catalog, and you cannot move it to another region.
-
-User accounts that are assigned a Power BI Pro license automatically have access to the data catalog, per the license agreement accepted when signing up for Office 365 E5 with the Power BI Pro license. This type of user has full access to data catalog assets without administrative privileges, but is *not* part of the **Catalog User** role in Azure Data Catalog.
--
-## How do I get started with Data Catalog?
-The best way to get started is by going to [Getting Started with Data Catalog](data-catalog-get-started.md). This article is an end-to-end overview of the capabilities in the service.
-
-## How do I register my data?
-To register your data in Data Catalog:
-1. In the Azure Data Catalog portal, in the **Publish** area, start the Azure Data Catalog registration tool.
-2. In the Data Catalog data source registration tool, sign in with the same credentials that you use to access the Data Catalog portal.
-3. Select the data source and the specific assets that you want to register.
-
-## What properties does it extract for data assets that are registered?
-The specific properties differ from data source to data source but, in general, the Data Catalog publishing service extracts the following information:
-
-* Asset Name
-* Asset Type
-* Asset Description
-* Attribute/Column Names
-* Attribute/Column Data Types
-* Attribute/Column Description
-
-> [!IMPORTANT]
-> Registering data assets with Data Catalog does not move or copy your data to the cloud. Registering assets from a data source copies the assets' metadata to Azure, but the data remains in the existing data-source location. The exception to this rule is if you choose to upload preview records or a data profile when you register the assets. When you include a preview, up to 20 records are copied from each asset and stored as a snapshot in Data Catalog. When you include a data profile, aggregate information is calculated and included in the metadata that's stored in the catalog. Aggregate information can include the size of tables, the percentage of null values per column, or the minimum, maximum, and average values for columns.
->
->
-
-> [!NOTE]
-> For data sources such as SQL Server Analysis Services that have a first-class **Description** property, the Data Catalog data source registration tool extracts that property value. For *on-premises* SQL Server relational databases that lack a first-class **Description** property, the Data Catalog data source registration tool extracts the value from the **ms_description** extended property for objects and columns. This property is not supported for SQL Azure. For more information, see [Using Extended Properties on Database Objects](/previous-versions/sql/sql-server-2008-r2/ms190243(v=sql.105)).
->
->
-
-## How long should it take for newly registered assets to appear in the catalog?
-After you register assets with Data Catalog, there may be a period of 5 to 10 seconds before they appear in the Data Catalog portal.
-
-## How do I annotate and enrich the metadata for my registered data assets?
-The simplest way to provide metadata for registered assets is to select the asset in the Data Catalog portal and then enter the values in the properties pane or schema pane for the selected object.
-
-You can also provide some metadata, such as experts and tags, during the registration process. The values you provide in the Data Catalog publishing service apply to all assets being registered at that time. To view the recently registered objects in the portal for additional annotation, select the **View Portal** button on the final screen of the Data Catalog data source registration tool.
-
-## How do I delete my registered data objects?
-You can delete an object from Data Catalog by selecting the object in the portal and then clicking the **Delete** button. Removing the object removes its metadata from Data Catalog but does not affect the underlying data source.
-
-## What is an expert?
-An expert is a person who has an informed perspective about a data object. An object can have multiple experts. An expert does not need to be the "owner" for an object, but is simply someone who knows how the data can and should be used.
-
-## How do I share information with the Data Catalog team if I encounter problems?
-To report problems, share information, and ask questions, go to the [Azure Data Catalog forum](https://go.microsoft.com/fwlink/?LinkID=616424&clcid=0x409).
-
-## Does the catalog work with another data source that I'm interested in?
-We're actively working on adding more data sources to Data Catalog. If you want to see a specific data source supported, suggest it (or voice your support if it has already been suggested) by going to the [Data Catalog on the Azure Feedback Forums](https://feedback.azure.com/forums/906052-data-catalog).
-
-## What permissions do I need to register assets with Data Catalog?
-To run the Data Catalog registration tool, you need permissions on the data source that allow you to read the metadata from the source. To also include a preview, you must have permissions that allow you to read the data from the objects being registered.
-
-Data Catalog also allows catalog administrators to restrict which users and groups can add metadata to the catalog. For additional information, see [How to secure access to data catalog and data assets](data-catalog-how-to-secure-catalog.md).
-
-## Will Data Catalog be made available for on-premises deployment as well?
-Data Catalog is a cloud service that can work with both cloud and on-premises data sources to deliver a hybrid data-source discovery solution. There are currently no plans for a version of the Data Catalog service that runs on-premises.
-
-## Can I extract more or richer metadata from the data sources I register?
-We're actively working to expand the capabilities of Data Catalog. If you want to have additional metadata extracted from the data source during registration, suggest it (or vote for it, if it has already been suggested) in the [Data Catalog on the Azure Feedback Forums](https://feedback.azure.com/forums/906052-data-catalog).
-
-If you would like to include column/schema metadata, previews, or data profiles, for data sources where this metadata is not extracted by the data source registration tool, you can use the Data Catalog API to add this metadata. For additional information, see [Azure Data Catalog REST API](/rest/api/datacatalog/).
-
-## How do I restrict the visibility of registered data assets, so that only certain people can discover them?
-Select the data assets in the Data Catalog, and then click the **Take Ownership** button. Owners of data assets in Data Catalog can change the visibility settings to either allow all users to discover the owned assets or restrict visibility to specific users. For additional information, see [Manage data assets in Azure Data Catalog](data-catalog-how-to-manage.md).
-
-## How do I update the registration for a data asset so that changes in the data source are reflected in the catalog?
-To update the metadata for data assets that are already registered in the catalog, simply re-register the data source that contains the assets. Any changes in the data source, such as columns being added or removed from tables or views, are updated in the catalog, but any annotations provided by users are retained.
-
-## My question isn't answered here. Where can I go for answers?
-Go to the [Azure Data Catalog forum](https://go.microsoft.com/fwlink/?LinkID=616424&clcid=0x409). Questions asked there will find their way here.
data-catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-catalog/overview.md
To learn more about the capabilities of Data Catalog, see:
* [How to work with big data](data-catalog-how-to-big-data.md)
* [How to manage data assets](data-catalog-how-to-manage.md)
* [How to set up the Business Glossary](data-catalog-how-to-business-glossary.md)
-* [Frequently asked questions](data-catalog-frequently-asked-questions.md)
+* [Frequently asked questions](data-catalog-frequently-asked-questions.yml)
## Next steps
databox-online Azure Stack Edge Gpu Enable Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md
Previously updated : 05/13/2021 Last updated : 06/03/2021
Monitoring containers on your Azure Stack Edge Pro GPU device is critical, speci
This article describes the steps required to enable Azure Monitor on your device and gather container logs in Log Analytics workspace. The Azure Monitor metrics store is currently not supported with your Azure Stack Edge Pro GPU device.
+> [!NOTE]
+> If Azure Arc is enabled on the Kubernetes cluster on your device, follow the steps in [Azure Monitor Container Insights for Azure Arc enabled Kubernetes clusters](/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?toc=/azure/azure-arc/kubernetes/toc.json) to set up container monitoring.
+ ## Prerequisites
dedicated-hsm Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/faq.md
Yes. Contact your Thales representative for the appropriate Thales migration gui
### Q: How do I decide whether to use Azure Key Vault or Azure Dedicated HSM?
-Azure Dedicated HSM is the appropriate choice for enterprises migrating to Azure on-premises applications that use HSMs. Dedicated HSMs present an option to migrate an application with minimal changes. If cryptographic operations are performed in the application's code running in an Azure VM or Web App, they can use Dedicated HSM. In general, shrink-wrapped software running in IaaS (infrastructure as a service) models, that support HSMs as a key store can use Dedicate HSM, such as Application gateway or traffic manager for keyless TLS, ADCS (Active Directory Certificate Services), or similar PKI tools, tools/applications used for document signing, code signing, or a SQL Server (IaaS) configured with TDE (transparent database encryption) with master key in an HSM using an EKM (extensible key management) provider. Azure Key Vault is suitable for "born-in-cloud" applications or for encryption at rest scenarios where customer data is processed by PaaS (platform as a service) or SaaS (Software as a service) scenarios such as Office 365 Customer Key, Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store encryption with customer-managed key, Azure Storage encryption with customer managed key, and Azure SQL with customer managed key.
+Azure Dedicated HSM is the appropriate choice for enterprises migrating on-premises applications that use HSMs to Azure. Dedicated HSMs present an option to migrate an application with minimal changes. If cryptographic operations are performed in the application's code running in an Azure VM or Web App, they can use Dedicated HSM. In general, shrink-wrapped software running in IaaS (infrastructure as a service) models that support HSMs as a key store can use Dedicated HSM, such as Traffic Manager for keyless TLS, ADCS (Active Directory Certificate Services), or similar PKI tools, tools/applications used for document signing, code signing, or a SQL Server (IaaS) configured with TDE (transparent database encryption) with master key in an HSM using an EKM (extensible key management) provider. Azure Key Vault is suitable for "born-in-cloud" applications or for encryption at rest scenarios where customer data is processed by PaaS (platform as a service) or SaaS (Software as a service) scenarios such as Office 365 Customer Key, Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store encryption with customer-managed key, Azure Storage encryption with customer managed key, and Azure SQL with customer managed key.
### Q: What usage scenarios best suit Azure Dedicated HSM? Azure Dedicated HSM is most suitable for migration scenarios: that is, when you are migrating on-premises applications to Azure that already use HSMs. Dedicated HSM provides a low-friction option to migrate to Azure with minimal changes to the application. If cryptographic operations are performed in the application's code running in an Azure VM or Web App, Dedicated HSM may be used. In general, shrink-wrapped software running in IaaS (infrastructure as a service) models that support HSMs as a key store can use Dedicated HSM, such as:
-* Application gateway or traffic manager for keyless TLS
+* Traffic Manager for keyless TLS
* ADCS (Active Directory Certificate Services) * Similar PKI tools * Tools/applications used for document signing
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/networking.md
Before provisioning a Dedicated HSM device, customers will first need to create
Subnets segment the virtual network into separate address spaces usable by the Azure resources you place in them. Dedicated HSMs are deployed into a subnet in the virtual network. Each Dedicated HSM device that is deployed in the customer's subnet will receive a private IP address from this subnet. The subnet in which the HSM device is deployed needs to be explicitly delegated to the service: Microsoft.HardwareSecurityModules/dedicatedHSMs. This grants certain permissions to the HSM service for deployment into the subnet. Delegation to Dedicated HSMs imposes certain policy restrictions on the subnet. Network Security Groups (NSGs) and User-Defined Routes (UDRs) are currently not supported on delegated subnets. As a result, once a subnet is delegated to dedicated HSMs, it can only be used to deploy HSM resources. Deployment of any other customer resources into the subnet will fail.
- ### ExpressRoute gateway
-A requirement of the current architecture is configuration of an ER gateway in the customers subnet where an HSM device needs to be placed to enable integration of the HSM device into Azure. This ER gateway cannot be utilized for connecting on-premises locations to the customers HSM devices in Azure.
+A requirement of the current architecture is configuration of an [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) in the customer's subnet where an HSM device needs to be placed to enable integration of the HSM device into Azure. This [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) cannot be utilized for connecting on-premises locations to the customer's HSM devices in Azure.
## Connecting your on-premises IT to Azure
This networking design requires the following elements:
Adding the NVA proxy solution also allows an NVA firewall in the transit/DMZ hub to be logically placed in front of the HSM NIC, providing the needed default-deny policies. In our example, we will use the Azure Firewall for this purpose and will need the following elements in place: 1. An Azure Firewall deployed into the subnet "AzureFirewallSubnet" in the DMZ hub VNet
-2. A Routing Table with a UDR that directs traffic headed to the Azure ILB private endpoint into the Azure Firewall. This Routing Table will be applied to the GatewaySubnet where the customer ExpressRoute Virtual Gateway resides
+2. A Routing Table with a UDR that directs traffic headed to the Azure ILB private endpoint into the Azure Firewall. This Routing Table will be applied to the GatewaySubnet where the customer [ExpressRoute Virtual Gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) resides
3. Network security rules within the Azure Firewall to allow forwarding between a trusted source range and the Azure ILB private endpoint listening on TCP port 1792. This security logic will add the necessary "default deny" policy against the Dedicated HSM service. That is, only trusted source IP ranges will be allowed into the Dedicated HSM service; all other ranges will be dropped. 4. A Routing Table with a UDR that directs traffic headed to on-premises into the Azure Firewall. This Routing Table will be applied to the NVA proxy subnet. 5. An NSG applied to the Proxy NVA subnet to trust only the subnet range of the Azure Firewall as a source, and to only allow forwarding to the HSM NIC IP address over TCP port 1792.
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/troubleshoot.md
The number one reason for deployment failures is forgetting to set the appropria
### HSM Deployment Race Condition
-The standard ARM template provided for deployment has HSM and ExpressRoute gateway related resources. Networking resources are a dependency for successful HSM deployment and timing can be crucial. Occasionally, we have seen deployment failures related to dependency issues and rerunning the deployment often solves the issue. If not, deleting resources and then redeploying is often successful. After attempting this and still finding issue, raise a support request in the Azure portal selecting the problem type of "Issues configuring the Azure setup".
+The standard ARM template provided for deployment has HSM and [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) related resources. Networking resources are a dependency for successful HSM deployment, and timing can be crucial. Occasionally, we have seen deployment failures related to dependency issues, and rerunning the deployment often solves the issue. If not, deleting resources and then redeploying is often successful. If the issue persists after these attempts, raise a support request in the Azure portal, selecting the problem type "Issues configuring the Azure setup".
### HSM Deployment Using Terraform
-A few customers have used Terraform as an automation environment instead of ARM templates as supplied when registering for this service. The HSMs cannot be deployed this way but the dependent networking resources can. Terraform has a module to call out to a minimal ARM template that just has the HSM deployment. In this situation, care should be taken to ensure networking resources such as the required ExpressRoute Gateway are fully deployed before deploying HSMs. The following CLI command can be used to test for completed deployment and integrated as required. Replace the angle bracket place holders for your specific naming. You should look for a result of "provisioningState is Succeeded"
+A few customers have used Terraform as an automation environment instead of ARM templates as supplied when registering for this service. The HSMs cannot be deployed this way, but the dependent networking resources can. Terraform has a module to call out to a minimal ARM template that contains just the HSM deployment. In this situation, care should be taken to ensure networking resources such as the required [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) are fully deployed before deploying HSMs. The following CLI command can be used to test for completed deployment and integrated as required. Replace the angle-bracket placeholders with your specific names. You should look for a result of "provisioningState is Succeeded".
```azurecli az resource show --ids /subscriptions/<subid>/resourceGroups/<myresourcegroup>/providers/Microsoft.Network/virtualNetworkGateways/<myergateway>
Deployment of Dedicated HSM has a dependency on networking resources and some co
### Provisioning ExpressRoute
-Dedicated HSM uses ExpressRoute Gateway as a "tunnel" for communication between the customer private IP address space and the physical HSM in an Azure datacenter. Considering there is a restriction of one gateway per Vnet, customers requiring connection to their on-premises resources via ExpressRoute, will have to use another Vnet for that connection.
+Dedicated HSM uses an [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) as a "tunnel" for communication between the customer private IP address space and the physical HSM in an Azure datacenter. Because there is a restriction of one gateway per VNet, customers requiring connection to their on-premises resources via ExpressRoute will have to use another VNet for that connection.
### HSM Private IP Address
Software and documentation for the [Thales Luna 7 HSM](https://cpl.thalesgroup.c
### HSM Networking Configuration
-Be careful when configuring the networking within the HSM. The HSM has a connection via the ExpressRoute Gateway from a customer private IP address space directly to the HSM. This communication channel is for customer communication only and Microsoft has no access. If the HSM is configured in a such a way that this network path is impacted, that means all communication with the HSM is removed. In this situation, the only option is to raise a Microsoft support request via the Azure portal to have the device reset. This reset procedure sets the HSM back to its initial state and all configuration and key material is lost. Configuration must be recreated and when the device joins the HA group it will get key material replicated.
+Be careful when configuring the networking within the HSM. The HSM has a connection via the [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) from a customer private IP address space directly to the HSM. This communication channel is for customer communication only, and Microsoft has no access. If the HSM is configured in such a way that this network path is impacted, all communication with the HSM is lost. In this situation, the only option is to raise a Microsoft support request via the Azure portal to have the device reset. This reset procedure sets the HSM back to its initial state, and all configuration and key material is lost. Configuration must be recreated, and when the device joins the HA group, it will get key material replicated.
### HSM Device Reboot
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
A typical, high availability, multi-region deployment architecture may look as f
![multi region deployment](media/tutorial-deploy-hsm-cli/high-availability-architecture.png)
-This tutorial focuses on a pair of HSMs and required ExpressRoute Gateway (see Subnet 1 above) being integrated into an existing virtual network (see VNET 1 above). All other resources are standard Azure resources. The same integration process can be used for HSMs in subnet 4 on VNET 3 above.
+This tutorial focuses on a pair of HSMs and required [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) (see Subnet 1 above) being integrated into an existing virtual network (see VNET 1 above). All other resources are standard Azure resources. The same integration process can be used for HSMs in subnet 4 on VNET 3 above.
## Prerequisites
All instructions below assume that you have already navigated to the Azure porta
## Provisioning a Dedicated HSM
-Provisioning HSMs and integrating them into an existing virtual network via ExpressRoute Gateway will be validated using ssh. This validation helps ensure reachability and basic availability of the HSM device for any further configuration activities.
+Provisioning HSMs and integrating them into an existing virtual network via [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) will be validated using ssh. This validation helps ensure reachability and basic availability of the HSM device for any further configuration activities.
### Validating Feature Registration
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
A typical, high availability, multi-region deployment architecture may look as f
![multi region deployment](media/tutorial-deploy-hsm-powershell/high-availability.png)
-This tutorial focuses on a pair of HSMs and the required ExpressRoute Gateway (see Subnet 1 above) being integrated into an existing virtual network (see VNET 1 above). All other resources are standard Azure resources. The same integration process can be used for HSMs in subnet 4 on VNET 3 above.
+This tutorial focuses on a pair of HSMs and the required [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) (see Subnet 1 above) being integrated into an existing virtual network (see VNET 1 above). All other resources are standard Azure resources. The same integration process can be used for HSMs in subnet 4 on VNET 3 above.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
All instructions below assume that you have already navigated to the Azure porta
## Provisioning a Dedicated HSM
-Provisioning the HSMs and integrating into an existing virtual network via ExpressRoute Gateway will be validated using the ssh command-line tool to ensure reachability and basic availability of the HSM device for any further configuration activities. The following commands will use a Resource Manager template to create the HSM resources and associated networking resources.
+Provisioning the HSMs and integrating into an existing virtual network via [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager) will be validated using the ssh command-line tool to ensure reachability and basic availability of the HSM device for any further configuration activities. The following commands will use a Resource Manager template to create the HSM resources and associated networking resources.
### Validating Feature Registration
The command should return a status of "Registered" (as shown below) before y
### Creating HSM resources
-An HSM device is provisioned into a customersΓÇÖ virtual network. This implies the requirement for a subnet. A dependency for the HSM to enable communication between the virtual network and physical device is an ExpressRoute Gateway, and finally a virtual machine is required to access the HSM device using the Thales client software. These resources have been collected into a template file, with corresponding parameter file, for ease of use. The files are available by contacting Microsoft directly at HSMrequest@Microsoft.com.
+An HSM device is provisioned into a customer's virtual network. This implies the requirement for a subnet. A dependency for the HSM to enable communication between the virtual network and physical device is an [ExpressRoute gateway](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager), and finally a virtual machine is required to access the HSM device using the Thales client software. These resources have been collected into a template file, with corresponding parameter file, for ease of use. The files are available by contacting Microsoft directly at HSMrequest@Microsoft.com.
Once you have the files, you must edit the parameter file to insert your preferred names for resources. This means editing lines with "value": "".
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
+
+ Title: Querying with Azure Data Explorer
+
+description: Understand the Azure Digital Twins query plugin for Azure Data Explorer
++ Last updated : 5/19/2021+++
++
+# Azure Digital Twins query plugin for Azure Data Explorer
+
+The Azure Digital Twins plugin for [Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) lets you run ADX queries that access and combine data across the Azure Digital Twins graph and ADX time series databases. Use the plugin to contextualize disparate time series data by reasoning across digital twins and their relationships to gain insights into the behavior of modeled environments.
+
+For example, with this plugin, you can write a KQL query that...
+1. selects digital twins of interest via the Azure Digital Twins query plugin,
+2. joins those twins against the respective times series in ADX, and then
+3. performs advanced time series analytics on those twins.
+
+Combining data from a twin graph in Azure Digital Twins with time series data in ADX can help you understand the operational behavior of various parts of your solution.
+
+## Using the plugin
+
+To get the plugin running on your own ADX cluster that contains time series data, start by running the following command in ADX to enable the plugin:
+
+```kusto
+.enable plugin azure_digital_twins_query_request
+```
+
+This command requires **All Databases admin** permission. For more information on the command, see the [.enable plugin documentation](/azure/data-explorer/kusto/management/enable-plugin).
+
+Once the plugin is enabled, you can invoke it within an ADX Kusto query like this:
+
+```kusto
+evaluate azure_digital_twins_query_request(ADTendpoint, ADTquery)
+```
+
+where `ADTendpoint` and `ADTquery` are strings representing the Azure Digital Twins instance endpoint and the Azure Digital Twins query, respectively.
+
+The plugin works by calling the [Azure Digital Twins query API](/rest/api/digital-twins/dataplane/query), and the [query language structure](concepts-query-language.md) is the same as when using the API.
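+
+For example, an invocation might look like the following (the instance hostname and twin query here are placeholders, not values from the official samples):
+
+```kusto
+evaluate azure_digital_twins_query_request(
+    'https://myinstance.api.wcus.digitaltwins.azure.net',
+    'SELECT T.$dtId AS tid, T.Temperature FROM DIGITALTWINS T WHERE T.Temperature > 75')
+```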
+
+>[!IMPORTANT]
+>The user of the plugin must be granted the **Azure Digital Twins Data Reader** role or the **Azure Digital Twins Data Owner** role, as the user's Azure AD token is used to authenticate. Information on how to assign this role can be found in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins).
+
+To see example queries and complete a walkthrough with sample data, see [Azure Digital Twins query plugin for ADX: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries) in GitHub.
+
+## Using ADX IoT data with Azure Digital Twins
+
+There are various ways to ingest IoT data into ADX. Here are two that you might use when using ADX with Azure Digital Twins:
+* Historize digital twin property values to ADX with an Azure function that handles twin change events and writes the twin data to ADX, similar to the process used in [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md). This path will be suitable for customers who use telemetry data to bring their digital twins to life.
+* [Ingest IoT data directly into your ADX cluster from IoT Hub](/azure/data-explorer/ingest-data-iot-hub) or from other sources. Then, the Azure Digital Twins graph will be used to contextualize the time series data using joint Azure Digital Twins/ADX queries. This path may be suitable for direct-ingestion workloads.
+
+### Mapping data across ADX and Azure Digital Twins
+
+If you're ingesting time series data directly into ADX, you'll likely need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/ADX queries.
+
+An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in ADX allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
+
+You can use an update policy to enrich your raw time series data with the corresponding **twin ID** from Azure Digital Twins, and persist it to a target table. Using the twin ID, the target table can then be joined against the digital twins selected by the Azure Digital Twins plugin.
+
+For example, say you created the following table to hold the raw time series data flowing into your ADX instance.
+
+```kusto
+.create-merge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string)
+```
+
+You could create a mapping table to relate time series IDs with twin IDs, and other optional fields.
+
+```kusto
+.create-merge table mappingTable (someId:string, twinId:string, otherMetadata:string)
+```
+
+Then, create a target table to hold the enriched time series data.
+
+```kusto
+.create-merge table timeseriesSilver (twinId:string, Timestamp:datetime, someId:string, otherMetadata:string, ValueNumeric:real, ValueString:string)
+```
+
+Next, create a function `Update_rawData` to enrich the raw data by joining it with the mapping table. This will add the twin ID to the resulting target table.
+
+```kusto
+.create-or-alter function with (folder = "Update", skipvalidation = "true") Update_rawData() {
+rawData
+| join kind=leftouter mappingTable on someId
+// Project columns in the same order as the timeseriesSilver target table.
+| project
+    twinId,
+    Timestamp,
+    someId,
+    otherMetadata,
+    ValueNumeric = toreal(Value),
+    ValueString = Value
+}
+```
+
+Lastly, create an update policy to call the function and update the target table.
+
+```kusto
+.alter table timeseriesSilver policy update
+@'[{"IsEnabled": true, "Source": "rawData", "Query": "Update_rawData()", "IsTransactional": false, "PropagateIngestionProperties": false}]'
+```
+
+Once the target table is created, you can use the Azure Digital Twins plugin to select twins of interest and then join them against time series data in the target table.
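+
+As a rough sketch (the Azure Digital Twins endpoint and twin query are placeholders), a joint query against the `timeseriesSilver` table created above might look like this:
+
+```kusto
+// Select twins of interest, then join their telemetry from the target table.
+evaluate azure_digital_twins_query_request(
+    'https://myinstance.api.wcus.digitaltwins.azure.net',
+    'SELECT T.$dtId AS tid FROM DIGITALTWINS T')
+| extend twinId = tostring(tid)
+| join kind=inner timeseriesSilver on twinId
+| where Timestamp > ago(1d)
+| summarize avg(ValueNumeric) by twinId, bin(Timestamp, 1h)
+```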
+
+### Example schema
+
+Here is an example of a schema that might be used to represent shared data.
+
+| timestamp | twinId | modelId | name | value | relationshipTarget | relationshipID |
+| | | | | | | |
+| 2021-02-01 17:24 | ConfRoomTempSensor | dtmi:com:example:TemperatureSensor;1 | temperature | 301.0 | | |
+
+Digital twin properties are stored as key-value pairs (`name, value`). `name` and `value` are stored as dynamic data types.
+
+The schema also supports storing properties for relationships, per the `relationshipTarget` and `relationshipID` fields. The key-value schema avoids the need to create a column for each twin property.
+
+### Representing properties with multiple fields
+
+You may want to store a property in your schema with multiple fields. These properties are represented by storing a JSON object as `value` in your schema.
+
+For instance, if you want to represent a property with three fields for roll, pitch, and yaw, the value object would look like this: `{"roll": 20, "pitch": 15, "yaw": 45}`.
+
+## Next steps
+
+View sample queries using the Azure Digital Twins query plugin for ADX, including a walkthrough that runs the queries in an example scenario:
+* [Azure Digital Twins query plugin for ADX: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries)
+
+Read about another strategy for analyzing historical data in Azure Digital Twins:
+* [How-to: Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md)
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
Azure Digital Twins can send data to connected **endpoints**. Supported endpoint
Endpoints are attached to Azure Digital Twins using management APIs or the Azure portal. Learn more about how to attach an endpoint to Azure Digital Twins in [How-to: Manage endpoints and routes](how-to-manage-routes-apis-cli.md).
-There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
+There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
For example, if you are also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. Learn more about this in [How-to: Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md)
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
For a DTDL model to be compatible with Azure Digital Twins, it must meet these r
Azure Digital Twins also does not observe the `writable` attribute on properties or relationships. Although this can be set as per DTDL specifications, the value isn't used by Azure Digital Twins. Instead, these are always treated as writable by external clients that have general write permissions to the Azure Digital Twins service.
-## Elements of a model
+## Model overview
+
+### Elements of a model
Within a model definition, the top-level code item is an **interface**. This encapsulates the entire model, and the rest of the model is defined within the interface. A DTDL model interface may contain zero, one, or many of each of the following fields:
-* **Property** - Properties are data fields that represent the state of an entity (like the properties in many object-oriented programming languages). Properties have backing storage and can be read at any time.
-* **Telemetry** - Telemetry fields represent measurements or events, and are often used to describe device sensor readings. Unlike properties, telemetry is not stored on a digital twin; it is a series of time-bound data events that need to be handled as they occur. For more on the differences between property and telemetry, see the [Properties vs. telemetry](#properties-vs-telemetry) section below.
+* **Property** - Properties are data fields that represent the state of an entity (like the properties in many object-oriented programming languages). Properties have backing storage and can be read at any time. For more information, see [Properties and telemetry](#properties-and-telemetry) below.
+* **Telemetry** - Telemetry fields represent measurements or events, and are often used to describe device sensor readings. Unlike properties, telemetry is not stored on a digital twin; it is a series of time-bound data events that need to be handled as they occur. For more information, see [Properties and telemetry](#properties-and-telemetry) below.
+* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), etc. Relationships allow the solution to provide a graph of interrelated entities. Relationships can also have properties of their own. For more information, see [Relationships](#relationships) below.
* **Component** - Components allow you to build your model interface as an assembly of other interfaces, if you want. An example of a component is a *frontCamera* interface (and another component interface *backCamera*) that are used in defining a model for a *phone*. You must first define an interface for *frontCamera* as though it were its own model, and then you can reference it when defining *Phone*.
- Use a component to describe something that is an integral part of your solution but doesn't need a separate identity, and doesn't need to be created, deleted, or rearranged in the twin graph independently. If you want entities to have independent existences in the twin graph, represent them as separate digital twins of different models, connected by *relationships* (see next bullet).
+ Use a component to describe something that is an integral part of your solution but doesn't need a separate identity, and doesn't need to be created, deleted, or rearranged in the twin graph independently. If you want entities to have independent existences in the twin graph, represent them as separate digital twins of different models, connected by **relationships**.
>[!TIP] >Components can also be used for organization, to group sets of related properties within a model interface. In this situation, you can think of each component as a namespace or "folder" inside the interface.
-* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), etc. Relationships allow the solution to provide a graph of interrelated entities. Relationships can also have [properties](#properties-of-relationships) of their own.
+
+ For more information, see [Components](#components) below.
+ > [!NOTE] > The [spec for DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) also defines **Commands**, which are methods that can be executed on a digital twin (like a reset command, or a command to switch a fan on or off). However, *commands are not currently supported in Azure Digital Twins.*
-### Properties vs. telemetry
+### Model code
+
+Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension .json. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
+
+The fields of the model are:
+
+| Field | Description |
+| | |
+| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. |
+| `@type` | Identifies the kind of information being described. For an interface, the type is *Interface*. |
+| `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
+| `displayName` | [optional] Allows you to give the model a friendly name if desired. |
+| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (**property**, **telemetry**, **command**, **relationship**, or **component**) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a **property**). |
+
+#### Example model
+
+This section contains an example of a basic model, written as a DTDL interface.
+
+This model describes a Home, with one **property** for an ID. The Home model also defines a **relationship** to a Floor model, which can be used to indicate that a Home twin is connected to certain Floor twins.
+
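+As a rough illustration (the model IDs and field names are hypothetical, not taken from the official sample), such a Home interface might look like this:
+
+```json
+{
+  "@id": "dtmi:example:Home;1",
+  "@type": "Interface",
+  "@context": "dtmi:dtdl:context;2",
+  "displayName": "Home",
+  "contents": [
+    {
+      "@type": "Property",
+      "name": "id",
+      "schema": "string"
+    },
+    {
+      "@type": "Relationship",
+      "name": "rel_has_floors",
+      "displayName": "Home has floors",
+      "target": "dtmi:example:Floor;1"
+    }
+  ]
+}
+```
+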
-Here is some additional guidance on distinguishing between DTDL **property** and **telemetry** fields in Azure Digital Twins.
+## Properties and telemetry
-The difference between properties and telemetry for Azure Digital Twins models is as follows:
+This section goes into more detail about **properties** and **telemetry** in DTDL models.
+
+### Difference between properties and telemetry
+
+Here's some additional guidance on conceptually distinguishing between DTDL **property** and **telemetry** in Azure Digital Twins.
* **Properties** are expected to have backing storage. This means that you can read a property at any time and retrieve its value. If the property is writable, you can also store a value in the property.
* **Telemetry** is more like a stream of events; it's a set of data messages that have short lifespans. If you don't set up listening for the event and actions to take when it happens, there is no trace of the event at a later time. You can't come back to it and read it later.
  - In C# terms, telemetry is like a C# event.
Telemetry and properties often work together to handle data ingress from devices
You can also publish a telemetry event from the Azure Digital Twins API. As with other telemetry, that is a short-lived event that requires a listener to handle.
-#### Properties of relationships
+### Schema
-DTDL also allows for **relationships** to have properties of their own. When defining a relationship within a DTDL model, the relationship can have its own `properties` field where you can define custom properties to describe relationship-specific state.
+As per DTDL, the schema for **property** and **telemetry** attributes can be of standard primitive types (`integer`, `double`, `string`, and `Boolean`) and other types such as `DateTime` and `Duration`.
-## Model inheritance
+In addition to primitive types, property and telemetry fields can have these [complex types](#complex-object-type-example):
+* `Object`
+* `Map`
+* `Enum`
+* (**telemetry** only) `Array`
-Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model Room, and specialized variants ConferenceRoom and Gym. To express specialization, DTDL supports inheritance: interfaces can inherit from one or more other interfaces.
+They can also be [semantic types](#semantic-type-example), which allow you to annotate values with units.
-The following example re-imagines the Planet model from the earlier DTDL example as a subtype of a larger CelestialBody model. The "parent" model is defined first, and then the "child" model builds on it by using the field `extends`.
+### Basic property and telemetry examples
+Here is a basic example of a **property** on a DTDL model. This example shows the ID property of a Home.
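+
+A sketch of what this might look like as an entry in the interface's `contents` array (names are illustrative):
+
+```json
+{
+  "@type": "Property",
+  "name": "id",
+  "schema": "string"
+}
+```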
-In this example, CelestialBody contributes a name, a mass, and a temperature to Planet. The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired).
-Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain.
+Here is a basic example of a **telemetry** field on a DTDL model. This example shows Temperature telemetry on a Sensor.
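+
+A corresponding sketch for a telemetry entry (again illustrative):
+
+```json
+{
+  "@type": "Telemetry",
+  "name": "Temperature",
+  "schema": "double"
+}
+```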
-The extending interface cannot change any of the definitions of the parent interfaces; it can only add to them. It also cannot redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface cannot contain a declaration of *mass*, even if it's also a `double`.
-## Model code
+### Complex (object) type example
-Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension .json. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
+Properties and telemetry can be of complex types, including an `Object` type.
-### Possible schemas
+The following example shows another version of the Home model, with a property for its address. `address` is an object, with its own fields for street, city, state, and zip.
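+
+A sketch of such an `Object` property (field names are illustrative):
+
+```json
+{
+  "@type": "Property",
+  "name": "address",
+  "schema": {
+    "@type": "Object",
+    "fields": [
+      { "name": "street", "schema": "string" },
+      { "name": "city", "schema": "string" },
+      { "name": "state", "schema": "string" },
+      { "name": "zip", "schema": "string" }
+    ]
+  }
+}
+```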
-As per DTDL, the schema for *Property* and *Telemetry* attributes can be of standard primitive types (`integer`, `double`, `string`, and `Boolean`) and other types such as `DateTime` and `Duration`.
-In addition to primitive types, *Property* and *Telemetry* fields can have these complex types:
-* `Object`
-* `Map`
-* `Enum`
+### Semantic type example
-*Telemetry* fields also support `Array`.
+Semantic types make it possible to express a value with a unit. Properties and telemetry can be represented with any of the semantic types that are supported by DTDL. For more information on semantic types in DTDL and what values are supported, see [Semantic types in the DTDL v2 spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#semantic-types).
-### Example model
+The following example shows a Sensor model with a semantic-type telemetry for Temperature, and a semantic-type property for Humidity.
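+
+A sketch of what these might look like (note that the DTDL v2 spec names the humidity quantity kind `RelativeHumidity`; the field names are illustrative):
+
+```json
+[
+  {
+    "@type": ["Telemetry", "Temperature"],
+    "name": "temperature",
+    "schema": "double",
+    "unit": "degreeFahrenheit"
+  },
+  {
+    "@type": ["Property", "RelativeHumidity"],
+    "name": "humidity",
+    "schema": "double",
+    "unit": "percent"
+  }
+]
+```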
-This section contains an example of a typical model, written as a DTDL interface. The model describes **planets**, each with a name, a mass, and a temperature.
-
-Consider that planets may also interact with **moons** that are their satellites, and may contain **craters**. In the example below, the Planet model expresses connections to these other entities by referencing two external modelsΓÇöMoon and Crater. These models are also defined in the example code below, but are kept very simple so as not to detract from the primary Planet example.
+## Relationships
-The fields of the model are:
+This section goes into more detail about **relationships** in DTDL models.
-| Field | Description |
-| | |
-| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. |
-| `@type` | Identifies the kind of information being described. For an interface, the type is *Interface*. |
-| `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
-| `displayName` | [optional] Allows you to give the model a friendly name if desired. |
-| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (*Property*, *Telemetry*, *Command*, *Relationship*, or *Component*) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a *Property*). |
+### Basic relationship example
+
+Here is a basic example of a relationship on a DTDL model. This example shows a relationship on a Home model that allows it to connect to a Floor model.
++
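+A sketch of what such a relationship might look like inside the Home model's `contents` array (identifiers are placeholders):
+
+```json
+{
+  "@type": "Relationship",
+  "name": "rel_has_floors",
+  "displayName": "Home has floors",
+  "target": "dtmi:example:Floor;1"
+}
+```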
+### Targeted and non-targeted relationships
+
+Relationships can be defined with or without a **target**. A target specifies which types of twin the relationship can reach. For example, you might include a target to specify that a Home model can only have a *rel_has_floors* relationship with twins that are Floor twins.
+
+Sometimes, you might want to define a relationship without a specific target, so that the relationship can connect to many different types of twins.
+
+Here is an example of a relationship on a DTDL model that does not have a target. In this example, the relationship is for defining what sensors a Room might have, and the relationship can connect to any type.
++
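+Sketched, a non-targeted relationship simply omits the `target` field (names here are assumptions):
+
+```json
+{
+  "@type": "Relationship",
+  "name": "rel_has_sensors",
+  "displayName": "Room has sensors"
+}
+```
+
+Because no `target` is given, this relationship can point to a twin of any model.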
+### Properties of relationships
+
+DTDL also allows for **relationships** to have properties of their own. When defining a relationship within a DTDL model, the relationship can have its own `properties` field where you can define custom properties to describe relationship-specific state.
+
+The following example shows another version of the Home model, where the `rel_has_floors` relationship has a property representing when the related Floor was last occupied.
++
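+A sketch of a relationship carrying its own property (the `lastOccupied` name is taken from the description above; the rest is illustrative):
+
+```json
+{
+  "@type": "Relationship",
+  "name": "rel_has_floors",
+  "target": "dtmi:example:Floor;1",
+  "properties": [
+    {
+      "@type": "Property",
+      "name": "lastOccupied",
+      "schema": "dateTime"
+    }
+  ]
+}
+```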
+## Components
+
+This section goes into more detail about **components** in DTDL models.
+
+### Basic component example
+
+Here is a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat component.
+ > [!NOTE]
-> Note that the component interface (Crater in this example) is defined in the same array as the interface that uses it (Planet). Components must be defined this way in API calls in order for the interface to be found.
+> Note that the component interface (thermostat component) is defined in the same array as the interface that uses it (Room). Components must be defined this way in API calls in order for the interface to be found.
+
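+A minimal sketch of this pattern, with both interfaces defined in one array (all identifiers are placeholders):
+
+```json
+[
+  {
+    "@id": "dtmi:example:Room;1",
+    "@type": "Interface",
+    "@context": "dtmi:dtdl:context;2",
+    "displayName": "Room",
+    "contents": [
+      {
+        "@type": "Component",
+        "name": "thermostat",
+        "schema": "dtmi:example:Thermostat;1"
+      }
+    ]
+  },
+  {
+    "@id": "dtmi:example:Thermostat;1",
+    "@type": "Interface",
+    "@context": "dtmi:dtdl:context;2",
+    "displayName": "Thermostat",
+    "contents": [
+      {
+        "@type": "Property",
+        "name": "setPointTemp",
+        "schema": "double"
+      }
+    ]
+  }
+]
+```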
+## Model inheritance
+
+Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model Room, and specialized variants ConferenceRoom and Gym. To express specialization, **DTDL supports inheritance**. Interfaces can inherit from one or more other interfaces. This is done by adding an `extends` field to the model.
+
+The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired). A single parent can serve as the base model for multiple extending interfaces.
+
+The following example re-imagines the Home model from the earlier DTDL example as a subtype of a larger "core" model. The parent model (Core) is defined first, and then the child model (Home) builds on it by using `extends`.
++
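+A sketch of the pattern (identifiers and fields are placeholders): Core defines `id` and `name` properties, and Home inherits them through `extends`:
+
+```json
+[
+  {
+    "@id": "dtmi:example:Core;1",
+    "@type": "Interface",
+    "@context": "dtmi:dtdl:context;2",
+    "displayName": "Core",
+    "contents": [
+      { "@type": "Property", "name": "id", "schema": "string" },
+      { "@type": "Property", "name": "name", "schema": "string" }
+    ]
+  },
+  {
+    "@id": "dtmi:example:Home;1",
+    "@type": "Interface",
+    "@context": "dtmi:dtdl:context;2",
+    "displayName": "Home",
+    "extends": "dtmi:example:Core;1",
+    "contents": [
+      {
+        "@type": "Relationship",
+        "name": "rel_has_floors",
+        "target": "dtmi:example:Floor;1"
+      }
+    ]
+  }
+]
+```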
-## Best practices for designing models
+In this case, Core contributes an ID and name to Home. Other models can also extend the Core model to get these properties as well. Here is a Room model extending the same parent interface:
++
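+Sketched with the same placeholder identifiers, a Room model could extend Core like this:
+
+```json
+{
+  "@id": "dtmi:example:Room;1",
+  "@type": "Interface",
+  "@context": "dtmi:dtdl:context;2",
+  "displayName": "Room",
+  "extends": "dtmi:example:Core;1",
+  "contents": [
+    { "@type": "Property", "name": "occupied", "schema": "boolean" }
+  ]
+}
+```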
+Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain.
+
+The extending interface cannot change any of the definitions of the parent interfaces; it can only add to them. It also cannot redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface cannot contain a declaration of *mass*, even if it's also a `double`.
+
+## Modeling best practices
While designing models to reflect the entities in your environment, it can be useful to look ahead and consider the [query](concepts-query-language.md) implications of your design. You may want to design properties in a way that will avoid large result sets from graph traversal. You may also want to model relationships that need to be answered in a single query as single-level relationships.
While designing models to reflect the entities in your environment, it can be us
[!INCLUDE [Azure Digital Twins: validate models info](../../includes/digital-twins-validate.md)]
-## Tools for models
+## Modeling tools
There are several samples available to make it even easier to deal with models and ontologies. They are located in this repository: [Tools for Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-tools).
This section describes the current set of samples in more detail.
### Model uploader
-_**For uploading models to Azure Digital Twins**_
-
Once you are finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. This is done using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [How-to: Manage DTDL models](how-to-manage-model.md#upload-models). However, if you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use this [Azure Digital Twins Model Uploader sample](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/ModelUploader) to upload many models at once. Follow the instructions provided with the sample to configure and use this project to upload models into your own instance.

### Model visualizer
-_**For visualizing models**_
-
Once you have uploaded models into your Azure Digital Twins instance, you can view them, including any inheritance and model relationships, using the [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/AdtModelVisualizer). This sample is currently in a draft state. We encourage the digital twins development community to extend and contribute to the sample.

## Next steps
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
You can create a new IoT Hub for this purpose with Azure Digital Twins, or conne
You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-### Output to TSI, storage, and analytics
+### Output to ADX, TSI, storage, and analytics
The data in your Azure Digital Twins model can be routed to downstream Azure services for additional analytics or storage. This is provided through **event routes**, which use [Event Hub](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your desired data flows. Some things you can do with event routes include:
+* Sending digital twin data to ADX for querying with the [Azure Digital Twins query plugin for Azure Data Explorer (ADX)](concepts-data-explorer-plugin.md)
+* [Connecting Azure Digital Twins to Time Series Insights (TSI)](how-to-integrate-time-series-insights.md) to track time series history of each twin
+* Aligning a Time Series Model in Time Series Insights with a source in Azure Digital Twins
* Storing Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md)
* Analyzing Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools
* Integrating larger workflows with Logic Apps
-* Connecting Azure Digital Twins to Time Series Insights to track time series history of each twin
-* Aligning a Time Series Model in Time Series Insights with a source in Azure Digital Twins
This is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
A complete solution using Azure Digital Twins may contain the following parts:
* One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph. * One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md). * An IoT hub to provide device management and IoT data stream capabilities.
-* Downstream services to handle tasks such as workflow integration (like [Logic Apps](../logic-apps/logic-apps-overview.md), cold storage, time series integration, or analytics).
+* Downstream services to handle tasks such as workflow integration (like [Logic Apps](../logic-apps/logic-apps-overview.md), cold storage, Azure Data Explorer, time series integration, or analytics).
The following diagram shows where Azure Digital Twins lies in the context of a larger Azure IoT solution.
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
In this article, you migrate a MySQL database restored to an on-premises instanc
> Amazon Relational Database Service (RDS) for MySQL and Amazon Aurora (MySQL-based) are also supported as sources for migration. > [!IMPORTANT]
-> The "MySQL to Azure Database for MySQL" online migration scenario is being replaced with a parallelized, highly performant offline migration scenario from June 1, 2021. For online migrations, you can use this new offering together with [data-in replication](../mysql/concepts-data-in-replication.md). Alternatively, use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with data-in replication for online migrations.
+> For online migrations, you can use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with [data-in replication](https://docs.microsoft.com/azure/mysql/concepts-data-in-replication).
+ This article helps you automate the scenario where the source and target database names can be the same or different, and where, as part of the migration, either all or only some of the tables in the target database need to be migrated, sharing the same name and table structure. Although the article assumes the source to be a MySQL database instance and the target to be Azure Database for MySQL, it can be used to migrate from one Azure Database for MySQL to another just by changing the source server name and credentials. Migration from lower-version MySQL servers (v5.6 and above) to higher versions is also supported.
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for offline m
| **Azure SQL VM** | SQL Server | ✔ | GA |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL - Single Server** | MySQL | ✔ | |
-| | RDS MySQL | ✔ | |
-| | Azure DB for MySQL* | ✔ | |
-| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | |
-| | RDS MySQL | ✔ | |
-| | Azure DB for MySQL* | ✔ | |
+| **Azure DB for MySQL - Single Server** | MySQL | ✔ | Public Preview |
+| | RDS MySQL | ✔ | Public Preview |
+| | Azure DB for MySQL* | ✔ | Public Preview |
+| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | Public Preview |
+| | RDS MySQL | ✔ | Public Preview |
+| | Azure DB for MySQL* | ✔ | Public Preview |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X | |
| | RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X |
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-blob-storage.md
These events are triggered when a client creates, replaces, or deletes a blob by
|Event name |Description| |-|--|
- |**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API **and** when the Block Blob is completely committed. <br><br>If clients use the `CopyBlob` operation on accounts that have the **hierarchical namespace** feature enabled on them, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** instead of when the Block Blob is completely committed. |
+ |**Microsoft.Storage.BlobCreated** |Triggered when a blob is created or replaced. <br>Specifically, this event is triggered when clients use the `PutBlob`, `PutBlockList`, or `CopyBlob` operations that are available in the Blob REST API **and** when the Block Blob is completely committed. <br>If clients use the `CopyBlob` operation on accounts that have the **hierarchical namespace** feature enabled on them, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed. |
|**Microsoft.Storage.BlobDeleted** |Triggered when a blob is deleted. <br>Specifically, this event is triggered when clients call the `DeleteBlob` operation that is available in the Blob REST API. |
+ |**Microsoft.Storage.BlobTierChanged** |Triggered when the blob access tier is changed. Specifically, when clients call the `Set Blob Tier` operation that is available in the Blob REST API, this event is triggered after the tier change completes. |
### List of the events for Azure Data Lake Storage Gen 2 REST APIs
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.BlobTierChanged event
+
+```json
+{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/my-storage-account",
+ "subject": "/blobServices/default/containers/testcontainer/blobs/Auto.jpg",
+ "eventType": "Microsoft.Storage.BlobTierChanged",
+ "id": "0fdefc06-b01e-0034-39f6-4016610696f6",
+ "data": {
+ "api": "SetBlobTier",
+ "clientRequestId": "68be434c-1a0d-432f-9cd7-1db90bff83d7",
+ "requestId": "0fdefc06-b01e-0034-39f6-401661000000",
+ "contentType": "image/jpeg",
+ "contentLength": 105891,
+ "blobType": "BlockBlob",
+ "url": "https://my-storage-account.blob.core.windows.net/testcontainer/Auto.jpg",
+ "sequencer": "000000000000000000000000000089A4000000000018d6ea",
+ "storageDiagnostics": {
+ "batchId": "3418f7a9-7006-0014-00f6-406dc6000000"
+ }
+ },
+ "dataVersion": "",
+ "metadataVersion": "1",
+ "eventTime": "2021-05-04T15:00:00.8350154Z"
+}
+```
+
### Microsoft.Storage.BlobRenamed event

```json
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-connect-debezium.md
INSERT INTO todos (description, todo_status) VALUES ('configure and install conn
INSERT INTO todos (description, todo_status) VALUES ('start connector', 'pending'); ```
-The connector should now spring into action and send change data events to an Event Hubs topic with the following na,e `my-server.public.todos`, assuming you have `my-server` as the value for `database.server.name` and `public.todos` is the table whose changes you're tracking (as per `table.whitelist` configuration)
+The connector should now spring into action and send change data events to an Event Hubs topic with the following name: `my-server.public.todos`, assuming you have `my-server` as the value for `database.server.name` and `public.todos` is the table whose changes you're tracking (as per the `table.whitelist` configuration).
**Check Event Hubs topic**
To learn more about Event Hubs for Kafka, see the following articles:
- [Connect Apache Flink to an event hub](event-hubs-kafka-flink-tutorial.md)
- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka)
- [Connect Akka Streams to an event hub](event-hubs-kafka-akka-streams-tutorial.md)
-- [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)
+- [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)
expressroute Expressroute Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-global-reach.md
Previously updated : 04/28/2021 Last updated : 06/04/2021
ExpressRoute Global Reach is supported in the following places.
* France * Germany * Hong Kong SAR
+* India
* Ireland * Japan * Korea
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-arm.md
This article helps you link virtual networks (VNets) to Azure ExpressRoute circu
* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
-* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of addresss spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
In this tutorial, you learn how to: > [!div class="checklist"]
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
In this tutorial, you learn how to:
* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+
* You can [view a video](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit) before beginning to better understand the steps.

## Connect a VNet to a circuit - same subscription
expressroute Howto Linkvnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/howto-linkvnet-cli.md
In this tutorial, you learn how to:
* A single VNet can be linked to up to 16 ExpressRoute circuits. Use the following process to create a new connection object for each ExpressRoute circuit you're connecting to. The ExpressRoute circuits can be in the same subscription, different subscriptions, or a mix of both. * If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on will also allow you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
+* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add additional address spaces, up to 1,000, to the local or peered virtual networks.
+
## Connect a virtual network in the same subscription to a circuit

You can connect a virtual network gateway to an ExpressRoute circuit by using the following example. Make sure that the virtual network gateway is created and is ready for linking before you run the command.
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/rule-processing.md
Previously updated : 06/01/2021 Last updated : 06/04/2021

# Configure Azure Firewall rules
-You can configure NAT rules, network rules, and applications rules on Azure Firewall using either classic rules or Firewall Policy.
+You can configure NAT rules, network rules, and application rules on Azure Firewall using either classic rules or Firewall Policy. Azure Firewall denies all traffic by default, until rules are manually configured to allow traffic.
## Rule processing using classic rules
iot-develop Quickstart Device Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-device-development.md
Previously updated : 02/15/2021 Last updated : 06/04/2021

# Getting started with Azure IoT embedded device development
The following tutorials are included in the getting started guide:
|Quickstart|Device| ||--|
-|[Getting started with the ST Microelectronics B-L475E-IOT01 Discovery kit](https://go.microsoft.com/fwlink/p/?linkid=2129536) |[ST Microelectronics B-L475E-IOT01](https://www.st.com/content/st_com/en/products/evaluation-tools/product-evaluation-tools/mcu-mpu-eval-tools/stm32-mcu-mpu-eval-tools/stm32-discovery-kits/b-l475e-iot01a.html)|
-|[Getting started with the ST Microelectronics B-L4S5I-IOT01 Discovery kit](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/STM32L4_L4+) |[ST Microelectronics B-L4S5I-IOT01](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html)|
-|[Getting started with the NXP MIMXRT1060-EVK Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129821) |[NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK)|
-|[Getting started with the NXP MIMXRT1050-EVKB Evaluation kit](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB) |[NXP MIMXRT1050-EVKB](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK)|
|[Getting started with the Microchip ATSAME54-XPRO Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129537) |[Microchip ATSAME54-XPRO](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)| |[Getting started with the Renesas Starter Kit+ for RX65N-2MB](https://github.com/azure-rtos/getting-started/tree/master/Renesas/RSK_RX65N_2MB) |[Renesas Starter Kit+ for RX65N-2MB](https://www.renesas.com/us/en/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-2mb-starter-kit-plus-renesas-starter-kit-rx65n-2mb)|
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
Last updated 06/02/2021
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
-In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT. The article is part of the series [Get started with Azure IoT embedded device development](quickstart-device-development.md). The series introduces device developers to Azure RTOS, and shows how to connect several device evaluation kits to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (hereafter, MXCHIP DevKit) to Azure IoT.
You'll complete the following tasks:
To remove the entire Azure IoT Central sample application and all its devices an
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the MXCHIP DevKit device. You also used the IoT Central portal to create Azure resources, connect the MXCHIP DevKit securely to Azure, view telemetry, and send messages.
-> [!IMPORTANT]
-> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
- As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT. > [!div class="nextstepaction"]
As a next step, explore the following articles to learn more about using the IoT
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
+
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
+
+ Title: Connect an NXP MIMXRT1050-EVKB to Azure IoT Central quickstart
+description: Use Azure RTOS embedded software to connect an NXP MIMXRT1050-EVKB device to Azure IoT and send telemetry.
+++
+ms.devlang: c
+ Last updated : 06/04/2021++
+# Quickstart: Connect an NXP MIMXRT1050-EVKB Evaluation kit to IoT Central
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB/)
+
+In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (hereafter, NXP EVK) to Azure IoT.
+
+You will complete the following tasks:
+
+* Install a set of embedded development tools for programming an NXP EVK in C
+* Build an image and flash it onto the NXP EVK
+* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
+
+## Prerequisites
+
+* A PC running Microsoft Windows 10
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+
+ > * The [NXP MIMXRT1050-EVKB](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK) (NXP EVK)
+ > * USB 2.0 A male to Micro USB male cable
+ > * Wired Ethernet access
+ > * Ethernet cable
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ > *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
++
+## Prepare the device
+
+To connect the NXP EVK to Azure, you'll modify a configuration file for Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ > *getting-started\NXP\MIMXRT1050-EVKB\app\azure_config.h*
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
+ |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+
+1. Save and close the file.
+
+### Build the image
+
+In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
+
+> *getting-started\NXP\MIMXRT1050-EVKB\tools\rebuild.bat*
+
+After the build completes, confirm that the binary file was created in the following path:
+
+> *getting-started\NXP\MIMXRT1050-EVKB\build\app\mimxrt1050_azure_iot.bin*
+
+### Flash the image
+
+1. On the NXP EVK, locate the **Reset** button, the Micro USB port, and the Ethernet port. You use these components in the following steps. All three are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/nxp-1050-evkb-board.png" alt-text="Locate key components on the NXP EVK board":::
+
+1. Connect the Micro USB cable to the Micro USB port on the NXP EVK, and then connect it to your computer. After the device powers up, a solid green LED shows the power status.
+1. Use the Ethernet cable to connect the NXP EVK to an Ethernet port.
+1. In File Explorer, find the binary file that you created in the previous section.
+1. Copy the binary file *mimxrt1050_azure_iot.bin*.
+1. In File Explorer, find the NXP EVK device connected to your computer. The device appears as a drive on your system with the drive label **RT1050-EVK**.
+1. Paste the binary file into the root folder of the NXP EVK. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, a red LED blinks rapidly on the NXP EVK.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md).
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+ * **Port**: The port that your NXP EVK is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/termite-settings.png" alt-text="Confirm settings in the Termite app":::
+
+1. Select OK.
+1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Starting Azure thread
+
+ Initializing DHCP
+ IP address: 10.0.0.77
+ Mask: 255.255.255.0
+ Gateway: 10.0.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address: 10.0.0.1
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP client
+ SNTP server 0.pool.ntp.org
+ SNTP IP address: 142.147.92.5
+ SNTP time update: May 28, 2021 17:36:33.325 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT DPS client
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope: ***
+ Registration ID: mydevice
+ SUCCESS: Azure IoT DPS client initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;1
+ Connected to IoT Hub
+ SUCCESS: Azure IoT Hub client initialized
+ ```
+
+Keep Termite open to monitor device output in the following steps.
+
+## Verify the device status
+
+To view the device status in IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** is updated to **Provisioned**.
+1. Confirm that the **Device template** is updated to **Getting Started Guide**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-device-view-status.png" alt-text="View device status in IoT Central":::
+
+## View telemetry
+
+With IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in IoT Central portal:
+
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
+1. The temperature is measured from the MCU wafer.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-device-telemetry.png" alt-text="View device telemetry in IoT Central":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+## Call a direct method on the device
+
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
+
+To call a method in IoT Central portal:
+
+1. Select the **Command** tab from the device page.
+1. In the **State** dropdown, select **True**, and then select **Run**. There will be no change on the device as there isn't an available LED to toggle. You can view the output in Termite to monitor the status of the methods.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-invoke-method.png" alt-text="Call a direct method on a device":::
+
+1. In the **State** dropdown, select **False**, and then select **Run**.
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab from the device page.
++
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can delete them from the IoT Central portal.
+
+To remove the entire Azure IoT Central sample application and all its devices and resources:
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You also used the IoT Central portal to create Azure resources, connect the NXP EVK securely to Azure, view telemetry, and send messages.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
+
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
+
+ Title: Connect an NXP MIMXRT1060-EVK to Azure IoT Central quickstart
+description: Use Azure RTOS embedded software to connect an NXP MIMXRT1060-EVK device to Azure IoT and send telemetry.
+++
+ms.devlang: c
+ Last updated : 06/04/2021++
+# Quickstart: Connect an NXP MIMXRT1060-EVK Evaluation kit to IoT Central
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1060-EVK)
+
+In this quickstart, you use Azure RTOS to connect the NXP MIMXRT1060-EVK Evaluation kit (hereafter, the NXP EVK) to Azure IoT.
+
+You will complete the following tasks:
+
+* Install a set of embedded development tools for programming an NXP EVK in C
+* Build an image and flash it onto the NXP EVK
+* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
+
+## Prerequisites
+
+* A PC running Microsoft Windows 10
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+
+ > * The [NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK) (NXP EVK)
+ > * USB 2.0 A male to Micro USB male cable
+ > * Wired Ethernet access
+ > * Ethernet cable
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ > *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
++
+## Prepare the device
+
+To connect the NXP EVK to Azure, you'll modify a configuration file for Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ > *getting-started\NXP\MIMXRT1060-EVK\app\azure_config.h*
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
+ |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+
+1. Save and close the file.
+
+### Build the image
+
+In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
+
+> *getting-started\NXP\MIMXRT1060-EVK\tools\rebuild.bat*
+
+After the build completes, confirm that the binary file was created in the following path:
+
+> *getting-started\NXP\MIMXRT1060-EVK\build\app\mimxrt1060_azure_iot.bin*
+
+### Flash the image
+
+1. On the NXP EVK, locate the **Reset** button, the Micro USB port, and the Ethernet port. You use these components in the following steps. All three are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/nxp-evk-board.png" alt-text="Locate key components on the NXP EVK board":::
+
+1. Connect the Micro USB cable to the Micro USB port on the NXP EVK, and then connect it to your computer. After the device powers up, a solid green LED shows the power status.
+1. Use the Ethernet cable to connect the NXP EVK to an Ethernet port.
+1. In File Explorer, find the binary file that you created in the previous section.
+1. Copy the binary file *mimxrt1060_azure_iot.bin*.
+1. In File Explorer, find the NXP EVK device connected to your computer. The device appears as a drive on your system with the drive label **RT1060-EVK**.
+1. Paste the binary file into the root folder of the NXP EVK. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, a red LED blinks rapidly on the NXP EVK.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md).
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+ * **Port**: The port that your NXP EVK is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/termite-settings.png" alt-text="Confirm settings in the Termite app":::
+
+1. Select OK.
+1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Starting Azure thread
+
+ Initializing DHCP
+ IP address: 192.168.0.19
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address: 75.75.75.75
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP client
+ SNTP server 0.pool.ntp.org
+ SNTP IP address: 108.62.122.57
+ SNTP time update: May 20, 2021 19:41:20.319 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT DPS client
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope: ***
+ Registration ID: mydevice
+ SUCCESS: Azure IoT DPS client initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;1
+ Connected to IoT Hub
+ SUCCESS: Azure IoT Hub client initialized
+ ```
+
+Keep Termite open to monitor device output in the following steps.
+
+## Verify the device status
+
+To view the device status in IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** is updated to **Provisioned**.
+1. Confirm that the **Device template** is updated to **Getting Started Guide**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/iot-central-device-view-status.png" alt-text="View device status in IoT Central":::
+
+## View telemetry
+
+With IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in IoT Central portal:
+
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
+1. The temperature is measured from the MCU wafer.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/iot-central-device-telemetry.png" alt-text="View device telemetry in IoT Central":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+## Call a direct method on the device
+
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
+
+To call a method in IoT Central portal:
+
+1. Select the **Command** tab from the device page.
+1. In the **State** dropdown, select **True**, and then select **Run**. There will be no change on the device as there isn't an available LED to toggle; however, you can view the output in Termite to monitor the status of the methods.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/iot-central-invoke-method.png" alt-text="Call a direct method on a device":::
+
+1. In the **State** dropdown, select **False**, and then select **Run**.
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab from the device page.
++
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can delete them from the IoT Central portal.
+
+To remove the entire Azure IoT Central sample application and all its devices and resources:
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You also used the IoT Central portal to create Azure resources, connect the NXP EVK securely to Azure, view telemetry, and send messages.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
+
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
+
+ Title: Connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A to Azure IoT Central quickstart
+description: Use Azure RTOS embedded software to connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A device to Azure IoT and send telemetry.
+++
+ms.devlang: c
+ Last updated : 06/02/2021++
+# Quickstart: Connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A Discovery kit to IoT Central
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/STM32L4_L4+)
+
+In this quickstart, you use Azure RTOS to connect either the ST Microelectronics [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) or [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) Discovery kit (hereafter, the STM DevKit) to Azure IoT.
+
+You will complete the following tasks:
+
+* Install a set of embedded development tools for programming the STM DevKit in C
+* Build an image and flash it onto the STM DevKit
+* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
+
+## Prerequisites
+
+* A PC running Microsoft Windows 10
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+
+ > * Either the [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) or the [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit)
+ > * Wi-Fi 2.4 GHz
+ > * USB 2.0 A male to Micro USB male cable
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ > *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
+
+## Prepare the device
+
+To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi and Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ > *getting-started\STMicroelectronics\STM32L4_L4+\app\azure_config.h*
+
+1. Set the Wi-Fi constants to the following values from your local environment.
+
+ |Constant name|Value|
+ |-|--|
+ |`WIFI_SSID` |{*Your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Your Wi-Fi password*}|
+ |`WIFI_MODE` |{*One of the enumerated Wi-Fi mode values in the file*}|
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
+ |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+
+1. Save and close the file.
+
+### Build the image
+
+In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
+
+> *getting-started\STMicroelectronics\STM32L4_L4+\tools\rebuild.bat*
+
+After the build completes, confirm that two binary files were created, one for each STM DevKit variant. The build saves the images to the following paths:
+
+> *getting-started\STMicroelectronics\STM32L4_L4+\build\app\stm32l475_azure_iot.bin*
+
+> *getting-started\STMicroelectronics\STM32L4_L4+\build\app\stm32l4S5_azure_iot.bin*
+
+### Flash the image
+
+1. On the STM DevKit MCU, locate the **Reset** button, the Micro USB port, which is labeled **USB STLink**, and the board part number. You will refer to these items in the next steps. All of them are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e/stm-devkit-board.png" alt-text="Locate key components on the STM DevKit board":::
+
+1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
+
+ > [!NOTE]
+ > For detailed setup information about the STM DevKit, see the instructions on the packaging, or see [B-L475E-IOT01A Resources](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html#resource) or [B-L4S5I-IOT01A Resources](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html#resource).
+
+1. In File Explorer, find the binary files that you created in the previous section.
+
+1. Copy the binary file whose file name corresponds to the part number of the STM Devkit you're using. For example, if your board part number is **B-L475E-IOT01A1**, copy the binary file named *stm32l475_azure_iot.bin*.
+
+1. In File Explorer, find the STM Devkit that's connected to your computer. The device appears as a drive on your system with the drive label **DIS_L4IOT**.
+
+1. Paste the binary file into the root folder of the STM Devkit. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, an LED toggles between red and green on the STM DevKit.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://my.st.com/content/ccc/resource/technical/software/driver/files/stsw-link009.zip) and try again. See [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md) for additional steps.
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+ * **Port**: The port that your STM DevKit is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e/termite-settings.png" alt-text="Confirm settings in the Termite app":::
+
+1. Select OK.
+1. Press the **Reset** button on the device. The button is black and is labeled on the device.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Starting Azure thread
+
+ Initializing WiFi
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: C4:7F:51:8F:67:F6
+ Firmware revision: C3.5.2.5.STM
+ Connecting to SSID 'iot'
+ SUCCESS: WiFi connected to iot
+
+ Initializing DHCP
+ IP address: 192.168.0.22
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address: 75.75.75.75
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP client
+ SNTP server 0.pool.ntp.org
+ SNTP IP address: 108.62.122.57
+ SNTP time update: May 21, 2021 22:42:8.394 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT DPS client
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope: ***
+ Registration ID: mydevice
+ SUCCESS: Azure IoT DPS client initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;1
+ Connected to IoT Hub
+ SUCCESS: Azure IoT Hub client initialized
+ ```
+ > [!IMPORTANT]
+ > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the [Inventek ISM 43362 Wi-Fi module firmware update](https://www.st.com/resource/en/utilities/inventek_fw_updater.zip). Then press the **Reset** button on the device to recheck your connection, and continue with this quickstart.
++
+Keep Termite open to monitor device output in the following steps.
+
+## Verify the device status
+
+To view the device status in the IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** is updated to **Provisioned**.
+1. Confirm that the **Device template** is updated to **Getting Started Guide**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e/iot-central-device-view-status.png" alt-text="View device status in IoT Central":::
+
+## View telemetry
+
+With IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in the IoT Central portal:
+
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e/iot-central-device-telemetry.png" alt-text="View device telemetry in IoT Central":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+## Call a direct method on the device
+
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, a configurable connection timeout, and a method timeout. In this section, you call a method that turns an LED on or off.
+
+To call a method in the IoT Central portal:
+
+1. Select the **Command** tab from the device page.
+1. In the **State** dropdown, select **True**, and then select **Run**. The LED light should turn on.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e/iot-central-invoke-method.png" alt-text="Call a direct method on a device":::
+
+1. In the **State** dropdown, select **False**, and then select **Run**. The LED light should turn off.
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab on the device page.
++
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](https://github.com/azure-rtos/getting-started/blob/master/docs/troubleshooting.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can delete them from the IoT Central portal.
+
+To remove the entire Azure IoT Central sample application and all its devices and resources:
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view telemetry, and send messages.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
++
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
+++
iot-edge How To Edgeagent Direct Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-edgeagent-direct-method.md
In the Azure portal, invoke the method with the method name `ping` and an empty
The **RestartModule** method allows for remote management of modules running on an IoT Edge device. If a module is reporting a failed state or other unhealthy behavior, you can trigger the IoT Edge agent to restart it. A successful restart command returns an empty payload and **"status": 200**.
-The RestartModule method is available in IoT Edge version 1.0.9 and later.
+The RestartModule method is available in IoT Edge version 1.0.9 and later.
+
+>[!TIP]
+>The IoT Edge troubleshooting page in the Azure portal provides a simplified experience for restarting modules. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md).
You can use the RestartModule direct method on any module running on an IoT Edge device, including the edgeAgent module itself. However, if you use this direct method to shut down the edgeAgent, you won't receive a success result since the connection is disrupted while the module restarts.
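
For reference, here's a hedged sketch of invoking RestartModule from a back-end app with the Python service SDK (`azure-iot-hub`); the connection string, device ID, and module name are placeholders:

```python
# Sketch: call the edgeAgent RestartModule direct method from a back-end app.
# The connection string, device ID, and module name are placeholders.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

registry_manager = IoTHubRegistryManager("<IoT hub connection string>")
method = CloudToDeviceMethod(
    method_name="RestartModule",
    payload={"schemaVersion": "1.0", "id": "<module name>"},
)
response = registry_manager.invoke_device_module_method(
    "<edge device ID>", "$edgeAgent", method
)
print(response.status)  # 200 indicates a successful restart
```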
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
The [Logger class in IoT Edge](https://github.com/Azure/iotedge/blob/master/edge
Use the **GetModuleLogs** direct method to retrieve the logs of an IoT Edge module.
+>[!TIP]
+>The IoT Edge troubleshooting page in the Azure portal provides a simplified experience for viewing module logs. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md).
+ This method accepts a JSON payload with the following schema: ```json
iot-edge Troubleshoot In Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot-in-portal.md
+
+ Title: Troubleshoot from the Azure portal - Azure IoT Edge | Microsoft Docs
+description: Use the troubleshooting page in the Azure portal to monitor IoT Edge devices and modules
+++ Last updated : 05/26/2021+++++
+# Troubleshoot IoT Edge devices from the Azure portal
++
+IoT Edge provides a streamlined way of monitoring and troubleshooting modules in the Azure portal. The troubleshooting page is a wrapper for the IoT Edge agent's direct methods so that you can easily retrieve logs from deployed modules and remotely restart them.
+
+## Prerequisites
+
+The full functionality of this troubleshooting feature in the portal requires IoT Edge version 1.1.3 or newer if you're on the long-term support branch, or version 1.2.1 or newer if you're on the latest stable branch. Both the IoT Edge host component and the edgeAgent module need to be on these versions.
+
+## Access the troubleshooting page
+
+You can access the troubleshooting page in the portal through either the IoT Edge device details page or the IoT Edge module details page.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub.
+
+1. In the left pane, select **IoT Edge** from the menu.
+
+1. Select the IoT Edge device that you want to monitor from the list of devices.
+
+1. From the device details page, you can either select **Troubleshoot** from the menu or select the runtime status of a particular module that you want to inspect.
+
+ ![From the device details page select troubleshoot or a module runtime status](./media/troubleshoot-in-portal/troubleshoot-from-device-details.png)
+
+1. From the device details page, you can also select the name of a module to open the module details page. From there, you can select **Troubleshoot** from the menu.
+
+ ![From the module details page select troubleshoot](./media/troubleshoot-in-portal/troubleshoot-from-module-details.png)
+
+## View module logs in the portal
+
+On the **Troubleshoot** page, you can view and download logs from any of the running modules on your IoT Edge device.
+
+This page displays a maximum of 1,500 log lines; any logs longer than that are truncated. If the logs are too large, the attempt to get module logs will fail. In that case, try changing the time range filter to retrieve less data, or consider using direct methods to [Retrieve logs from IoT Edge deployments](how-to-retrieve-iot-edge-logs.md) to gather larger log files.
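+
+For example, here's a hedged sketch of retrieving filtered logs with the `GetModuleLogs` direct method and the Python service SDK (`azure-iot-hub`); the connection string and names are placeholders:
+
+```python
+# Sketch: retrieve the last 100 log lines from a module through the
+# edgeAgent GetModuleLogs direct method. Placeholder names throughout.
+from azure.iot.hub import IoTHubRegistryManager
+from azure.iot.hub.models import CloudToDeviceMethod
+
+registry_manager = IoTHubRegistryManager("<IoT hub connection string>")
+method = CloudToDeviceMethod(
+    method_name="GetModuleLogs",
+    payload={
+        "schemaVersion": "1.0",
+        "items": [{"id": "edgeHub", "filter": {"tail": 100}}],
+        "encoding": "none",
+        "contentType": "text",
+    },
+)
+response = registry_manager.invoke_device_module_method(
+    "<edge device ID>", "$edgeAgent", method
+)
+print(response.payload)
+```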
+
+Use the dropdown menu to choose which module to inspect.
+
+![Choose modules from the dropdown menu](./media/troubleshoot-in-portal/select-module.png)
+
+By default, this page displays the last fifteen minutes of logs. Select the **Time range** filter to see different logs. Use the slider to select a time window within the last 60 minutes, or check **Enter time instead** to choose a specific datetime window.
+
+![Select time range](./media/troubleshoot-in-portal/select-time-range.png)
+
+Once you have the logs from the module that you want to troubleshoot during the time range that you want to inspect, you can use the **Find** filter to retrieve specific lines from the logs. You can filter for either warnings or errors, or provide a specific term or phrase to search for. The **Find** feature supports plaintext searches or [.NET regular expressions](/dotnet/standard/base-types/regular-expression-language-quick-reference) for more complex searches.
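+
+For example, the .NET regular expression `(?i)(error|warning)` matches log lines that contain either term, ignoring case.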
+
+You can download the module logs as a text file. The downloaded log file will reflect any active filters you have applied to the logs.
+
+>[!TIP]
+>The CPU utilization on a device will spike temporarily as it gathers logs in response to a request from the portal. This behavior is expected, and the utilization should stabilize after the task is complete.
+
+## Restart modules
+
+The **Troubleshoot** page includes a feature to restart a module. Selecting this option sends a command to the IoT Edge agent to restart the selected module. Restarting a module won't affect your ability to retrieve logs from before the restart.
+
+![Restart a module from the troubleshoot page](./media/troubleshoot-in-portal/restart-module.png)
+
+## Next steps
+
+Find more tips for [Troubleshooting your IoT Edge device](troubleshoot.md) or learn about [Common issues and resolutions](troubleshoot-common-errors.md).
+
+If you have more questions, create a [Support request](https://portal.azure.com/#create/Microsoft.Support) for help.
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
On Windows:
Once the IoT Edge security daemon is running, look at the logs of the containers to detect issues. Start with your deployed containers, then look at the containers that make up the IoT Edge runtime: edgeAgent and edgeHub. The IoT Edge agent logs typically provide info on the lifecycle of each container. The IoT Edge hub logs provide info on messaging and routing.
-```cmd
-iotedge logs <container name>
-```
+You can retrieve the container logs from several places:
+
+* On the IoT Edge device, run the following command to view logs:
+
+ ```cmd
+ iotedge logs <container name>
+ ```
-You can also use a [direct method](how-to-retrieve-iot-edge-logs.md#upload-module-logs) call to a module on your device to upload the logs of that module to Azure Blob Storage.
+* In the Azure portal, use the built-in troubleshooting tool. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md).
+
+* Use the [UploadModuleLogs direct method](how-to-retrieve-iot-edge-logs.md#upload-module-logs) to upload the logs of a module to Azure Blob Storage.
## Clean up container logs
You can also check the messages being sent between IoT Hub and IoT devices. View
## Restart containers
-After investigating the logs and messages for information, you can try restarting containers:
+After investigating the logs and messages for information, you can try restarting containers.
+
+On the IoT Edge device, use the following commands to restart modules:
```cmd
iotedge restart <container name>
```

Restart the IoT Edge runtime containers:

```cmd
iotedge restart edgeAgent && iotedge restart edgeHub
```
+You can also restart modules remotely from the Azure portal. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md).
+
## Check your firewall and port configuration rules

Azure IoT Edge allows communication from an on-premises server to the Azure cloud using supported IoT Hub protocols; see [choosing a communication protocol](../iot-hub/iot-hub-devguide-protocols.md). For enhanced security, communication channels between Azure IoT Edge and Azure IoT Hub are always configured to be outbound. This configuration is based on the [Services Assisted Communication pattern](/archive/blogs/clemensv/service-assisted-communication-for-connected-devices), which minimizes the attack surface for a malicious entity to explore. Inbound communication is only required for specific scenarios where Azure IoT Hub needs to push messages to the Azure IoT Edge device. Cloud-to-device messages are protected using secure TLS channels and can be further secured using X.509 certificates and TPM device modules. The Azure IoT Edge Security Manager governs how this communication can be established; see [IoT Edge Security Manager](../iot-edge/iot-edge-security-manager.md).
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Understand the Azure IoT SDKs | Microsoft Docs
-description: Developer guide - information about and links to the various Azure IoT device and service SDKs that you can use to build device apps and back-end apps.
+ Title: Azure IoT Hub SDKs | Microsoft Docs
+description: Links to the Azure IoT Hub SDKs that you can use to build device apps and back-end apps.
Previously updated : 01/14/2020 Last updated : 06/01/2021
-# Understand and use Azure IoT Hub SDKs
+# Azure IoT Hub SDKs
There are two categories of software development kits (SDKs) for working with IoT Hub:
-* **IoT Hub Device SDKs** enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* [**IoT Hub Service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
-* **IoT Hub Service SDKs** enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
+* [**IoT Hub Device SDKs**](../iot-develop/about-iot-sdks.md) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
In addition, we also provide a set of SDKs for working with the [Device Provisioning Service](../iot-dps/about-iot-dps.md).
In addition, we also provide a set of SDKs for working with the [Device Provisio
Learn about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/).

-
-## OS platform and hardware compatibility
-
-Supported platforms for the SDKs can be found in [Azure IoT SDKs Platform Support](iot-hub-device-sdk-platform-support.md).
-
-For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
-
-## Azure IoT Hub Device SDKs
-
-The Microsoft Azure IoT device SDKs contain code that facilitates building applications that connect to and are managed by Azure IoT Hub services.
-
-Azure IoT Hub device SDK for .NET:
-
-* Download from [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client/). The namespace is Microsoft.Azure.Devices.Clients, which contains IoT Hub Device Clients (DeviceClient, ModuleClient).
-* [Source code](https://github.com/Azure/azure-iot-sdk-csharp)
-* [API reference](/dotnet/api/microsoft.azure.devices)
-* [Module reference](/dotnet/api/microsoft.azure.devices.client.moduleclient)
--
-Azure IoT Hub device SDK for Embedded C (ANSI C - C99):
-* [Build the Embedded C SDK](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot#build)
-* [Source code](https://github.com/Azure/azure-sdk-for-c)
-* [Size chart](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot#size-chart) for constrained devices.
-* [API reference](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.Identity/1.0.0/api/https://docsupdatetracker.net/index.html)
--
-Azure IoT Hub device SDK for C (ANSI C - C99):
-
-* Install from [apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)
-* [Source code](https://github.com/Azure/azure-iot-sdk-c)
-* [Compile the C Device SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/iothub_client/readme.md#compiling-the-c-device-sdk)
-* [API reference](/azure/iot-hub/iot-c-sdk-ref/)
-* [Module reference](/azure/iot-hub/iot-c-sdk-ref/iothub-module-client-h)
-* [Porting the C SDK to other platforms](https://github.com/Azure/azure-c-shared-utility/blob/master/devdoc/porting_guide.md)
-* [Developer documentation](https://github.com/Azure/azure-iot-sdk-c/tree/master/doc) for information on cross-compiling, getting started on different platforms, etc.
-* [Azure IoT Hub C SDK resource consumption information](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/c_sdk_resource_information.md)
-
-Azure IoT Hub device SDK for Java:
-
-* Add to [Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-device-sdk) project
-* [Source code](https://github.com/Azure/azure-iot-sdk-java)
-* [API reference](/java/api/com.microsoft.azure.sdk.iot.device)
-* [Module reference](/java/api/com.microsoft.azure.sdk.iot.device.moduleclient)
-
-Azure IoT Hub device SDK for Node.js:
-
-* Install from [npm](https://www.npmjs.com/package/azure-iot-device)
-* [Source code](https://github.com/Azure/azure-iot-sdk-node)
-* [API reference](/javascript/api/azure-iot-device/)
-* [Module reference](/javascript/api/azure-iot-device/moduleclient)
-
-Azure IoT Hub device SDK for Python:
-
-* Install from [pip](https://pypi.org/project/azure-iot-device/)
-* [Source code](https://github.com/Azure/azure-iot-sdk-python)
-* [API reference](/python/api/azure-iot-device)
-
-Azure IoT Hub device SDK for iOS:
-
-* Install from [CocoaPod](https://cocoapods.org/pods/AzureIoTHubClient)
-* [Samples](https://github.com/Azure-Samples/azure-iot-samples-ios)
-* API reference: see [C API reference](/azure/iot-hub/iot-c-sdk-ref/)
-
## Azure IoT Hub Service SDKs

The Azure IoT service SDKs contain code to facilitate building applications that interact directly with IoT Hub to manage devices and security.
-Azure IoT Hub service SDK for .NET:
-
-* Download from [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices/). The namespace is Microsoft.Azure.Devices, which contains IoT Hub Service Clients (RegistryManager, ServiceClients).
-* [Source code](https://github.com/Azure/azure-iot-sdk-csharp)
-* [API reference](/dotnet/api/microsoft.azure.devices)
-
-Azure IoT Hub service SDK for Java:
-
-* Add to [Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk) project
-* [Source code](https://github.com/Azure/azure-iot-sdk-java)
-* [API reference](/java/api/com.microsoft.azure.sdk.iot.service)
-
-Azure IoT Hub service SDK for Node.js:
-
-* Download from [npm](https://www.npmjs.com/package/azure-iothub)
-* [Source code](https://github.com/Azure/azure-iot-sdk-node)
-* [API reference](/javascript/api/azure-iothub/)
-
-Azure IoT Hub service SDK for Python:
-
-* Download from [pip](https://pypi.python.org/pypi/azure-iot-hub/)
-* [Source code](https://github.com/Azure/azure-iot-sdk-python/tree/master)
-* [API reference](/python/api/azure-iot-hub)
-
-Azure IoT Hub service SDK for C:
-
-The Azure IoT Service SDK for C is no longer under active development.
-We will continue to fix critical bugs such as crashes, data corruption, and security vulnerabilities. We will NOT add any new feature or fix bugs that are not critical, however.
-
-Azure IoT Service SDK support is available in higher-level languages ([C#](https://github.com/Azure/azure-iot-sdk-csharp), [Java](https://github.com/Azure/azure-iot-sdk-java), [Node](https://github.com/Azure/azure-iot-sdk-node), [Python](https://github.com/Azure/azure-iot-sdk-python)).
-
-* Download from [apt-get, MBED, Arduino IDE, or NuGet](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md)
-* [Source code](https://github.com/Azure/azure-iot-sdk-c)
+| Platform | Package | Code Repository | Samples | Reference |
+||||||
+| .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices) |
+| Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) |
+| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/service/samples) | [Reference](/javascript/api/azure-iothub/) |
+| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
+| Node.js | [npm](https://www.npmjs.com/package/azure-iot-common) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/service/samples/javascript) | [Reference](/javascript/api/azure-iothub/) |
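+
+As a quick orientation, here's a minimal, hedged sketch using the Python service SDK from the table above; the connection string and device ID are placeholders:
+
+```python
+# Sketch: look up a device identity with the azure-iot-hub service SDK.
+# The connection string and device ID are placeholders.
+from azure.iot.hub import IoTHubRegistryManager
+
+registry_manager = IoTHubRegistryManager("<IoT hub connection string>")
+device = registry_manager.get_device("<device ID>")
+print(device.device_id, device.status)
+```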

Azure IoT Hub service SDK for iOS:

* Install from [CocoaPod](https://cocoapods.org/pods/AzureIoTHubServiceClient)
* [Samples](https://github.com/Azure-Samples/azure-iot-samples-ios)
-> [!NOTE]
-> See the readme files in the GitHub repositories for information about using language and platform-specific package managers to install binaries and dependencies on your development machine.
-
## Microsoft Azure Provisioning SDKs

The **Microsoft Azure Provisioning SDKs** enable you to provision devices to your IoT Hub using the [Device Provisioning Service](../iot-dps/about-iot-dps.md).
-Azure Provisioning device and service SDKs for C#:
-
-* Download from [Device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) and [Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) from NuGet.
-* [Source code](https://github.com/Azure/azure-iot-sdk-csharp/)
-* [API reference](/dotnet/api/microsoft.azure.devices.provisioning.client)
+| Platform | Package | Source code | Reference |
+| --|--|--|--|
+| .NET|[Device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/), [Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
+| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
+| Java|[Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/master/provisioning)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
+| Node.js|[Device SDK](https://badge.fury.io/js/azure-iot-provisioning-device), [Service SDK](https://badge.fury.io/js/azure-iot-provisioning-service) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/master/provisioning)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
+| Python|[Device SDK](https://pypi.org/project/azure-iot-device/), [Service SDK](https://pypi.org/project/azure-iothub-provisioningserviceclient/)|[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Device Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient), [Service Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
-Azure Provisioning device and service SDKs for C:
-
-* Install from [apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)
-* [Source code](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client)
-* [API reference](/azure/iot-hub/iot-c-sdk-ref/)
+## Azure IoT Hub Device SDKs
-Azure Provisioning device and service SDKs for Java:
+The Microsoft Azure IoT device SDKs contain code that facilitates building applications that connect to and are managed by Azure IoT Hub services.
-* Add to [Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk) project
-* [Source code](https://github.com/Azure/azure-iot-sdk-java/blob/master/provisioning)
-* [API reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device)
+Learn more about the IoT Hub Device SDKs in the [IoT device development documentation](../iot-develop/about-iot-sdks.md).
-Azure Provisioning device and service SDKs for Node.js:
+## OS platform and hardware compatibility
-* [Source code](https://github.com/Azure/azure-iot-sdk-node/tree/master/provisioning)
-* [API reference](/javascript/api/overview/azure/iothubdeviceprovisioning)
-* Download [Device SDK](https://badge.fury.io/js/azure-iot-provisioning-device) and [Service SDK](https://badge.fury.io/js/azure-iot-provisioning-service) from npm
+Supported platforms for the SDKs can be found in [Azure IoT SDKs Platform Support](iot-hub-device-sdk-platform-support.md).
-Azure Provisioning device and service SDKs for Python:
+For more information about SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
-* [Source code](https://github.com/Azure/azure-iot-sdk-python)
-* Download [Device SDK](https://pypi.org/project/azure-iot-device/) and [Service SDK](https://pypi.org/project/azure-iothub-provisioningserviceclient/) from pip
## Next steps
-Azure IoT SDKs also provide a set of tools to help with development:
-
-* [iothub-diagnostics](https://github.com/Azure/iothub-diagnostics): a cross-platform command line tool to help diagnose issues related to connection with IoT Hub.
-* [azure-iot-explorer](https://github.com/Azure/azure-iot-explorer): a cross-platform desktop application to connect to your IoT Hub and add/manage/communicate with IoT devices.
-
Relevant docs related to development using the Azure IoT SDKs:

* Learn about [how to manage connectivity and reliable messaging](iot-hub-reliability-features-in-sdks.md) using the IoT Hub SDKs.
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys-ncipher.md
When you run this command, use these instructions:
* The parameter *protect* must be set to the value **module**, as shown. This creates a module-protected key. The BYOK toolset does not support OCS-protected keys.
* Replace the value of *contosokey* for the **ident** and **plainname** with any string value. To minimize administrative overheads and reduce the risk of errors, we recommend that you use the same value for both. The **ident** value must contain only numbers, dashes, and lower case letters.
-* The pubexp is left blank (default) in this example, but you can specify specific values. For more information, see the [nCipher documentation.](https://www.entrust.com/-/media/documentation/brochures/entrust-nshield-general-purpose-hsms-br-a4.pdf)
+* The pubexp is left blank (default) in this example, but you can specify a specific value. For more information, see the [nCipher documentation](https://go.ncipher.com/rs/104-QOX-775/images/nShield-family-br-A4.pdf).
This command creates a Tokenized Key file in your %NFAST_KMDATA%\local folder with a name starting with **key_simple_**, followed by the **ident** that was specified in the command. For example: **key_simple_contosokey**. This file contains an encrypted key.
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/recovery.md
For more information about soft-delete, see [Managed HSM soft-delete overview](s
* Recover soft-deleted HSM

  ```azurecli
- az keyvault recover --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME}
+ az keyvault recover --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --location {LOCATION}
  ```

* Purge soft-deleted HSM

  ```azurecli
- az keyvault purge --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME}
+ az keyvault purge --subscription {SUBSCRIPTION ID} --hsm-name {HSM NAME} --location {LOCATION}
  ```

  > [!WARNING]
  > This operation will permanently delete your HSM.
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-enterprise-security.md
-+ Previously updated : 11/20/2020 Last updated : 06/02/2021 # Enterprise security and governance for Azure Machine Learning
and to [deploy Azure Container Instances for web service endpoints](how-to-deplo
We don't recommend that admins revoke the access of the managed identity to the resources mentioned in the preceding table. You can restore access by using the [resync keys operation](how-to-change-storage-access-key.md).
-You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. For more information, see [Use managed identities for access control](how-to-use-managed-identities.md).
+> [!NOTE]
+> If your Azure Machine Learning workspace has compute targets (compute cluster, compute instance, Azure Kubernetes Service, etc.) that were created __before May 14th, 2021__, you may also have an additional Azure Active Directory account. The account name starts with `Microsoft-AzureML-Support-App-` and has contributor-level access to your subscription for every workspace region.
+>
+> If your workspace does not have an Azure Kubernetes Service (AKS) attached, you can safely delete this Azure AD account.
+>
+> If your workspace has attached AKS clusters, _and they were created before May 14th, 2021_, __do not delete this Azure AD account__. In this scenario, you must first delete and recreate the AKS cluster before you can delete the Azure AD account.
-Azure Machine Learning also creates an additional application (the name starts with `aml-` or `Microsoft-AzureML-Support-App-`) with contributor-level access in your subscription for every workspace region. For example, if you have one workspace in East US and one in North Europe in the same subscription, you'll see two of these applications. These applications enable Azure Machine Learning to help you manage compute resources.
+You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. For more information, see [Use managed identities for access control](how-to-use-managed-identities.md).
You can also configure managed identities for use with Azure Machine Learning compute cluster. This managed identity is independent of workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job may not have access to. For more information, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 05/11/2021 Last updated : 06/03/2021
In this article, learn how to configure Azure Firewall to control access to your Azure Machine Learning workspace and the public internet. To learn more about securing Azure Machine Learning, see [Enterprise security for Azure Machine Learning](concept-enterprise-security.md).
-> [!WARNING]
-> Access to data storage behind a firewall is only supported in code first experiences. Using the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md) to access data behind a firewall is not supported. To work with data storage on a private network with the studio, you must first [set up a virtual network](../virtual-network/quick-create-portal.md) and [give the studio access to data stored inside of a virtual network](how-to-enable-studio-virtual-network.md).
- ## Azure Firewall
+> [!IMPORTANT]
+> Azure Firewall is an Azure service that provides security _for Azure Virtual Network resources_. Some other Azure Services, such as Azure Storage Accounts, have their own firewall settings that _apply to the public endpoint for that specific service instance_. The information in this document is specific to Azure Firewall.
+>
+> For information on service instance firewall settings, see [Use studio in a virtual network](how-to-enable-studio-virtual-network.md#firewall-settings).
+
When using Azure Firewall, use __destination network address translation (DNAT)__ to create NAT rules for inbound traffic. For outbound traffic, create __network__ and/or __application__ rules. These rule collections are described in more detail in [What are some Azure Firewall concepts](../firewall/firewall-faq.yml#what-are-some-azure-firewall-concepts).

### Inbound configuration
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Previously updated : 04/01/2021 Last updated : 06/04/2021
$workspaceDns.CustomDnsConfigs | format-table
1. Select the link in the __Private endpoint__ column that is displayed.
1. A list of the fully qualified domain names (FQDN) and IP addresses for the workspace private endpoint is at the bottom of the page.
+ :::image type="content" source="./media/how-to-custom-dns/private-endpoint-custom-dns.png" alt-text="List of FQDNs in the portal":::
+
+ > [!TIP]
+ > If the DNS settings do not appear at the bottom of the page, use the __DNS configuration__ link from the left side of the page to view the FQDNs.
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-studio-virtual-network.md
The studio supports reading data from the following datastore types in a virtual
* Azure Data Lake Storage Gen2
* Azure SQL Database
+### Firewall settings
+
+Some storage services, such as an Azure Storage account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting lets you allow or disallow access from specific public internet IP addresses. __This is not supported__ when using Azure Machine Learning studio. It is supported when using the Azure Machine Learning SDK or CLI.
+
+> [!TIP]
+> Azure Machine Learning studio is supported when using the Azure Firewall service. For more information, see [Use your workspace behind a firewall](how-to-access-azureml-behind-firewall.md).
+
### Configure datastores to use workspace-managed identity

After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-service-endpoints) or [private endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-private-endpoints), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
Azure Machine Learning uses [datastores](concept-data.md#datastores) to connect
![Screenshot showing how to enable managed workspace identity](./media/how-to-enable-studio-virtual-network/enable-managed-identity.png)
-These steps add the workspace-managed identity as a __Reader__ to the storage service using Azure RBAC. __Reader__ access lets the workspace retrieve firewall settings to ensure that data doesn't leave the virtual network. Changes may take up to 10 minutes to take effect.
+These steps add the workspace-managed identity as a __Reader__ to the storage service using Azure RBAC. __Reader__ access allows the workspace to view the resource, but not make changes.
### Enable managed identity authentication for default storage accounts
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
src.run_config.source_directory_data_store = "workspaceblobstore"
## Troubleshooting
-* **Dataset initialization failed: Waiting for mount point to be ready has timed out**:
+**Dataset initialization failed: Waiting for mount point to be ready has timed out**:
* If you don't have any outbound [network security group](../virtual-network/network-security-groups-overview.md) rules and are using `azureml-sdk>=1.12.0`, update `azureml-dataset-runtime` and its dependencies to be the latest for the specific minor version, or if you are using it in a run, recreate your environment so it can have the latest patch with the fix.
* If you are using `azureml-sdk<1.12.0`, upgrade to the latest version.
* If you have outbound NSG rules, make sure there is an outbound rule that allows all traffic for the service tag `AzureResourceMonitor`.
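
For example, if your environment pins `azureml-sdk` 1.12.x, a command like `pip install --upgrade "azureml-dataset-runtime~=1.12.0"` pulls in the latest patch release for that minor version.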
-### Overloaded AzureFile storage
+### AzureFile storage
-If you receive an error `Unable to upload project files to working directory in AzureFile because the storage is overloaded`, apply following workarounds.
+**Unable to upload project files to working directory in AzureFile because the storage is overloaded**:
-If you are using file share for other workloads, such as data transfer, the recommendation is to use blobs so that file share is free to be used for submitting runs. You may also split the workload between two different workspaces.
+* If you are using the file share for other workloads, such as data transfer, the recommendation is to use blobs so that the file share is free to be used for submitting runs.
+
+* Another option is to split the workload between two different workspaces.
+
+**ConfigException: Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.**
+
+To ensure your storage access credentials are linked to the workspace and the associated file datastore, complete the following steps:
+
+1. Navigate to your workspace in the [Azure portal](https://ms.portal.azure.com).
+1. Select the storage link on the workspace **Overview** page.
+1. On the storage page, select **Access keys** on the left side menu.
+1. Copy the key.
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com) for your workspace.
+1. In the studio, select the file datastore for which you want to provide authentication credentials.
+1. Select **Update authentication**.
+1. Paste the key from the previous steps.
+1. Select **Save**.
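+
+Alternatively, here's a hedged sketch of the same fix with the `azureml-core` SDK, re-registering the file datastore with the account key; the datastore, share, and account names and the key are placeholders:
+
+```python
+# Sketch: re-register the workspace file datastore with an account key so
+# runs can authenticate to the Azure file share. Placeholder names throughout.
+from azureml.core import Workspace, Datastore
+
+ws = Workspace.from_config()
+Datastore.register_azure_file_share(
+    workspace=ws,
+    datastore_name="workspacefilestore",   # default file datastore name
+    file_share_name="<file share name>",
+    account_name="<storage account name>",
+    account_key="<storage account key>",
+    overwrite=True,  # replace the existing registration with new credentials
+)
+```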
### Passing data as input
-* **TypeError: FileNotFound: No such file or directory**: This error occurs if the file path you provide isn't where the file is located. You need to make sure the way you refer to the file is consistent with where you mounted your dataset on your compute target. To ensure a deterministic state, we recommend using the abstract path when mounting a dataset to a compute target. For example, in the following code we mount the dataset under the root of the filesystem of the compute target, `/tmp`.
+**TypeError: FileNotFound: No such file or directory**: This error occurs if the file path you provide isn't where the file is located. You need to make sure the way you refer to the file is consistent with where you mounted your dataset on your compute target. To ensure a deterministic state, we recommend using the abstract path when mounting a dataset to a compute target. For example, in the following code we mount the dataset under the root of the filesystem of the compute target, `/tmp`.
- ```python
- # Note the leading / in '/tmp/dataset'
- script_params = {
- '--data-folder': dset.as_named_input('dogscats_train').as_mount('/tmp/dataset'),
- }
- ```
-
- If you don't include the leading forward slash, '/', you'll need to prefix the working directory e.g. `/mnt/batch/.../tmp/dataset` on the compute target to indicate where you want the dataset to be mounted.
+```python
+# Note the leading / in '/tmp/dataset'
+script_params = {
+ '--data-folder': dset.as_named_input('dogscats_train').as_mount('/tmp/dataset'),
+}
+```
+
+If you don't include the leading forward slash, '/', you'll need to prefix the working directory e.g. `/mnt/batch/.../tmp/dataset` on the compute target to indicate where you want the dataset to be mounted.
## Next steps
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If the listed version is not a supported version:
1. Run `pip uninstall PyJWT` in the command shell and enter `y` for confirmation.
1. Install using `pip install 'PyJWT<2.0.0'`.
+
+## Data access
+
+For automated ML runs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
+
+Error message:
+`Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.`
+
## Databricks

See [How to configure an automated ML experiment with Databricks](how-to-configure-databricks-automl-environment.md#troubleshooting).

+

## Forecasting R2 score is always zero

This issue arises if the training data provided has time series that contain the same value for the last `n_cv_splits` + `forecasting_horizon` data points.
machine-learning Team Data Science Process For Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/team-data-science-process/team-data-science-process-for-devops.md
The following table provides level-based guidance to help complete the DevOps ob
| Objective | Topic | **Resource** | **Technologies** | **Level** | **Prerequisites** |
|--|--|--|--|--|--|
| Understand Advanced Analytics | The Team Data Science Process Lifecycle | [This technical walkthrough describes the Team Data Science Process](overview.md) | Data Science | Intermediate | General technology background, familiarity with data solutions, Familiarity with IT projects and solution implementation |
-| Understand the Microsoft Azure Platform for Advanced Analytics | Information Management |
-| [This reference gives and overview of Azure Data Factory to build pipelines for analytics data solutions](../../data-factory/v1/data-factory-introduction.md) | Microsoft Azure Data Factory | Experienced | General technology background, familiarity with data solutions, Familiarity with IT projects and solution implementation |
-| |
-| [This reference covers an overview of the Azure Data Catalog which you can use to document and manage metadata on your data sources](../../data-catalog/overview.md) | Microsoft Azure Data Catalog | Intermediate | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources |
-| |
-| [This reference covers an overview of the Azure Event Hubs system and how you and use it to ingest data into your solution](../../event-hubs/event-hubs-about.md) | Azure Event Hubs | Intermediate | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources, familiarity with the Internet of Things (IoT) terminology and use |
-| | Big Data Stores |
-| [This reference covers an overview of using the Azure Synapse Analytics to store and process large amounts of data](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) | Azure Synapse Analytics | Experienced | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources, familiarity with HDFS terminology and use |
+| Understand the Microsoft Azure Platform for Advanced Analytics | Information Management | [This reference gives and overview of Azure Data Factory to build pipelines for analytics data solutions](../../data-factory/v1/data-factory-introduction.md) | Microsoft Azure Data Factory | Experienced | General technology background, familiarity with data solutions, Familiarity with IT projects and solution implementation |
+| | | [This reference covers an overview of the Azure Data Catalog which you can use to document and manage metadata on your data sources](../../data-catalog/overview.md) | Microsoft Azure Data Catalog | Intermediate | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources |
+| | | [This reference covers an overview of the Azure Event Hubs system and how you and use it to ingest data into your solution](../../event-hubs/event-hubs-about.md) | Azure Event Hubs | Intermediate | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources, familiarity with the Internet of Things (IoT) terminology and use |
+| | Big Data Stores | [This reference covers an overview of using the Azure Synapse Analytics to store and process large amounts of data](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) | Azure Synapse Analytics | Experienced | General technology background, familiarity with data solutions, familiarity with Relational Database Management Systems (RDBMS) and NoSQL data sources, familiarity with HDFS terminology and use |
| | | [This reference covers an overview of using Azure Data Lake to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics](../../data-lake-store/data-lake-store-overview.md) | Azure Data Lake Store | Intermediate | General technology background, familiarity with data solutions, familiarity with NoSQL data sources, familiarity with HDFS | | | Machine learning and analytics | [This reference covers an introduction to machine learning, predictive analytics, and Artificial Intelligence systems](../classic/index.yml) | Azure Machine Learning | Intermediate | General technology background, familiarity with data solutions, familiarity with Data Science terms, familiarity with Machine Learning and artificial intelligence terms | | | | [This article provides an introduction to Azure HDInsight, a cloud distribution of the Hadoop technology stack. It also covers what a Hadoop cluster is and when you would use it](../../hdinsight/hadoop/apache-hadoop-introduction.md) | Azure HDInsight | Intermediate | General technology background, familiarity with data solutions, familiarity with NoSQL data sources |
The following table provides level-based guidance to help complete the DevOps ob
| | | [This Microsoft Project template provides a time, resources and goals tracking for an Advanced Analytics project](https://buckwoody.wordpress.com/2017/08/17/a-data-science-microsoft-project-template-you-can-use-in-your-solutions/) | Microsoft Project | Intermediate | Understand Project Management Fundamentals |
| | | [This Azure Data Catalog tutorial describes a system of registration and discovery for enterprise data assets](../../data-catalog/data-catalog-get-started.md) | Azure Data Catalog | Beginner | Familiarity with Data Sources and Structures |
| | | [This Microsoft Virtual Academy course explains how to set up Dev-Test with Visual Studio Codespace and Microsoft Azure](https://mva.microsoft.com/training-courses/dev-test-with-visual-studio-online-and-microsoft-azure-8420?l=P7Ot1TKz_2104984382) | Visual Studio Codespace | Experienced | Software Development, familiarity with Dev/Test environments |
-| | | [This Management Pack download for Microsoft System Center contains a Guidelines Document to assist in working with Azure assets](https://www.microsoft.com/download/details.aspx?id=38414) | System Center | Intermediate | Experience with System Center for IT Management |
| | | [This document is intended for developer and operations teams to understand the benefits of PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/dscforengineers) | PowerShell DSC | Intermediate | Experience with PowerShell coding, enterprise architectures, scripting |
| | Code | [This download also contains documentation on using Visual Studio Codespace Code for creating Data Science and AI applications](https://code.visualstudio.com/) | Visual Studio Codespace | Intermediate | Software Development |
| | | [This getting started site teaches you about DevOps and Visual Studio](https://www.visualstudio.com/devops/) | Visual Studio | Beginner | Software Development |
managed-instance-apache-cassandra Deploy Cluster Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/deploy-cluster-databricks.md
Previously updated : 03/02/2021 Last updated : 06/02/2021 # Quickstart: Deploy a Managed Apache Spark Cluster (Preview) with Azure Databricks
Follow these steps to create an Azure Databricks cluster in a Virtual Network th
1. In the **New cluster** pane, accept default values for all fields other than the following fields:

   * **Cluster Name** - Enter a name for the cluster.
- * **Databricks Runtime Version** - Select Scala 2.11 or lower version that is supported by the Cassandra Connector.
+ * **Databricks Runtime Version** - We recommend selecting Databricks runtime version 7.5 or higher, for Spark 3.x support.
- :::image type="content" source="./media/deploy-cluster-databricks/spark-cluster.png" alt-text="Select the Databricks runtime version and the Spark Cluster." border="true":::
+ :::image type="content" source="../cosmos-db/media/cassandra-migrate-cosmos-db-databricks/databricks-runtime.png" alt-text="Select the Databricks runtime version and the Spark Cluster." border="true":::
1. Expand **Advanced Options** and add the following configuration. Make sure to replace the node IPs and credentials:
Follow these steps to create an Azure Databricks cluster in a Virtual Network th
spark.cassandra.connection.ssl.enabled true ```
-1. From the **Libraries** tab, install the latest version of spark connector for Cassandra(*spark-cassandra-connector*) and restart the cluster:
+1. Add the Apache Spark Cassandra Connector library to your cluster to connect to both native and Azure Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates.
- :::image type="content" source="./media/deploy-cluster-databricks/connector.png" alt-text="Install the Cassandra connector." border="true":::
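+
+After the cluster restarts, you can sanity-check the connector with a minimal, hedged PySpark read; the keyspace and table names below are placeholders:
+
+```python
+# Sketch: read a table through the Cassandra connector in a Databricks
+# notebook, where the `spark` session is predefined. Placeholder names.
+df = (spark.read
+      .format("org.apache.spark.sql.cassandra")
+      .options(keyspace="<keyspace>", table="<table>")
+      .load())
+df.show(10)
+```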
## Clean up resources
managed-instance-apache-cassandra Dual Write Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/dual-write-proxy-migration.md
+
+ Title: Live migrate to Azure Managed Instance for Apache Cassandra using Apache Spark and a dual-write proxy.
+description: Learn how to migrate to Azure Managed Instance for Apache Cassandra using Apache Spark and a dual-write proxy.
++++ Last updated : 06/02/2021++
+# Live migration to Azure Managed Instance for Apache Cassandra using dual-write proxy
+
+> [!IMPORTANT]
+> Azure Managed Instance for Apache Cassandra is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Where possible, we recommend using Apache Cassandra's native capability to migrate data from your existing cluster into Azure Managed Instance for Apache Cassandra by configuring a [hybrid cluster](configure-hybrid-cluster.md). This approach uses Apache Cassandra's gossip protocol to replicate data from your source datacenter into your new managed instance datacenter in a seamless way. However, there may be some scenarios where your source database version is not compatible, or a hybrid cluster setup is otherwise not feasible. This article describes how to migrate data to Azure Managed Instance for Apache Cassandra in a live fashion using a [dual-write proxy](https://github.com/Azure-Samples/cassandra-proxy) and Apache Spark. The benefits of this approach are:
+
+- **Minimal application changes** - the proxy can accept connections from your application code with little or no configuration changes. It routes all requests to your source database and asynchronously routes writes to a secondary target.
+- **Client wire protocol dependent** - because this approach doesn't depend on backend resources or internal protocols, it can be used with any source or target Cassandra system that implements the Apache Cassandra wire protocol.
+
+The image below illustrates the approach.
+++
+## Prerequisites
+
+* Provision an Azure Managed Instance for Apache Cassandra cluster using [Azure portal](create-cluster-portal.md) or [Azure CLI](create-cluster-cli.md) and ensure you can [connect to your cluster with CQLSH](/azure/managed-instance-apache-cassandra/create-cluster-portal#connecting-to-your-cluster).
+
+* [Provision an Azure Databricks account inside your Managed Cassandra VNet](deploy-cluster-databricks.md). Ensure it also has network access to your source Cassandra cluster. We will create a Spark cluster in this account for the historic data load.
+
+* Ensure you've already migrated the keyspace/table schema from your source Cassandra database to your target Cassandra Managed Instance database (see the sketch after this list).
++
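+A minimal sketch of that schema migration using `cqlsh`, assuming a hypothetical keyspace named `mykeyspace` (the keyspace name and credentials here are illustrative):
+
+```bash
+# Export the keyspace definition (tables, types, indexes) from the source cluster.
+cqlsh <source-host> -e "DESCRIBE KEYSPACE mykeyspace" > mykeyspace-schema.cql
+
+# Review the file and adjust replication settings for the target datacenter if needed,
+# then apply it to the managed instance over SSL.
+cqlsh <target-host> -u <username> -p <password> --ssl -f mykeyspace-schema.cql
+```
+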
+## Provision a Spark cluster
+
+We recommend selecting Azure Databricks runtime version 7.5, which supports Spark 3.0.
++
+## Add Spark dependencies
+
+You need to add the Apache Spark Cassandra Connector library to your cluster to connect to both native and Azure Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates.
++
+Select **Install**, and then restart the cluster when installation is complete.
+
+> [!NOTE]
+> Make sure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
+
+## Install the dual-write proxy
+
+For optimal performance during dual writes, we recommend installing the proxy on all nodes in your source Cassandra cluster.
+
+```bash
+#assuming you do not have git already installed
+sudo apt-get install git
+
+#assuming you do not have maven already installed
+sudo apt install maven
+
+#clone repo for dual-write proxy
+git clone https://github.com/Azure-Samples/cassandra-proxy.git
+
+#change directory
+cd cassandra-proxy
+
+#compile the proxy
+mvn package
+```
+
+## Start the dual-write proxy
+
+At minimum, run the following command to start the proxy on each node. Replace `<target-server>` with the IP or server address of one of the nodes in the target cluster. Replace `<path to JKS file>` with the path to a local .jks file, and `<keystore password>` with the corresponding password:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password>
+```
+For SSL, you can either use an existing keystore (for example, the one used by your source cluster) or create a self-signed certificate by using keytool:
+
+```bash
+keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048
+```
+
+> [!NOTE]
+> Make sure your client application uses the same keystore and password as the one used for the dual-write proxy when building SSL connections to the database via the proxy.
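+
+For example, if your client is `cqlsh`, one way to trust the proxy's self-signed certificate is to export it from the keystore to PEM format. This is a minimal sketch that assumes the `selfsigned` alias and keystore created above:
+
+```bash
+# Export the certificate from the JKS keystore to PEM format for cqlsh.
+keytool -exportcert -alias selfsigned -keystore keystore.jks -storepass password -rfc -file proxy-cert.pem
+
+# Connect through the proxy (default port 29042), trusting the exported certificate.
+SSL_CERTFILE=proxy-cert.pem cqlsh <proxy-host> 29042 --ssl
+```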
+
+Starting the proxy in this way assumes the following are true:
+
+- Source and target endpoints have the same username and password.
+- Source and target endpoints implement SSL.
+
+By default, the source credentials will be passed through from your client app, and used by the proxy for making connections to the source and target clusters. If necessary, you can specify the username and password of the target Cassandra endpoint separately when starting the proxy:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password>
+```
+
+The default source and target ports, when not specified, will be `9042`. If either the target or source Cassandra endpoints run on a different port, you can use `--source-port` or `--target-port` to specify a different port number.
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password>
+```
+
+If the source or target endpoints don't implement SSL, you can disable it by passing the `--disable-source-tls` or `--disable-target-tls` flags:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> --disable-source-tls true --disable-target-tls true
+```
+
+There may be circumstances in which you don't want to install the proxy on the cluster nodes themselves and prefer to install it on a separate machine. In that scenario, you need to specify the IP address of the `<source-server>`:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar <source-server> <destination-server>
+```
+
+> [!NOTE]
+> If you do not install and run the proxy on all nodes in a native Apache Cassandra cluster, application performance will be affected because the client driver will not be able to open connections to all nodes within the cluster.
+
+By default, the proxy listens on port 29042, but you can change this port. You might want to do so to eliminate application-level code changes: have the source Cassandra server run on a different port, and run the proxy on the standard Cassandra port:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --proxy-port 9042
+```
+
+> [!NOTE]
+> Installing the proxy on cluster nodes doesn't require restarting the nodes. However, if you have many application clients and prefer to run the proxy on the standard Cassandra port 9042 to eliminate application-level code changes, you need to restart your cluster.
+
+The proxy has some functionality to force protocol versions, which may be necessary if the source endpoint is more advanced than the target. In that case, you can specify `--protocol-version` and `--cql-version`:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --protocol-version 4 --cql-version 3.11
+```
+
+Once you have the dual-write proxy up and running, change the port on your application client and restart it (or change the Cassandra port and restart the cluster if you have chosen that approach). The proxy then starts forwarding writes to the target endpoint. You can learn about the [monitoring and metrics](https://github.com/Azure-Samples/cassandra-proxy#monitoring) available in the proxy tool.
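+
+As a quick smoke test, you can write a row through the proxy and confirm it appears in both clusters. The keyspace and table here are hypothetical; add `--ssl` and credentials as appropriate for your setup:
+
+```bash
+# Write through the proxy (listening on port 29042 by default).
+cqlsh <proxy-host> 29042 -e "INSERT INTO mykeyspace.mytable (id, value) VALUES (uuid(), 'dual-write smoke test');"
+
+# The same row should now be readable from both the source and the target cluster.
+```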
++
+## Run the historic data load
+
+To load the data, create a Scala notebook in your Databricks account. Replace the source and target Cassandra configurations with the corresponding credentials, keyspaces, and tables. Add more variables to the sample below for each table as required, and then run it. After your application starts sending requests to the dual-write proxy, you're ready to migrate historic data.
+
+```scala
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql._
+import org.apache.spark.SparkContext
+
+// source cassandra configs
+val sourceCassandra = Map(
+ "spark.cassandra.connection.host" -> "<Source Cassandra Host>",
+ "spark.cassandra.connection.port" -> "9042",
+ "spark.cassandra.auth.username" -> "<USERNAME>",
+ "spark.cassandra.auth.password" -> "<PASSWORD>",
+ "spark.cassandra.connection.ssl.enabled" -> "true",
+ "keyspace" -> "<KEYSPACE>",
+ "table" -> "<TABLE>"
+)
+
+//target cassandra configs
+val targetCassandra = Map(
+ "spark.cassandra.connection.host" -> "<Source Cassandra Host>",
+ "spark.cassandra.connection.port" -> "9042",
+ "spark.cassandra.auth.username" -> "<USERNAME>",
+ "spark.cassandra.auth.password" -> "<PASSWORD>",
+ "spark.cassandra.connection.ssl.enabled" -> "true",
+ "keyspace" -> "<KEYSPACE>",
+ "table" -> "<TABLE>",
+ //throughput related settings below - tweak these depending on data volumes.
+ "spark.cassandra.output.batch.size.rows"-> "1",
+ "spark.cassandra.output.concurrent.writes" -> "1000",
+ "spark.cassandra.connection.remoteConnectionsPerExecutor" -> "1",
+ "spark.cassandra.concurrent.reads" -> "512",
+ "spark.cassandra.output.batch.grouping.buffer.size" -> "1000",
+ "spark.cassandra.connection.keep_alive_ms" -> "600000000"
+)
+
+//set timestamp to ensure it is before read job starts
+val timestamp: Long = System.currentTimeMillis / 1000
+
+//Read from source Cassandra
+val DFfromSourceCassandra = sqlContext
+ .read
+ .format("org.apache.spark.sql.cassandra")
+ .options(sourceCassandra)
+ .load
+
+//Write to target Cassandra
+DFfromSourceCassandra
+ .write
+ .format("org.apache.spark.sql.cassandra")
+ .options(targetCassandra)
+ .option("writetime", timestamp)
+ .mode(SaveMode.Append)
+ .save
+```
+
+> [!NOTE]
+> In the preceding Scala sample, `timestamp` is set to the current time before all the data in the source table is read, and `writetime` is then set to this backdated timestamp. This ensures that records written from the historic data load to the target endpoint can't overwrite updates that arrive with a later timestamp from the dual-write proxy while the historic data is being read. If for any reason you need to preserve *exact* timestamps, take a historic data migration approach that preserves timestamps, such as [this](https://github.com/scylladb/scylla-migrator) sample.
+
+## Validation
+
+Once the historic data load is complete, your databases should be in sync and ready for cutover. However, we recommend that you validate the source and target to ensure that request results match before finally cutting over.
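+
+A lightweight spot check is to compare row counts on both sides. Note that `COUNT(*)` can time out on large tables, so treat this as a sanity check rather than a full row-by-row comparison; the keyspace and table names are hypothetical:
+
+```bash
+cqlsh <source-host> -e "SELECT COUNT(*) FROM mykeyspace.mytable;"
+cqlsh <target-host> -u <username> -p <password> --ssl -e "SELECT COUNT(*) FROM mykeyspace.mytable;"
+```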
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
managed-instance-apache-cassandra Spark Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/spark-migration.md
+
+ Title: Migrate to Azure Managed Instance for Apache Cassandra using Apache Spark
+description: Learn how to migrate to Azure Managed Instance for Apache Cassandra using Apache Spark.
++++ Last updated : 06/02/2021++
+# Migrate to Azure Managed Instance for Apache Cassandra using Apache Spark
+
+> [!IMPORTANT]
+> Azure Managed Instance for Apache Cassandra is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Where possible, we recommend using Apache Cassandra's native replication to migrate data from your existing cluster into Azure Managed Instance for Apache Cassandra by configuring a [hybrid cluster](configure-hybrid-cluster.md). This approach uses Apache Cassandra's gossip protocol to replicate data from your source datacenter into your new managed instance datacenter. However, there may be some scenarios where your source database version isn't compatible, or a hybrid cluster setup is otherwise not feasible.
+
+This article describes how to migrate data to Azure Managed Instance for Apache Cassandra in an offline fashion by using the Cassandra Spark connector and Azure Databricks for Apache Spark.
+
+## Prerequisites
+
+* Provision an Azure Managed Instance for Apache Cassandra cluster using [Azure portal](create-cluster-portal.md) or [Azure CLI](create-cluster-cli.md) and ensure you can [connect to your cluster with CQLSH](/azure/managed-instance-apache-cassandra/create-cluster-portal#connecting-to-your-cluster).
+
+* [Provision an Azure Databricks account inside your Managed Cassandra VNet](deploy-cluster-databricks.md). Ensure it also has network access to your source Cassandra cluster.
+
+* Ensure you've already migrated the keyspace/table schema from your source Cassandra database to your target Cassandra Managed Instance database.
++
+## Provision an Azure Databricks cluster
+
+We recommend selecting Databricks runtime version 7.5, which supports Spark 3.0.
++
+## Add dependencies
+
+Add the Apache Spark Cassandra Connector library to your cluster to connect to both native and Azure Cosmos DB Cassandra endpoints. In your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates.
++
+Select **Install**, and then restart the cluster when installation is complete.
+
+> [!NOTE]
+> Make sure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
+
+## Create Scala Notebook for migration
+
+Create a Scala Notebook in Databricks. Replace your source and target Cassandra configurations with the corresponding credentials, and source and target keyspaces and tables. Then run the following code:
+
+```scala
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql._
+import org.apache.spark.SparkContext
+
+// source cassandra configs
+val sourceCassandra = Map(
+ "spark.cassandra.connection.host" -> "<Source Cassandra Host>",
+ "spark.cassandra.connection.port" -> "9042",
+ "spark.cassandra.auth.username" -> "<USERNAME>",
+ "spark.cassandra.auth.password" -> "<PASSWORD>",
+ "spark.cassandra.connection.ssl.enabled" -> "false",
+ "keyspace" -> "<KEYSPACE>",
+ "table" -> "<TABLE>"
+)
+
+//target cassandra configs
+val targetCassandra = Map(
+ "spark.cassandra.connection.host" -> "<Source Cassandra Host>",
+ "spark.cassandra.connection.port" -> "9042",
+ "spark.cassandra.auth.username" -> "<USERNAME>",
+ "spark.cassandra.auth.password" -> "<PASSWORD>",
+ "spark.cassandra.connection.ssl.enabled" -> "true",
+ "keyspace" -> "<KEYSPACE>",
+ "table" -> "<TABLE>",
+ //throughput related settings below - tweak these depending on data volumes.
+ "spark.cassandra.output.batch.size.rows"-> "1",
+ "spark.cassandra.output.concurrent.writes" -> "1000",
+ "spark.cassandra.connection.remoteConnectionsPerExecutor" -> "10",
+ "spark.cassandra.concurrent.reads" -> "512",
+ "spark.cassandra.output.batch.grouping.buffer.size" -> "1000",
+ "spark.cassandra.connection.keep_alive_ms" -> "600000000"
+)
+
+//Read from source Cassandra
+val DFfromSourceCassandra = sqlContext
+ .read
+ .format("org.apache.spark.sql.cassandra")
+ .options(sourceCassandra)
+ .load
+
+//Write to target Cassandra
+DFfromSourceCassandra
+ .write
+ .format("org.apache.spark.sql.cassandra")
+ .options(targetCassandra)
+ .mode(SaveMode.Append) // only required for Spark 3.x
+ .save
+```
+
+> [!NOTE]
+> If you need to preserve or backdate the `writetime` of each row, refer to the [live migration](dual-write-proxy-migration.md) article.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
marketplace Analytics Custom Query Specification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-custom-query-specification.md
Each part is described below.
#### SELECT
-This part of the query specifies the columns that will get exported. The columns that can be selected are the fields listed in `selectableColumns` and `availableMetrics` sections of a dataset. The final exported rows will always contain distinct values in the selected columns. For example, there will be no duplicate rows in the exported file. Metrics will be calculated for every distinct combination of the selected columns.
+This part of the query specifies the columns that will be exported. The columns that can be selected are the fields listed in the `selectableColumns` and `availableMetrics` sections of a dataset. If a metric column is included in the selected field list, metrics are calculated for every distinct combination of the non-metric columns.
**Example**: - **SELECT** `OfferName`, `NormalizedUsage`
marketplace Analytics Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-faq.md
Why you might be seeing this message:
## Next steps -- For an overview of analytics reports available in Partner Center, see [Access analytics reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).
+- For an overview of analytics reports available in Partner Center, see [Access analytics reports for the commercial marketplace in Partner Center](analytics.md).
- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](orders-dashboard.md). - For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md). - For detailed information about your customers, including growth trends, see [Customers dashboard in commercial marketplace analytics](customer-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & reviews analytics dashboard in Partner Center](ratings-reviews.md).
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics.md
+
+ Title: Analytics for the Microsoft commercial marketplace in Partner Center
+description: Access analytic reports to monitor sales, evaluate performance, and optimize your marketplace offers in Partner Center (Azure Marketplace).
+++++ Last updated : 06/01/2021++
+# Access analytic reports for the commercial marketplace in Partner Center
+
+Learn how to access analytic reports in Microsoft Partner Center to monitor sales, evaluate performance, and optimize your offers in the marketplace. As a partner, you can monitor your offer listings using the data visualization and insight graphs supported by Partner Center and find ways to maximize your sales. The improved analytics tools enable you to act on performance results and maintain better relationships with your customers and resellers.
+
+## Partner Center analytics tools
+
+To access the Partner Center analytics tools, open the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** dashboard under Commercial Marketplace.
+
+>[!NOTE]
+> For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
+
+## Next steps
+
+- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](summary-dashboard.md).
+- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](orders-dashboard.md).
+- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For detailed information about your customers, including growth trends, see [Customer dashboard in commercial marketplace analytics](customer-dashboard.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings and reviews dashboard in commercial marketplace analytics](ratings-reviews.md).
+- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
marketplace Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/anomaly-detection.md
After you mark an overage usage as an anomaly or acknowledge a model that flagge
## See also - [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md)-- [Managed application metered billing](./partner-center-portal/azure-app-metered-billing.md)
+- [Managed application metered billing](marketplace-metering-service-apis.md)
- [Anomaly detection service for metered billing](./partner-center-portal/anomaly-detection-service-for-metered-billing.md)
marketplace Azure Ad Saas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-ad-saas.md
This table describes the details about the subscription management process steps
| Process step | Publisher action | Recommended or required for publishers | | | - | - | | 5. The publisher manages the subscription to the SaaS application through the SaaS fulfillment API. | Handle subscription changes and other management tasks through the [SaaS fulfillment APIs](./partner-center-portal/pc-saas-fulfillment-api-v2.md).<br><br>This step requires an access token as described in process step 3. | Required |
-| 6. When using metered pricing, the publisher emits usage events to the metering service API. | If your SaaS app features usage-based billing, make usage notifications through the [Marketplace metering service APIs](./partner-center-portal/marketplace-metering-service-apis.md).<br><br>This step requires an access token as described in Step 3. | Required for metering |
+| 6. When using metered pricing, the publisher emits usage events to the metering service API. | If your SaaS app features usage-based billing, make usage notifications through the [Marketplace metering service APIs](marketplace-metering-service-apis.md).<br><br>This step requires an access token as described in Step 3. | Required for metering |
|||| ## Process steps for user management
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-apis.md
+
+ Title: Partner Center submission API to onboard Azure apps in the Microsoft commercial marketplace
+description: Learn the prerequisites to use the Partner Center submission API for Azure apps in Azure Marketplace.
+++++ Last updated : 06/01/2021++
+# Partner Center submission API to onboard Azure apps in Partner Center
+
+Use the *Partner Center submission API* to programmatically query, create submissions for, and publish Azure offers. This API is useful if your account manages many offers and you want to automate and optimize the submission process for these offers.
+
+## API prerequisites
+
+There are a few programmatic assets that you need in order to use the Partner Center submission API for Azure products:
+
+- An Azure Active Directory (Azure AD) application.
+- An Azure AD access token.
+
+### Step 1: Complete prerequisites for using the Partner Center submission API
+
+Before you start writing code to call the Partner Center submission API, make sure that you have completed the following prerequisites.
+
+- You (or your organization) must have an Azure AD directory, and you must have [Global administrator](../active-directory/roles/permissions-reference.md) permission for the directory. If you already use Microsoft 365 or other business services from Microsoft, you already have an Azure AD directory. Otherwise, you can [create a new Azure AD in Partner Center](/windows/uwp/publish/associate-azure-ad-with-partner-center#create-a-brand-new-azure-ad-to-associate-with-your-partner-center-account) at no additional charge.
+
+- You must [associate an Azure AD application with your Partner Center account](/windows/uwp/monetize/create-and-manage-submissions-using-windows-store-services#associate-an-azure-ad-application-with-your-windows-partner-center-account) and obtain your tenant ID, client ID, and key. You need these values to obtain an Azure AD access token, which you will use in calls to the Partner Center submission API.
+
+#### How to associate an Azure AD application with your Partner Center account
+
+To use the Partner Center submission API, you must associate an Azure AD application with your Partner Center account, retrieve the tenant ID and client ID for the application, and generate a key. The Azure AD application represents the app or service from which you want to call the Partner Center submission API. You need the tenant ID, client ID, and key to obtain an Azure AD access token that you pass to the API.
+
+>[!Note]
+>You only need to perform this task once. After you have the tenant ID, client ID and key, you can reuse them any time you need to create a new Azure AD access token.
+
+1. In Partner Center, [associate your organization's Partner Center account with your organization's Azure AD directory](/windows/uwp/publish/associate-azure-ad-with-partner-center).
+1. Next, from the **Users** page in the **Account settings** section of Partner Center, [add the Azure AD application](/windows/uwp/publish/add-users-groups-and-azure-ad-applications#add-azure-ad-applications-to-your-partner-center-account) that represents the app or service that you will use to access submissions for your Partner Center account. Make sure you assign this application the **Manager** role. If the application doesn't exist yet in your Azure AD directory, you can [create a new Azure AD application in Partner Center](/windows/uwp/publish/add-users-groups-and-azure-ad-applications#create-a-new-azure-ad-application-account-in-your-organizations-directory-and-add-it-to-your-partner-center-account).
+1. Return to the **Users** page, click the name of your Azure AD application to go to the application settings, and copy down the **Tenant ID** and **Client ID** values.
+1. Click **Add new key**. On the following screen, copy down the **Key** value. You won't be able to access this info again after you leave this page. For more information, see [Manage keys for an Azure AD application](/windows/uwp/publish/add-users-groups-and-azure-ad-applications#manage-keys).
+
+### Step 2: Obtain an Azure AD access token
+
+Before you call any of the methods in the Partner Center submission API, you must first obtain an Azure AD access token that you pass to the **Authorization** header of each method in the API. After you obtain an access token, you have 60 minutes to use it before it expires. After the token expires, you can refresh the token so you can continue to use it in future calls to the API.
+
+To obtain the access token, follow the instructions in [Service to Service Calls Using Client Credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) to send an `HTTP POST` to the `https://login.microsoftonline.com/<tenant_id>/oauth2/token` endpoint. Here is a sample request:
+
+```http
+POST https://login.microsoftonline.com/<tenant_id>/oauth2/token HTTP/1.1
+Host: login.microsoftonline.com
+Content-Type: application/x-www-form-urlencoded; charset=utf-8
+
+grant_type=client_credentials
+&client_id=<your_client_id>
+&client_secret=<your_client_secret>
+&resource=https://api.partner.microsoft.com
+```
+
+For the *tenant_id* value in the `POST URI` and the *client_id* and *client_secret* parameters, specify the tenant ID, client ID, and key for your application that you retrieved from Partner Center in the previous section. For the *resource* parameter, you must specify `https://api.partner.microsoft.com`.
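+
+The equivalent request issued with curl might look like the following sketch; the placeholder values are the ones described above, and the access token is returned in the `access_token` field of the JSON response:
+
+```bash
+curl -s -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/token" \
+  -H "Content-Type: application/x-www-form-urlencoded" \
+  -d "grant_type=client_credentials" \
+  -d "client_id=<your_client_id>" \
+  -d "client_secret=<your_client_secret>" \
+  -d "resource=https://api.partner.microsoft.com"
+```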
+
+### Step 3: Use the Partner Center submission API
+
+After you have an Azure AD access token, you can call methods in the Partner Center submission API. To create or update submissions, you typically call multiple methods in a specific order. For information about each scenario and the syntax of each method, see the ingestion API Swagger documentation:
+
+https://apidocs.microsoft.com/services/partneringestion/
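+
+Each call passes the token in the **Authorization** header. The sketch below lists products; the `/products` path is illustrative, so confirm the exact routes in the Swagger documentation above:
+
+```bash
+curl -s "https://api.partner.microsoft.com/v1.0/ingestion/products" \
+  -H "Authorization: Bearer <access_token>" \
+  -H "Content-Type: application/json"
+```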
+
+## Next steps
+
+* [Create an Azure Container technical asset](azure-container-technical-assets.md)
+* [Create an Azure Container offer](azure-container-offer-setup.md)
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-managed.md
+
+ Title: Configure a managed application plan
+description: Configure a managed application plan for your Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Configure a managed application plan
+
+This article applies only to managed application plans for an Azure application offer. If you're configuring a solution template plan, go to [Configure a solution template plan](azure-app-solution.md).
+
+## Re-use technical configuration (optional)
+
+If you've created more than one plan of the same type within this offer and the technical configuration is identical between them, you can reuse the technical configuration from another plan. This setting cannot be changed after this plan is published.
+
+To re-use a technical configuration:
+
+1. Select the **This plan reuses the technical configuration from another plan of the same type** check box.
+1. In the list that appears, select the base plan you want.
+
+> [!NOTE]
+> If a plan is currently reusing or has reused the technical configuration from another plan of the same type, go to that base plan to view history of previously published packages.
+
+## Define markets, pricing, and availability
+
+Every plan must be available in at least one market. On the **Pricing and availability** tab, you can configure the markets this plan will be available in, the price, and whether to make the plan visible to everyone or only to specific customers (also called a private plan).
+
+1. Under **Markets**, select the **Edit markets** link.
+1. In the dialog box that appears, select the market locations where you want to make your plan available. You must select a minimum of one and maximum of 141 markets.
+
+ > [!NOTE]
+ > This dialog box includes a search box and an option to filter on only "Tax Remitted" countries, in which Microsoft remits sales and use tax on your behalf.
+
+1. Select **Save** to close the dialog box.
+
+## Define pricing
+
+In the **Price** box, provide the per-month price for this plan. This price is in addition to any Azure infrastructure or usage-based costs incurred by the resources deployed by this solution.
+
+In addition to the per-month price, you can also set prices for consumption of non-standard units using [metered billing](partner-center-portal/azure-app-metered-billing.md). You may set the per-month price to zero and charge exclusively using metered billing if you like.
+
+Prices set in USD (United States dollar) are converted into the local currency of all selected markets by using the current exchange rates when saved. Validate these prices before publishing by exporting the pricing spreadsheet and reviewing the price in each market. If you would like to set custom prices in an individual market, modify and import the pricing spreadsheet.
+
+### Add a custom meter dimension (optional)
+
+1. Under **Marketplace Metering Service dimensions**, select the **Add a Custom Meter Dimension (Max 18)** link.
+1. In the **ID** box, enter the immutable identifier to reference when emitting usage events.
+1. In the **Display Name** box, enter the display name associated with the dimension. For example, "text messages sent".
+1. In the **Unit of Measure** box, enter the description of the billing unit. For example, "per text message" or "per 100 emails".
+1. In the **Price per unit in USD** box, enter the price for one unit of the dimension.
+1. In the **Monthly quantity included in base** box, enter the quantity (as an integer) of the dimension that's included each month for customers who pay the recurring monthly fee. To set an unlimited quantity, select the check box instead.
+1. To add another custom meter dimension, repeat steps 1 through 6.
+
+### Set custom prices (optional)
+
+Prices set in USD (United States dollar) are converted into the local currency of all selected markets by using the current exchange rates when saved. Validate these prices before publishing by exporting the pricing spreadsheet and reviewing the price in each market. If you would like to set custom prices in an individual market, modify and import the pricing spreadsheet.
+
+Review your prices carefully before publishing, as there are some restrictions on what can change after a plan is published.
+
+> [!NOTE]
+> After a price for a market in your plan is published, it can't be changed later.
+
+To set custom prices in an individual market, export, modify, and then import the pricing spreadsheet. You're responsible for validating this pricing and owning these settings. For detailed information, see [Custom prices](plans-pricing.md#custom-prices).
+
+1. You must first save your pricing changes to enable export of pricing data. Near the bottom of the **Pricing and availability** tab, select **Save draft**.
+1. Under **Pricing**, select the **Export pricing data** link.
+1. Open the exportedPrice.xlsx file in Microsoft Excel.
+1. In the spreadsheet, make the updates you want to your pricing information and then save the file.
+
+ You may need to enable editing in Excel before you can update the file.
+
+1. On the **Pricing and availability** tab, under **Pricing**, select the **Import pricing data** link.
+1. In the dialog box that appears, click **Yes**.
+1. Select the exportedPrice.xlsx file you updated, and then click **Open**.
+
+## Choose who can see your plan
+
+You can configure each plan to be visible to everyone or to only a specific audience. You grant access to a private audience by using Azure subscription IDs, with the option to include a description of each subscription ID you assign. You can add a maximum of 10 subscription IDs manually, or up to 10,000 subscription IDs by using a .CSV file. Azure subscription IDs are represented as GUIDs, and letters must be lowercase.
+
+> [!NOTE]
+> If you publish a private plan, you can change its visibility to public later. However, once you publish a public plan, you cannot change its visibility to private.
+
+Under **Plan visibility**, do one of the following:
+
+- To make the plan public, select the **Public** option button (also known as a _radio button_).
+- To make the plan private, select the **Private** option button and then add the Azure subscription IDs manually or with a CSV file.
+
+> [!NOTE]
+> A private or restricted audience is different from the preview audience you defined on the **Preview** tab. A preview audience can access your offer before it's published live in the marketplace. While the private audience choice applies only to a specific plan, the preview audience can view all plans (private or not) for validation purposes.
+
+### Manually add Azure subscription IDs for a private plan
+
+1. Under **Plan visibility**, select the **Private** option button.
+1. In the **Azure Subscription ID** box that appears, enter the Azure subscription ID of the audience you want to grant access to this private plan. A minimum of one subscription ID is required.
+1. (Optional) Enter a description of this audience in the **Description** box.
+1. To add another subscription ID, select the **Add ID (Max 10)** link and repeat steps 2 and 3.
+
+### Use a .CSV file to add Azure subscription IDs for a private plan
+
+1. Under **Plan visibility**, select the **Private** option button.
+1. Select the **Export Audience (csv)** link.
+1. Open the .CSV file and add the Azure subscription IDs you want to grant access to the private offer to the **ID** column (see the sample file after these steps).
+1. Optionally, enter a description for each audience in the **Description** column.
+1. Add "SubscriptionId" in the **Type** column, for each row with a subscription ID.
+1. Save the .CSV file.
+1. On the **Availability** tab, under **Plan visibility**, select the **Import Audience (csv)** link.
+1. In the dialog box that appears, select **Yes**.
+1. Select the .CSV file and then select **Open**. A message appears indicating that the .CSV file was successfully imported.
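+
+The finished file from steps 3 through 5 might look like the following sketch; the GUIDs and descriptions are hypothetical, and the column order should follow the exported template:
+
+```csv
+ID,Description,Type
+aaaabbbb-cccc-dddd-eeee-ffff00001111,Contoso subscription,SubscriptionId
+22223333-4444-5555-6666-777788889999,Fabrikam test subscription,SubscriptionId
+```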
+
+## Define the technical configuration
+
+On the **Technical configuration** tab, you'll upload the deployment package that lets customers deploy your plan and provide a version number for the package. You'll also provide other technical information.
+
+> [!NOTE]
+> This tab won't be visible if you chose to re-use packages from another plan on the **Plan setup** tab. If so, go to [View your plans](#view-your-plans).
+
+### Assign a version number for the package
+
+In the **Version** box provide the current version of the technical configuration. Increment this version each time you publish a change to this page. The version number must be in the format: integer.integer.integer. For example, `1.0.2`.
+
+### Upload a package file
+
+Under **Package file (.zip)**, drag your package file to the gray box or select the **browse for your file(s)** link.
+
+> [!NOTE]
+> If you have an issue uploading files, make sure your local network does not block the `https://upload.xboxlive.com` service used by Partner Center.
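+
+One quick way to check whether that service is reachable from your network is a sketch like the following; any HTTP status code in the response means the endpoint isn't being blocked:
+
+```bash
+curl -s -o /dev/null -w "%{http_code}\n" https://upload.xboxlive.com
+```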
+
+#### Previously published packages
+
+The **Previously published packages** sub-tab enables you to view all published versions of your technical configuration.
+
+### Enable just-in-time (JIT) access (optional)
+
+To enable JIT access for this plan, select the **Enable just-in-time (JIT) access** check box. To require that consumers of your managed application grant your account permanent access, leave this option unchecked. To learn more about this option, see [Just in time (JIT) access](plan-azure-app-managed-app.md#just-in-time-jit-access).
+
+### Select a deployment mode
+
+Select either the **Complete** or **Incremental** deployment mode.
+
+- In **Complete** mode, a redeployment of the application by the customer will result in removal of resources in the managed resource group if the resources are not defined in the [mainTemplate.json](../azure-resource-manager/managed-applications/publish-service-catalog-app.md?tabs=azure-powershell#create-the-arm-template).
+- In **Incremental** mode, a redeployment of the application leaves existing resources unchanged.
+
+To learn more about deployment modes, see [Azure Resource Manager deployment modes](../azure-resource-manager/templates/deployment-modes.md).
+
+### Provide a notification endpoint URL
+
+In the **Notification Endpoint URL** box, provide an HTTPS Webhook endpoint to receive notifications about all CRUD operations on managed application instances of this plan version.
+
+### Customize allowed customer actions (optional)
+
+1. To specify which actions customers can perform on the managed resources in addition to the "`*/read`" action that is available by default, select the **Customize allowed customer actions** box.
+1. In the boxes that appear, provide the additional control actions and allowed data actions you want to enable your customer to perform, separated by semicolons. For example, to permit consumers to restart virtual machines, add `Microsoft.Compute/virtualMachines/restart/action` to the **Allowed control actions** box.
+
+### Choose who can manage the application
+
+Indicate who should have management access to this managed application in each selected Azure region: _Global Azure_ and _Azure Government Cloud_. You will use Azure AD identities to identify the users, groups, or applications that you want to grant permission to the managed resource group. For more information, see [Plan an Azure managed application for an Azure Application offer](plan-azure-application-offer.md).
+
+Complete the following steps for Global Azure and Azure Government Cloud, as applicable.
+
+1. In the **Azure Active Directory Tenant ID** box, enter the Azure AD Tenant ID (also known as directory ID) containing the identities of the users, groups, or applications you want to grant permissions to.
+1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers) on the Azure portal.
+1. From the **Role definition** list, select an Azure AD built-in role. The role you select describes the permissions the principal will have on the resources in the customer subscription.
+1. To add another authorization, select the **Add authorization (max 100)** link, and repeat steps 1 through 3.
+
+### Policy settings (optional)
+
+You can configure a maximum of five policies, and only one instance of each policy option. Some policies require additional parameters.
+
+1. Under **Policy settings**, select the **+ Add policy (max 5)** link.
+1. In the **Name** box, enter the policy assignment name (limited to 50 characters).
+1. From the **Policies** list box, select the Azure policy that will be applied to resources created by the managed application in the customer subscription.
+1. In the **Policy parameters** box, provide the parameter on which the auditing and diagnostic settings policies should be applied.
+1. From the **Policy SKU** list box, select the policy SKU type.
+
+ > [!NOTE]
+ > The _Standard policy_ SKU is required for audit policies.
+
+## View your plans
+
+- Select **Save draft**, and then in the upper left of the page, select **Plan overview** to return to the **Plan overview** page.
+
+After you create one or more plans, you'll see your plan name, plan ID, plan type, availability (Public or Private), current publishing status, and any available actions on the **Plan overview** tab.
+
+The actions that are available in the **Action** column of the **Plan overview** tab vary depending on the status of your plan, and may include the following:
+
+- If the plan status is **Draft**, the link in the **Action** column will say **Delete draft**.
+- If the plan status is **Live**, the link in the **Action** column will be either **Stop selling plan** or **Sync private audience**. The **Sync private audience** link will publish only the changes to your private audiences, without publishing any other updates you might have made to the offer.
+- To create another plan for this offer, at the top of the **Plan overview** tab, select **+ Create new plan**. Then repeat the steps in [How to create plans for your Azure application offer](azure-app-plans.md). Otherwise, if you're done creating plans, go to the next section: Next steps.
+
+## Next steps
+
+- [Test and publish Azure application offer](azure-app-test-publish.md).
+- [Sell an Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs.
marketplace Azure App Marketing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-marketing.md
+
+ Title: Sell your Azure application offer
+description: Learn about the co-sell with Microsoft and resell through Cloud Solution Providers (CSP) program options for an Azure application offer in the Microsoft commercial marketplace (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Sell an Azure Application offer
+
+This article describes additional options you can choose if you're selling your Azure application offer through Microsoft. You can co-sell your offer with Microsoft, resell it through the [Cloud Solution Providers (CSP) program](cloud-solution-providers.md), or both.
+
+## Co-sell with Microsoft
+
+Providing information on the **Co-sell with Microsoft** tab is entirely optional. But it's required to achieve _Co-sell Ready_ and _IP Co-sell Ready_ status. The Microsoft sales teams use this information to learn more about your solution when evaluating its fit for customer needs. The information you provide on this tab isn't available directly to customers.
+
+For details and instructions to configure the **Co-sell with Microsoft** tab, see [Co-sell option in the commercial marketplace](co-sell-configure.md).
+
+## Resell through CSPs
+
+If you elect to make your offer available in the Cloud Solution Provider (CSP) program, CSPs can sell your product as part of a bundled solution to their customers. For more information about reselling your offer through the CSP program and step-by-step instructions to configure the **Resell through CSPs** tab, see [Cloud Solution Provider program](cloud-solution-providers.md).
+
+## Next steps
+
+- [Test and publish an Azure application offer](azure-app-test-publish.md)
+- [Active marketplace rewards](marketplace-rewards.md)
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-metered-billing.md
+
+ Title: Metered billing for managed applications using the marketplace metering service | Azure Marketplace
+description: This documentation is a guide for ISVs publishing Azure applications with flexible billing models (Azure Marketplace).
+++ Last updated : 04/22/2020++++
+# Managed application metered billing
+
+With the Marketplace metering service, you can create managed application plans for Azure Application offers that are charged according to non-standard units. Before publishing this offer, you define the billing dimensions such as bandwidth, tickets, or emails processed. Customers then pay according to their consumption of these dimensions. Your system will inform Microsoft via the Marketplace metering service API of billable events as they occur.
+
+## Prerequisites for metered billing
+
+In order for a managed application plan to use metered billing, it must:
+
+* Meet all of the offer requirements as outlined in [Create an Azure application offer](azure-app-offer-setup.md).
+* Configure **Pricing** for charging customers the per-month cost for your service. Price can be zero if you don't want to charge a fixed fee and instead rely entirely on metered billing.
+* Set **Billing dimensions** for the metering events the customer will pay for on top of the flat rate.
+* Integrate with the [Marketplace metering service APIs](./marketplace-metering-service-apis.md) to inform Microsoft of billable events.
+
+## How metered billing fits in with pricing
+
+When it comes to defining the offer along with its pricing models, it is important to understand the offer hierarchy.
+
+* Each Azure Application offer can have Solution template or managed application plans.
+* Metered billing is implemented only with managed application plans.
+* Each managed application plan has a pricing model associated with it.
+* The pricing model has a monthly recurring fee, which can be set to $0.
+* In addition to the recurring fee, the plan can also include optional dimensions used to charge customers for usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [Marketplace metering service API](marketplace-metering-service-apis.md).
+
+## Sample offer
+
+As an example, Contoso is a publisher with a managed application service called Contoso Analytics (CoA). CoA allows customers to analyze large amounts of data for reporting and data warehousing. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CoA, outlined below:
+
+* Base plan
+ * Analyze 100 GB and generate 100 reports for $0/month
+ * Beyond the 100 GB, pay $10 for every 1 GB
+ * Beyond the 100 reports, pay $1 for every report
+* Premium plan
+ * Analyze 1000 GB and generate 1000 reports for $350/month
+ * Beyond the 1000 GB, pay $100 for every 1 TB
+ * Beyond the 1000 reports, pay $0.5 for every report
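+
+For example, a customer on the Base plan who analyzes 150 GB and generates 120 reports in a month pays nothing for the first 100 GB and 100 reports, and then 50 GB × $10 + 20 reports × $1 = $520 in overage charges for that month.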
+
+An Azure customer subscribing to the CoA service can analyze and generate reports each month based on the selected plan. Contoso measures usage up to the included quantity without sending any usage events to Microsoft. When customers consume more than the included quantity, they don't have to change plans or do anything different. Contoso measures the overage beyond the included quantity and starts emitting usage events to Microsoft for the additional usage by using the [Marketplace metering service API](./marketplace-metering-service-apis.md). Microsoft, in turn, charges the customer for the additional usage as specified by the publisher.
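+
+As a sketch of what emitting one such usage event looks like, the request below posts five overage units against a hypothetical `gb-analyzed` dimension. The dimension ID, plan ID, and resource identifier are illustrative and must match your plan definition and the customer's deployment; see the [Marketplace metering service APIs](./marketplace-metering-service-apis.md) for the authoritative schema:
+
+```bash
+curl -X POST "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" \
+  -H "Authorization: Bearer <access_token>" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "resourceId": "<managed-app-resource-group-id>",
+        "quantity": 5,
+        "dimension": "gb-analyzed",
+        "effectiveStartTime": "2021-06-01T00:00:00Z",
+        "planId": "base-plan"
+      }'
+```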
+
+## Billing dimensions
+
+Billing dimensions are used to communicate to the customer how they will be billed for using the software. These dimensions are also used to communicate usage events to Microsoft. They are defined as follows:
+
+* **Dimension identifier**: the immutable identifier referenced while emitting usage events.
+* **Dimension name**: the display name associated with the dimension, for example "text messages sent".
+* **Unit of measure**: the description of the billing unit, for example "per text message" or "per 100 emails".
+* **Price per unit**: the price for one unit of the dimension.
+* **Included quantity for monthly term**: quantity of dimension included per month for customers paying the recurring monthly fee, must be an integer.
+
+Billing dimensions are shared across all plans for an offer. Some attributes apply to the dimension across all plans, and other attributes are plan-specific.
+
+The attributes that define the dimension itself are shared across all plans for an offer. Before you publish the offer, a change made to these attributes from the context of any plan will affect the dimension definition across all plans. Once you publish the offer, these attributes will no longer be editable. The attributes are:
+
+* Identifier
+* Name
+* Unit of measure
+
+The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, these attributes will no longer be editable. The attributes are:
+
+* Price per unit
+* Included quantity for monthly customers
+* Included quantity for annual customers
+
+Dimensions also have two special concepts, "enabled" and "infinite":
+
+* **Enabled** indicates that this plan participates in this dimension. You might want to leave this option unchecked if you are creating a new plan that does not send usage events based on this dimension. Also, any new dimensions added after a plan was first published will show up as "not enabled" on the already published plan. A disabled dimension will not show up in any lists of dimensions for a plan seen by customers.
+* **Infinite**, represented by the infinity symbol "∞", indicates that this plan participates in this dimension without metered usage against it. Use it to indicate to your customers that the functionality represented by this dimension is included in the plan with no limit on usage. A dimension with infinite usage will show up in lists of dimensions for a plan seen by customers, and it will never incur a charge in this plan.
+
+>[!Note]
+>The following scenarios are explicitly supported: <br> - You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans. <br> - You can publish a plan with a fixed monthly fee and without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already published plans.
+
+## Constraints
+
+### Locking behavior
+
+A dimension used with the Marketplace metering service represents an understanding of how a customer will be paying for the service. All details of a dimension are no longer editable once an offer is published. Before publishing your offer, it's important that you have your dimensions fully defined.
+
+Once an offer is published with a dimension, the offer-level details for that dimension can no longer be changed:
+
+* Identifier
+* Name
+* Unit of measure
+
+Once a plan is published, the plan-level details can no longer be changed:
+
+* Price per unit
+* Included quantity for monthly term
+* Whether the dimension is enabled for the plan
+
+>[!Note]
+>Metered billing using the marketplace metering service is not yet supported on the Azure Government Cloud.
+
+### Upper limits
+
+You can configure a maximum of 30 unique dimensions for a single offer.
+
+## Get support
+
+If you have one of the following issues, you can open a support ticket.
+
+* Technical issues with the Marketplace metering service API.
+* An issue that needs to be escalated because of an error or bug on your side (for example, a wrong usage event).
+* Any other issues related to metered billing.
+
+Follow the instructions in [Support for the commercial marketplace program in Partner Center](support.md) to understand publisher support options and open a support ticket with Microsoft.
+
+## Next steps
+
+- See [Marketplace metering service APIs](marketplace-metering-service-apis.md) for more information.
marketplace Azure App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-offer-listing.md
+
+ Title: Configure your Azure application offer listing details
+description: Configure the listing details for your Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Configure your Azure application offer listing details
+
+The information you provide on the **Offer listing** page for your Azure Application offer will be displayed in the Microsoft commercial marketplace online stores. This includes the descriptions of your offer, screenshots, and your marketing assets. To see what this looks like, see [Offer listing details](plan-azure-application-offer.md#offer-listing-details).
+
+> [!NOTE]
+> Offer listing content (such as the description, documents, screenshots, and terms of use) is not required to be in English if the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a _Useful Link_ URL to offer content in a language other than the one used in the offer listing content.
+
+## Marketplace details
+
+On the **Offer listing** page, under **Marketplace details**, complete the following steps. To learn more about these required details, see [Offer listing details](plan-azure-application-offer.md#offer-listing-details).
+
+1. The **Name** box is prefilled with the name you entered earlier in the **New offer** dialog box. You can change the name at any time. The name you enter here will be shown to customers as the title of your offer listing.
+1. In the **Search results summary** box, enter up to 100 characters of text. This summary is used in the marketplace listing search results.
+1. In the **Short description** box, enter up to 256 characters of plain text. This summary will appear on your offer's details page.
+1. In the **Description** box, enter a description for your offer. This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 3,000 characters of text in this box, which includes HTML markup and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
+1. (Optional) In the **Search keywords** boxes, enter up to three search keywords that customers can use to find your offer in the commercial marketplace. You don't need to include the offer **Name** and **Description** because that text is automatically included in search.
+1. In the **Privacy policy link** box, enter a link (starting with https) to your organization's privacy policy. You're responsible to ensure your app complies with privacy laws and regulations, and for providing a valid privacy policy.
+
+## Add supplemental links (optional)
+
+Complete these steps to add links to supplemental online documentation.
+
+1. To add optional supplemental online documents about your app or related services, under **Useful links**, select **Add a link**.
+1. In the fields that appear, enter a title (up to 255 characters) and the link (starting with `https://`) to the online document.
+1. To enter another link, repeat steps 1 through 2.
+
+## Enter your contact information
+
+Under **Contact information**, provide information for the following contacts:
+
+- **Support contact** (required) – For general support questions.
+- **Engineering contact** (required) – For technical questions. We will use this information to contact you when there are issues with your offer, including certification issues.
+- **CSP Program contact** (optional) – For support and business issues. This information is only shown to CSP partners.
+
+For each contact, you'll provide a name, phone number, and email address (these won't be displayed publicly). A Support URL is required for the Support Contact (this will be displayed publicly).
+
+1. In the **Support contact** boxes, enter a name, email address, phone number, and the URL to your support page.
+1. In the **Engineering contact** boxes, enter a name, email address, and phone number.
+1. (Optional) In the **CSP Program Contact** boxes, enter a name, email address, and phone number.
+1. To extend your offer to the [Cloud Solution Provider (CSP) program](cloud-solution-providers.md), in the **CSP Program Marketing Materials** box, provide a link to your marketing materials.
+
+ > [!NOTE]
+ > The CSP program extends your offer to a broader range of qualified customers by enabling CSP partners to bundle, market, and resell your offer. These resellers will need access to materials for marketing your offer. For more information, see [Go-to-market with Microsoft](https://partner.microsoft.com/reach-customers/gtm).
+
+## Add marketplace media
+
+### Store logos
+
+Under **Logos**, upload a **Large** logo in PNG format between 216 x 216 and 350 x 350 pixels. Partner Center will automatically create **Small** (48 x 48) and **Medium** (90 x 90) logos, which you can replace later if you want.
+
+All three logo sizes are used in different places in the online stores.
+
+- The large logo appears on your offer listing page in Azure Marketplace.
+- The medium logo appears when you create a new resource in Microsoft Azure.
+- The small logo appears in Azure Marketplace search results.
+
+### Add screenshots (optional)
+
+Add up to five screenshots that demonstrate your offer. All images must be 1280 x 720 pixels in size and in .PNG format.
+
+1. Under **Screenshots**, drag and drop your .PNG file onto the **Screenshot** box.
+1. Next to **Add image caption**, click the Edit icon.
+1. In the dialog box that appears, enter a caption and click **OK**.
+1. Repeat steps 1 through 3 to add additional screenshots.
+
+### Add videos (optional)
+
+You can add links to YouTube or Vimeo videos that demonstrate your offer. These videos are shown to customers along with your offer. You must provide a thumbnail image for each video, 1280 x 720 pixels in size and in .PNG format. You can add a maximum of four videos per offer.
+
+1. Under **Videos**, select the **Add video** link.
+1. In the boxes that appear, enter the name and link for your video.
+1. Drag and drop a .PNG file (1280 x 720 pixels) onto the gray **Thumbnail** box.
+1. To add another video, repeat steps 1 through 3.
+
+> [!TIP]
+> If you have an issue uploading files, make sure your local network does not block the https://upload.xboxlive.com service used by Partner Center.
+
+Select **Save draft** before continuing to the next tab: Preview audience.
+
+## Next steps
+
+- [Add a preview audience to this offer](azure-app-preview.md)
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-offer-setup.md
+
+ Title: Create an Azure application offer in Azure Marketplace
+description: Create an Azure application offer for listing or selling in Azure Marketplace, or through the Cloud Solution Provider (CSP) program using the commercial marketplace portal.
++++++ Last updated : 06/01/2021++
+# Create an Azure application offer
+
+As a commercial marketplace publisher, you can create an Azure application offer so potential customers can buy your solution. This article explains the process to create an Azure application offer for the Microsoft commercial marketplace.
+
+If you haven't already done so, read [Plan an Azure application offer for the commercial marketplace](plan-azure-application-offer.md). It provides resources and helps you gather the information and assets you'll need when you create your offer.
+
+## Create a new offer
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
+
+1. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
+
+1. On the Overview page, select **+ New offer** > **Azure Application**.
+
+ ![Illustrates the left-navigation menu.](./media/create-new-azure-app-offer/new-offer-azure-app.png)
+
+1. In the **New offer** dialog box, enter an **Offer ID**. This is a unique identifier for each offer in your account. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
+
+ * Each offer in your account must have a unique offer ID.
+    * Use only lowercase letters, numbers, hyphens, and underscores, with no spaces. The offer ID is limited to 50 characters.
+ * The Offer ID can't be changed after you select **Create**.
+
+1. Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+
+    * This name is only visible in Partner Center and it's different from the offer name and other values shown to customers.
+ * The Offer alias can't be changed after you select **Create**.
+
+1. To generate the offer and continue, select **Create**.
+
+## Configure your Azure application offer setup details
+
+On the **Offer setup** tab, under **Setup details**, you'll choose whether to configure a test drive. You'll also connect your customer relationship management (CRM) system with your commercial marketplace offer.
+
+### Enable a test drive (optional)
+
+A test drive is a great way to showcase your offer to potential customers by giving them access to a preconfigured environment for a fixed number of hours. Offering a test drive results in an increased conversion rate and generates highly qualified leads. To learn more about test drives, see [Test drive](plan-azure-application-offer.md#test-drive).
+
+#### To enable a test drive
+
+- Under **Test drive**, select the **Enable a test drive** check box.
+
+### Customer lead management
+
+Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest or deploys your product.
+
+#### To configure the connection details in Partner Center
+
+1. Under **Customer leads**, select the **Connect** link.
+1. In the **Connection details** dialog box, select a lead destination from the list.
+1. Complete the fields that appear. For detailed steps, see the following articles:
+
+ - [Configure your offer to send leads to the Azure table](partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ - [Configure your offer to send leads to Dynamics 365 Customer Engagement](partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ - [Configure your offer to send leads to HTTPS endpoint](partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ - [Configure your offer to send leads to Marketo](partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ - [Configure your offer to send leads to Salesforce](partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+1. To validate the configuration you provided, select the **Validate** link, if applicable.
+1. To close the dialog box, select **Connect**.
+1. Select **Save draft** before continuing to the next tab: Properties.
+
+> [!NOTE]
+> Keep the connection to the lead destination up to date so you don't lose any leads; update the connection whenever something changes.
+
+## Next steps
+
+- [Configure Azure application properties](azure-app-properties.md)
marketplace Azure App Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-plans.md
+
+ Title: Create plans for an Azure application offer
+description: Create plans for an Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Create plans for an Azure application offer
+
+An offer sold through the Microsoft commercial marketplace must have at least one plan before it can be listed. You can create a variety of plans with different options within the same offer. These plans (sometimes referred to as SKUs) can differ in terms of plan type (_solution template_ or _managed application_), monetization, or audience. For general guidance on plans, see [Plans and pricing for commercial marketplace offers](plans-pricing.md).
+
+## Create a plan
+
+1. Near the top of the **Plan overview** tab, select **+ Create new plan**.
+1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. This ID will be visible to customers in the product URL. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**.
+1. In the **Plan name** box, enter a unique name for this plan. Customers will see this name when deciding which plan to select within your offer. Use a maximum of 50 characters.
+1. Select **Create**.
+
+## Define the plan setup
+
+The **Plan setup** tab enables you to set the type of plan, whether it reuses the technical configuration from another plan, and what Azure regions the plan should be available in. Your answers on this tab will affect which fields are displayed on other tabs for this plan.
+
+### Select the plan type
+
+- From the **Type of plan** list, select either **Solution template** or **Managed application**.
+
+A **Solution template** plan is managed entirely by the customer. A **Managed application** plan enables publishers to manage the application on behalf of the customer. For details on these two plan types, see [Types of plans](plan-azure-application-offer.md#types-of-plans).
+
+#### To re-use a technical configuration
+
+1. Select the **This plan reuses the technical configuration from another plan of the same type** check box.
+1. In the list that appears, select the base plan you want.
+
+> [!NOTE]
+> When you re-use packages from another plan, the entire **Technical configuration** tab disappears from this plan. The Technical configuration details from the other plan, including any updates that you make in the future, are used for this plan as well.
+
+### Select Azure regions (clouds)
+
+Your plan must be made available in at least one Azure region. After your plan is published and available in a specific Azure region, you can't remove that region from your offer.
+
+#### Azure Global region
+
+The **Azure Global** check box is selected by default. This makes your plan available to customers in all Azure Global regions that have commercial marketplace integration. For managed application plans, you can select which markets to make your plan available in.
+
+To remove your offer from this region, clear the **Azure Global** check box.
+
+#### Azure Government region
+
+This region provides controlled access for customers from U.S. federal, state, local, or tribal entities, as well as partners eligible to serve them. You, as the publisher, are responsible for any compliance controls, security measures, and best practices. Azure Government uses physically isolated data centers and networks (located in the U.S. only).
+
+Azure Government services handle data that is subject to certain government regulations and requirements; for example, FedRAMP, NIST 800.171 (DIB), ITAR, IRS 1075, DoD L4, and CJIS. To bring awareness to your certifications for these programs, you can provide up to 100 links that describe them. These can be links either directly to your listing in the certification program or to descriptions of your compliance on your own website. These links are visible to Azure Government customers only.
+
+##### To select the Azure Government region
+
+1. Select the **Azure Government** check box.
+1. Under **Azure Government certifications**, select **+ Add certification (max 100)**.
+1. In the boxes that appear, provide a name and link to a certification.
+1. To add another certification, repeat steps 2 and 3.
+
+Select **Save draft** before continuing to the next tab: Plan listing.
+
+## Define the plan listing
+
+The **Plan listing** tab is where you configure listing details of the plan. This tab displays specific information that shows the difference between plans in the same offer. You can define the plan name, summary, and description as you want them to appear in the commercial marketplace.
+
+1. The **Plan name** box is prefilled with the name you provided earlier for this plan. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan and is limited to 100 characters.
+1. In the **Plan summary** box, provide a short summary of your plan (not the offer). This summary is limited to 100 characters.
+1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. Don't describe the offer, just the plan. This description may contain up to 2,000 characters.
+1. Select **Save draft** before continuing.
+
+## Next steps
+
+Do one of the following:
+
+- [Configure a solution template plan](azure-app-solution.md)
+- [Configure a managed application plan](azure-app-managed.md)
marketplace Azure App Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-preview.md
+
+ Title: Add a preview audience for an Azure Application offer
+description: Add a preview audience for an Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Add a preview audience for an Azure Application offer
+
+This article describes how to configure a preview audience for an Azure Application offer in the commercial marketplace. You need to define a preview audience who can review your offer listing before it goes live.
+
+## Define a preview audience
+
+On the **Preview audience** page, you can define a limited audience who can review your Azure Application offer before you publish it live to the broader marketplace audience. You define the preview audience using Azure subscription IDs, along with an optional description for each. Neither of these fields can be seen by customers. You can find your Azure subscription ID on the **Subscriptions** page in Azure portal.
+
+Add at least one Azure subscription ID, either individually (up to 10) or by uploading a CSV file (up to 100), to define who can preview your offer before it is published live. If your offer is already live, you can still define a preview audience for testing changes or updates to your offer.
+
+> [!NOTE]
+> A preview audience differs from a private audience. A preview audience can access your offer before it's published live in the online stores, and can see and validate all plans, including plans that will be available only to a private audience after your offer is fully published. A private audience (defined in a plan's **Availability** tab) has exclusive access to a particular plan.
+
+### Add subscription IDs manually
+
+1. On the **Preview audience** page, add a single Azure Subscription ID and an optional description in the boxes provided.
+1. To add another ID, select the **Add ID (Max 10)** link.
+1. Select **Save draft** before continuing to the next tab: Technical configuration.
+1. Go to [Next steps](#next-steps).
+
+### Add subscription IDs with a CSV file
+
+1. On the **Preview Audience** page, select the **Export Audience (csv)** link.
+1. Open the .CSV file in a suitable application such as Microsoft Excel.
+1. In the .CSV file, in the **ID** column, enter the Azure Subscription IDs you want to add to the preview audience.
+1. In the **Description** column, optionally add a description for each subscription ID.
+1. For each subscription ID you enter in column B, enter "SubscriptionID" in the **Type** column (column A). A sample file appears after these steps.
+1. Save as a .CSV file.
+1. On the **Preview audience** page, select the **Import Audience (csv)** link.
+1. In the **Confirm** dialog box, select **Yes**.
+1. Select the .CSV file and then **Open**.
+1. Select **Save draft** before continuing to the next tab: Technical configuration.
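+
+Assuming the exported file uses the columns described above (**Type**, **ID**, **Description**), a populated preview audience file might look like this sketch; the subscription IDs shown are placeholders:
+
+```csv
+Type,ID,Description
+SubscriptionID,00000000-0000-0000-0000-000000000001,Internal test subscription
+SubscriptionID,00000000-0000-0000-0000-000000000002,Partner QA subscription
+```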
+
+## Next steps
+
+- [Add technical details to this offer](azure-app-technical-configuration.md)
marketplace Azure App Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-properties.md
+
+ Title: How to configure your Azure Application offer properties
+description: Learn how to configure the properties for your Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Configure Azure application offer properties
+
+This article describes how to configure the properties for an Azure Application offer in the commercial marketplace.
+
+On the **Properties** page, you'll define the categories applicable to your offer, and legal contracts. Be sure to provide complete and accurate details about your offer on this page, so that it's displayed appropriately and offered to the right set of customers.
+
+## Select a category for your offer
+
+Under **Categories**, select the **Categories** link and then choose at least one and up to two categories for grouping your offer into the appropriate commercial marketplace search areas. Select up to two subcategories for each primary and secondary category. If no subcategory is applicable to your offer, select **Not applicable**.
+
+## Provide terms and conditions
+
+Under **Legal**, provide terms and conditions for your offer. You have two options:
+
+- [Use the standard contract with optional amendments](#use-the-standard-contract)
+- [Use your own terms and conditions](#use-your-own-terms-and-conditions)
+
+To learn about the standard contract and optional amendments, see [Standard Contract for the Microsoft commercial marketplace](standard-contract.md). You can download the [Standard Contract](https://go.microsoft.com/fwlink/?linkid=2041178) PDF (make sure your pop-up blocker is off).
+
+### Use the standard contract
+
+To simplify the procurement process for customers and reduce legal complexity for software vendors, Microsoft offers a standard contract you can use for your offers in the commercial marketplace. When you offer your software under the standard contract, customers only need to read and accept it one time, and you don't have to create custom terms and conditions.
+
+1. Select the **Use the Standard Contract for Microsoft's commercial marketplace** checkbox.
+
+ ![Illustrates the Use the Standard Contract for Microsoft's commercial marketplace check box.](partner-center-portal/media/use-standard-contract.png)
+
+1. In the **Confirmation** dialog box, select **Accept**. You may have to scroll up to see it.
+1. Select **Save draft** before continuing.
+
+> [!NOTE]
+> After you publish an offer using the Standard Contract for the commercial marketplace, you can't use your own custom terms and conditions. Either offer your solution under the standard contract with optional amendments or under your own terms and conditions.
+
+### Add amendments to the standard contract (optional)
+
+There are two kinds of amendments available: _universal_ and _custom_.
+
+#### Add universal amendment terms
+
+In the **Universal amendment terms to the standard contract for Microsoft's commercial marketplace** box, enter your universal amendment terms. You can enter an unlimited number of characters in this box. These terms are displayed to customers in AppSource, Azure Marketplace, and/or Azure portal during the discovery and purchase flow.
+
+#### Add one or more custom amendments
+
+1. Under **Custom amendments terms to the Standard Contract for Microsoft's commercial marketplace**, select the **Add custom amendment term (Max 10)** link.
+1. In the **Custom amendment terms** box, enter your amendment terms.
+1. In the **Tenant ID** box, enter a tenant ID. Only customers associated with the tenant IDs you specify for these custom terms will see them in the offer's purchase flow in the Azure portal.
+
+ > [!TIP]
+   > A tenant ID identifies your customer in Azure. You can ask your customer for this ID, and they can find it by going to [**https://portal.azure.com**](https://portal.azure.com) > **Azure Active Directory** > **Properties**. The directory ID value is the tenant ID (for example, `50c464d3-4930-494c-963c-1e951d15360e`). You can also look up your customer's organization tenant ID by using their domain name at [What is my Microsoft Azure and Office 365 tenant ID?](https://www.whatismytenantid.com/), or programmatically, as sketched after this procedure.
+
+1. In the **Description** box, optionally enter a friendly description for the tenant ID. This description helps you identify the customer you're targeting with the amendment.
+1. To add another tenant ID, select the **Add a customer's tenant ID** link and repeat steps 3 and 4. You can add up to 20 tenant IDs.
+1. To add another amendment term, repeat steps 1 through 5. You can provide up to ten custom amendment terms per offer.
+1. Select **Save draft** before continuing.
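+
+If you prefer a programmatic lookup, a domain's tenant ID can also be read from Azure AD's public OpenID Connect metadata endpoint (`contoso.com` below is a placeholder domain):
+
+```http
+GET https://login.microsoftonline.com/contoso.com/v2.0/.well-known/openid-configuration
+```
+
+The `issuer` value in the JSON response contains the tenant ID GUID.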
+
+### Use your own terms and conditions
+
+You can choose to provide your own terms and conditions, instead of the standard contract. Customers must accept these terms before they can try your offer.
+
+1. Under **Legal**, make sure the **Use the Standard Contract for Microsoft's commercial marketplace** check box is cleared.
+1. In the **Terms and conditions** box, enter up to 10,000 characters of text.
+1. Select **Save draft** before continuing to the next tab: Offer listing.
+
+## Next steps
+
+- [Configure Azure application listing details](azure-app-offer-listing.md)
marketplace Azure App Review Feedback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-review-feedback.md
+
+ Title: Review feedback for Azure apps offers - Microsoft commercial marketplace
+description: Handle feedback for your Azure application offer from the Microsoft Azure Marketplace review team. You can access feedback in Azure DevOps with your Partner Center credentials.
+++ Last updated : 11/11/2019++++
+# Handle review feedback for Azure application offers
+
+This article explains how to access feedback from the Microsoft Azure Marketplace review team in [Azure DevOps](https://azure.microsoft.com/services/devops/). If critical issues are found in your Azure application offer during the **Microsoft review** step, you can sign into this system to view detailed information about these issues (review feedback). After you fix all issues, you must resubmit your offer to continue to publish it on Azure Marketplace. The following diagram illustrates how this feedback process relates to the publishing process.
+
+![Review feedback process](media/azure-app/review-feedback-process.png)
+
+Typically, review issues are referenced as a pull request (PR). Each PR is linked to an online Azure DevOps item, which contains details about the issue. The following image displays an example of the Partner Center experience if issues are found during reviews.
+
+![Publishing status](media/azure-app/publishing-status.png)
+
+The PR that contains specific details about the submission will be mentioned in the "View Certification Report" link. For complex situations, the review and support teams may also email you.
+
+## Azure DevOps access
+
+All users assigned the "developer" role in Partner Center can view the PR items referenced in review feedback.
+
+## Reviewing the pull request
+
+Use the following procedure to review issues documented in the pull request.
+
+1. In the **Microsoft review** section of the publishing steps form, select a PR link to launch your browser and navigate to the **Overview** (home) page for this PR. The following image depicts an example of the critical issue home page for the Contoso sample app offer. This page contains useful summary information about the review issues found in the Azure app.
+
+ [![Pull request home page](media/azure-app/pr-home-page-thumb.png)](media/azure-app/pr-home-page.png)
+ <br/> *Click on this image to expand.*
+
+1. (Optional) On the right side of the window, in the section **Policies**, select the issue message (in this example: **Policy Validation failed**) to investigate the low-level details of the issue, including the associated log files. Errors are typically displayed at the bottom of the log files.
+
+1. In the menu on the left side of the home page, select **Files** to display the list of files that comprise the technical assets for this offer. The Microsoft reviewers should have added comments describing the discovered critical issues. In the following example, two issues have been discovered.
+
+ [![Screenshot that highlights Files and the two issues that were discovered.](media/azure-app/pr-files-page-thumb.png)](media/azure-app/pr-files-page.png)
+ <br/> *Click on this image to expand.*
+
+1. Select each comment node in the left tree to navigate to the comment in context of the surrounding code. Fix your source code in your team's project to correct the issue described by the comment.
+
+>[!Note]
+>You cannot edit your offer's technical assets within the review team's Azure DevOps environment. For publishers, this is a read-only environment for the contained source code. However, you can leave replies to the comments for the benefit of the Microsoft review team.
+
+ In the following example, the publisher has reviewed, corrected, and replied to the first issue.
+
+ ![First fix and comment reply](media/azure-app/first-comment-reply.png)
+
+## Next steps
+
+- After you correct the critical issues documented in the review PR(s), you must [republish your Azure app offer](azure-app-offer-setup.md).
marketplace Azure App Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-solution.md
+
+ Title: Configure a solution template plan
+description: Configure a solution template plan for your Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Configure a solution template plan
+
+This article applies only to solution template plans for an Azure application offer. If you are configuring a managed application plan, go to [Configure a managed application plan](azure-app-managed.md).
+
+## Re-use technical configuration (optional)
+
+If you've created more than one plan of the same type within this offer and the technical configuration is identical between them, you can reuse the technical configuration from another plan. This setting cannot be changed after this plan is published.
+
+To re-use a technical configuration:
+
+1. Select the **This plan reuses the technical configuration from another plan of the same type** check box.
+1. In the list that appears, select the base plan you want.
+
+> [!NOTE]
+> If a plan is currently reusing or has reused the technical configuration from another plan of the same type, go to that base plan to view history of previously published packages.
+
+## Choose who can see your plan
+
+You can configure each plan to be visible to everyone or to only a specific audience. You grant access to a private audience using Azure subscription IDs with the option to include a description of each subscription ID you assign. You can add a maximum of 10 subscription IDs manually or up to 10,000 subscription IDs using a .CSV file. Azure subscription IDs are represented as GUIDs and letters must be lowercase.
+
+> [!NOTE]
+> If you publish a private plan, you can change its visibility to public later. However, once you publish a public plan, you cannot change its visibility to private.
+
+On the **Availability** tab, under **Plan visibility**, do one of the following:
+
+- To make the plan public, select the **Public** option button (also known as a _radio button_).
+- To make the plan private, select the **Private** option button and then add the Azure subscription IDs manually or with a CSV file.
+
+ > [!NOTE]
+ > A private or restricted audience is different from the preview audience you defined on the **Preview** tab. A preview audience can access your offer before it's published live in the marketplace. While the private audience choice applies only to a specific plan, the preview audience can view all plans (private or not) for validation purposes.
+
+### Manually add Azure subscription IDs for a private plan
+
+1. Under **Plan visibility**, select the **Private** option button.
+1. In the **Azure Subscription ID** box that appears, enter the Azure subscription ID of the audience you want to grant access to this private plan. A minimum of one subscription ID is required.
+1. (Optional) Enter a description of this audience in the **Description** box.
+1. To add another subscription ID, select the **Add ID (Max 10)** link and repeat steps 2 and 3.
+
+## Use a .CSV file to add Azure subscription IDs for a private plan
+
+1. Under **Plan visibility**, select the **Private** option button.
+1. Select the **Export Audience (csv)** link.
+1. Open the .CSV file and add the Azure subscription IDs you want to grant access to the private offer to the **ID** column.
+1. Optionally, enter a description for each audience in the **Description** column.
+1. Add "SubscriptionId" in the **Type** column, for each row with a subscription ID.
+1. Save the .CSV file.
+1. On the **Availability** tab, under **Plan visibility**, select the **Import Audience (csv)** link.
+1. In the dialog box that appears, select **Yes**.
+1. Select the .CSV file and then select **Open**. A message appears indicating that the .CSV file was successfully imported.
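+
+Assuming the same exported column layout (**Type**, **ID**, **Description**), a populated private audience file might look like this sketch; the subscription IDs shown are placeholders:
+
+```csv
+Type,ID,Description
+SubscriptionId,00000000-0000-0000-0000-00000000000a,Contoso production subscription
+SubscriptionId,00000000-0000-0000-0000-00000000000b,Fabrikam pilot subscription
+```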
+
+### Hide your plan
+
+If your solution template is intended to be deployed only indirectly when referenced through another solution template or managed application, select the check box under **Hide plan** to publish your solution template but hide it from customers who search and browse for it directly.
+
+Select **Save draft** before continuing to the next section: Define the technical configuration.
+
+## Define the technical configuration
+
+On the **Technical configuration** tab, you'll upload the deployment package that lets customers deploy your plan, and provide a version number for the package.
+
+### Assign a version number for the package
+
+In the **Version** box provide the current version of the technical configuration. Increment this version each time you publish a change to this page. The version number must be in the format: integer.integer.integer. For example, `1.0.2`.
+
+### Upload a package file
+
+Under **Package file (.zip)**, drag your package file to the gray box or select the **browse for your file(s)** link.
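+
+For a solution template plan, the .zip package is generally expected to contain a `mainTemplate.json` (the Azure Resource Manager template customers deploy) and a `createUiDefinition.json` (the Azure portal UI definition) at its root. As a rough sketch, a minimal `mainTemplate.json` starts from an empty template like the following, to which you add your parameters and resources:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {},
+  "variables": {},
+  "resources": [],
+  "outputs": {}
+}
+```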
+
+> [!NOTE]
+> If you have an issue uploading files, make sure your local network does not block the `https://upload.xboxlive.com` service used by Partner Center.
+
+### Previously published packages
+
+After you publish your offer live, the **Previously published packages** sub-tab appears on the **Technical configuration** page. This tab lists all previously published versions of your technical configuration.
+
+## View your plans
+
+- Select **Save draft**, and then in the upper left of the page, select **Plan overview** to return to the **Plan overview** page.
+
+After you create one or more plans, you'll see your plan name, plan ID, plan type, availability (Public or Private), current publishing status, and any available actions on the **Plan overview** tab.
+
+The actions that are available in the **Action** column of the **Plan overview** tab vary depending on the status of your plan, and may include the following:
+
+- If the plan status is **Draft**, the link in the **Action** column will say **Delete draft**.
+- If the plan status is **Live**, the link in the **Action** column will be either **Stop selling plan** or **Sync private audience**. The **Sync private audience** link will publish only the changes to your private audiences, without publishing any other updates you might have made to the offer.
+- To create another plan for this offer, at the top of the **Plan overview** tab, select **+ Create new plan**. Then repeat the steps in [How to create plans for your Azure application offer](azure-app-plans.md). Otherwise, if you're done creating plans, go to the next section: Next steps.
+
+## Next steps
+
+- [Test and publish this offer](azure-app-test-publish.md)
+- [Sell this offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs
marketplace Azure App Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-technical-configuration.md
+
+ Title: Add technical details for an Azure application offer
+description: Add technical details for an Azure application offer in Partner Center (Azure Marketplace).
++++++ Last updated : 06/01/2021++
+# Add technical details for an Azure application offer
+
+This article describes how to enter technical details that help the Microsoft commercial marketplace connect to your solution. This connection enables us to provision your offer for the customer if they choose to acquire and manage it.
+
+Complete this section only if your offer includes a managed application that will emit metering events using the [Marketplace metered billing APIs](marketplace-metering-service-apis.md) and that has a service that authenticates with an Azure AD security token. For more information on the different authentication options, see [Marketplace metering service authentication strategies](marketplace-metering-service-authentication.md).
+
+## Technical configuration (offer-level)
+
+The **Technical configuration** tab applies to you only if you will create a managed application that emits metering events using the [Marketplace metered billing APIs](marketplace-metering-service-apis.md). If so, then complete the following steps. Otherwise, go to [Next steps](#next-steps).
+
+For more information about these fields, see [Plan an Azure Application offer for the commercial marketplace](plan-azure-application-offer.md#technical-configuration).
+
+1. On the **Technical configuration** tab, provide the **Azure Active Directory tenant ID** and **Azure Active Directory application ID** that are used to validate that the connection between our two services is authenticated. (A sketch of the token request your service makes appears after these steps.)
+
+1. Select **Save draft** before continuing to the next tab: Plan overview.
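+
+For context, a service that calls the metered billing APIs typically authenticates by requesting an Azure AD token with the client credentials flow, using the tenant ID and application ID you entered above. The following is a sketch only: the placeholder values are yours to supply, and the `resource` GUID shown is the commonly documented marketplace API resource, which you should verify against the authentication strategies article linked above.
+
+```http
+POST https://login.microsoftonline.com/{tenantId}/oauth2/token
+Content-Type: application/x-www-form-urlencoded
+
+grant_type=client_credentials&client_id={applicationId}&client_secret={secret}&resource=20e940b3-4c77-4b0b-9a53-9e16a1b010a7
+```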
+
+## Next steps
+
+- [Create plans for this offer](azure-app-plans.md)
marketplace Azure App Test Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-test-publish.md
+
+ Title: Test and publish an Azure application offer
+description: Submit your Azure application offer to preview, preview your offer, test, and publish it to Azure Marketplace.
++++++ Last updated : 06/01/2021++
+# Test and publish an Azure application offer
+
+This article explains how to use Partner Center to submit your Azure Application offer for publishing, preview your offer, test it, and then publish it live to the commercial marketplace. You must have already created an offer that you want to publish.
+
+## Submit the offer for publishing
+
+1. Sign in to the commercial marketplace dashboard in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+1. On the **Overview** page, select the offer you want to publish.
+1. In the upper-right corner of the portal, select **Review and publish**.
+1. Make sure that the **Status** column for each page says **Complete**. The three possible statuses are as follows:
+ - **Not started** ΓÇô The page is incomplete.
+ - **Incomplete** ΓÇô The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.
+ - **Complete** ΓÇô The page is complete. All required data has been provided and there are no errors.
+1. If any of the pages have a status other than **Complete**, select the page name, correct the issue, save the page, and then select **Review and publish** again to return to this page.
+1. After all the pages are complete, in the **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app.
+1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
+
+Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+
+## Preview and test the offer
+
+When the offer is ready for your sign-off, we'll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see if your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview link will be available. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
+
+The following screenshot shows the **Offer overview** page for an offer, with two preview links under the **Go live** button. The validation steps you'll see on this page vary depending on the selections you made when you created the offer.
+
+[![Illustrates the Offer overview page for an offer in Partner Center. The Go live button and preview links are shown.](media/create-new-azure-app-offer/azure-app-publish-status.png)](media/create-new-azure-app-offer/azure-app-publish-status.png#lightbox)
+
+Use the following steps to preview your offer:
+
+1. On the **Offer overview** page, select a preview link under the **Go live** button.
+1. To validate the end-to-end purchase and setup flow, purchase your offer while it's in preview. First, notify Microsoft with a [support ticket](https://aka.ms/marketplacesupport) to ensure we don't process a charge.
+1. If your Azure application supports [metered billing using the commercial marketplace metering service](marketplace-metering-service-apis.md), review and follow the testing best practices detailed in [Marketplace metered billing APIs](marketplace-metering-service-apis.md#development-and-testing-best-practices). A sketch of a metered usage event appears after these steps.
+1. If you need to make changes after previewing and testing the offer, you can edit and resubmit to publish a new preview. For more information, see [Update an existing offer in the commercial marketplace](./update-existing-offer.md).
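+
+For orientation, emitting a metered usage event is a single authenticated POST to the commercial marketplace metering API. The following sketch assumes your plan defines a custom dimension (here `user`, a placeholder); see the metered billing APIs article linked above for the authoritative request format:
+
+```http
+POST https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31
+Content-Type: application/json
+Authorization: Bearer <Azure AD access token>
+
+{
+  "resourceId": "<GUID of the deployed managed application resource>",
+  "quantity": 5.0,
+  "dimension": "user",
+  "effectiveStartTime": "2021-06-01T00:00:00Z",
+  "planId": "your-plan-id"
+}
+```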
+
+## Publish your offer live
+
+After completing all tests on your preview, select **Go live** to publish your offer live to the commercial marketplace.
+
+ > [!TIP]
+ > If your offer is already live in the commercial marketplace, any updates you make won't go live until you select **Go live**.
+
+Now that you've chosen to make your offer available in the commercial marketplace, we perform a series of final validation checks to ensure the live offer is configured just like the preview version of the offer. For details about these validation checks, see [Publish phase](review-publish-offer.md#publish-phase).
+
+After these validation checks are complete, your offer will be live in the marketplace.
+
+### Errors and review feedback
+
+The **Manual validation** step in the publishing process represents an extensive review of your offer and its associated technical assets (especially the Azure Resource Manager template). Issues are typically presented as pull request (PR) links. For an explanation of how to view and respond to these PRs, see [Handling review feedback](azure-app-review-feedback.md).
+
+If you have errors in one or more of the publishing steps, correct them before republishing your offer.
+
+## Next steps
+
+- [Access analytic reports for the commercial marketplace](analytics.md)
+- [Sell your Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and **Resell through CSPs** programs.
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-partner-customer-usage-attribution.md
There are secondary use cases for customer usage attribution outside of the comm
## Commercial marketplace Azure apps
-Tracking Azure usage from Azure apps published to the commercial marketplace is largely automatic. When you upload a Resource Manager template as part of the [technical configuration of your marketplace Azure app's plan](./create-new-azure-apps-offer-solution.md#define-the-technical-configuration), Partner Center will add a tracking ID readable by Azure Resource Manager.
+Tracking Azure usage from Azure apps published to the commercial marketplace is largely automatic. When you upload a Resource Manager template as part of the [technical configuration of your marketplace Azure app's plan](./azure-app-solution.md#define-the-technical-configuration), Partner Center will add a tracking ID readable by Azure Resource Manager.
If you use Azure Resource Manager APIs, you will need to add your tracking ID per the [instructions below](#use-resource-manager-apis) to pass it to Azure Resource Manager as your code deploys resources. This ID is visible in Partner Center on your plan's Technical Configuration page.
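
For reference, the customer usage attribution pattern embeds the tracking ID in a Resource Manager template as an empty nested deployment whose name is `pid-` followed by the GUID. This is a generic sketch; the GUID shown is a placeholder for your own tracking ID:

```json
{
  "apiVersion": "2020-06-01",
  "name": "pid-00000000-0000-0000-0000-000000000000",
  "type": "Microsoft.Resources/deployments",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
```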
marketplace Azure Vm Create Certification Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
Title: Virtual machine (VM) certification troubleshooting for Azure Marketplace
-description: Troubleshoot common issues related to testing and certifying virtual machine (VM) images for Azure Marketplace.
+ Title: Virtual machine certification troubleshooting for Azure Marketplace
+description: Troubleshoot common issues related to testing and certifying virtual machine images for Azure Marketplace.
To complete the publishing process, see [Review and publish offers](review-publi
#### Locked down (or) SSH disabled offer
- Images which are published with either SSH disabled(for Linux) or RDP disabled (for Windows) are treated as Locked down VMs. There are special business scenarios due to which Publishers only allow restricted access to no/a few users.
- During validation checks, Locked down VMs might not allow execution of certain certification commands.
+Images that are published with either SSH disabled (for Linux) or RDP disabled (for Windows) are treated as locked-down VMs. In some special business scenarios, publishers allow only restricted access to few or no users.
+During validation checks, locked-down VMs might not allow execution of certain certification commands.
#### Custom templates
- In general, all the images which are published under single VM offers will follow standard ARM template for deployment. However, there are scenarios where publisher might requires customization while deploying VMs (e.g. multiple NIC(s) to be configured).
+In general, all images published under single VM offers follow the standard ARM template for deployment. However, there are scenarios where a publisher might require customization while deploying VMs (for example, multiple NICs to be configured).
- Depending on the below scenarios (non-exhaustive), publishers will use custom templates for deploying the VM:
+In the following scenarios (non-exhaustive), publishers use custom templates for deploying the VM:
- * VM requires additional network subnets.
- * Additional metadata to be inserted in ARM template.
- * Commands that are prerequisite to the execution of ARM template.
+- VM requires additional network subnets.
+- Additional metadata to be inserted in ARM template.
+- Commands that are prerequisite to the execution of ARM template.
### VM extensions
- Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used.
+Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used.
- Linux VM extension validations require the following to be part of the image:
-* Azure Linux Agent greater 2.2.41
-* Python version above 2.8
+Linux VM extension validations require the following to be part of the image:
+- Azure Linux Agent later than version 2.2.41
+- Python version above 2.8
For more information, please visit [VM Extension](../virtual-machines/extensions/diagnostics-linux.md).
For more information, please visit [VM Extension](../virtual-machines/extensions
- [Configure VM offer properties](azure-vm-create-properties.md) - [Active marketplace rewards](marketplace-rewards.md)-- If you have questions or feedback for improvement, contact [Partner Center support](https://aka.ms/marketplacepublishersupport).
+- If you have questions or feedback for improvement, contact [Partner Center support](https://aka.ms/marketplacepublishersupport)
marketplace Azure Vm Create Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-listing.md
Title: Configure virtual machine offer listing details on Azure Marketplace
-description: Configure virtual machine offer listing details on Azure Marketplace.
+description: Configure virtual machine offer listing details on Azure Marketplace.
Last updated 10/19/2020
-# How to configure virtual machine offer listing details
+# Configure virtual machine offer listing details
On the **Offer listing** page (select from the left-nav menu in Partner Center), you define the offer details such as offer name, description, links, and contacts.
marketplace Azure Vm Create Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-plans.md
Title: Create plans for a virtual machine offer on Azure Marketplace
-description: Learn how to create plans for a virtual machine offer on Azure Marketplace.
+description: Create plans for a virtual machine offer on Azure Marketplace.
Data disks (select **Add data disk (maximum 16)**) are also VHD shared access si
Regardless of which operating system you use, add only the minimum number of data disks that the solution requires. During deployment, customers can't remove disks that are part of an image, but they can always add disks during or after deployment.

> [!NOTE]
-> If you provide your images using SAS and have data disks, you also need to provide them as SAS URI. If you are using shared image, they are captured as part of your image in shared image gallery.
+> If you provide your images using SAS and have data disks, you also need to provide them as SAS URIs. If you are using a shared image, they are captured as part of your image in the shared image gallery. Once your offer is published to Azure Marketplace, you can delete the image from your Azure storage or shared image gallery.
Select **Save draft**, then select **← Plan overview** at the top left to see the plan you just created.
marketplace Azure Vm Create Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-preview.md
Title: Add a preview audience for a virtual machine offer on Azure Marketplace
-description: Learn how to add a preview audience for a virtual machine offer on Azure Marketplace.
+description: Add a preview audience for a virtual machine offer on Azure Marketplace.
Last updated 10/19/2020
-# How to add a preview audience for a virtual machine offer
+# Add a preview audience for a virtual machine offer
On the **Preview** page (select from the left-nav menu in Partner Center), select a limited **Preview audience** for validating your offer before you publish it live to the broader commercial marketplace audience.
marketplace Azure Vm Create Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-properties.md
Title: Configure virtual machine offer properties on Azure Marketplace
-description: Learn how to configure virtual machine offer properties on Azure Marketplace.
+description: Configure virtual machine offer properties on Azure Marketplace.
marketplace Azure Vm Create Resell Csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-resell-csp.md
Title: Resell your offer through Cloud Solution Providers (CSP) on Azure Marketplace
-description: Learn how to resell your offer through Cloud Solution Providers (CSP) on Azure Marketplace.
+description: Resell your offer through Cloud Solution Providers (CSP) on Azure Marketplace.
Last updated 11/05/2020
-# How to resell your offer through CSP
+# Resell your offer through CSP
## Resell through CSP
marketplace Azure Vm Create Using Approved Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-using-approved-base.md
Title: Create an Azure virtual machine offer (VM) from an approved base, Azure Marketplace
-description: Learn how to create a virtual machine (VM) offer from an approved base.
+ Title: Create an Azure virtual machine offer (VM) from an approved base
+description: Learn how to create a virtual machine (VM) offer from an approved base (Azure Marketplace).
Last updated 04/16/2021
-# How to create a virtual machine using an approved base
+# Create a virtual machine using an approved base
This article describes how to use Azure to create a virtual machine (VM) containing a pre-configured, endorsed operating system. If this isn't compatible with your solution, it's possible to [create and configure an on-premises VM](azure-vm-create-using-own-image.md) using an approved operating system.
marketplace Azure Vm Create Using Own Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-using-own-image.md
Title: Create an Azure virtual machine offer on Azure Marketplace using your own image
-description: Learn how to publish a virtual machine offer to Azure Marketplace using your own image.
+description: Publish a virtual machine offer to Azure Marketplace using your own image.
Previously updated : 04/23/2021 Last updated : 06/02/2021
-# How to create a virtual machine using your own image
+# Create a virtual machine using your own image
-This article describes how to create and deploy a user-provided virtual machine (VM) image.
-
-> [!NOTE]
-> Before you start this procedure, review the [technical requirements](marketplace-virtual-machines.md#technical-requirements) for Azure VM offers, including virtual hard disk (VHD) requirements.
-
-To use an approved base image instead, follow the instructions in [Create a VM image from an approved base](azure-vm-create-using-approved-base.md).
-
-## Configure the VM
-
-This section describes how to size, update, and generalize an Azure VM. These steps are necessary to prepare your VM to be deployed on Azure Marketplace.
-
-### Size the VHDs
--
-### Install the most current updates
--
-### Perform more security checks
--
-### Perform custom configuration and scheduled tasks
--
-### Generalize the image
-
-All images in the Azure Marketplace must be reusable in a generic fashion. To achieve this, the operating system VHD must be generalized, an operation that removes all instance-specific identifiers and software drivers from a VM.
+This article describes how to publish a virtual machine (VM) image that you built on your premises.
## Bring your image into Azure
-> [!NOTE]
-> The Azure subscription containing the SIG must be under the same tenant as the publisher account in order to publish. Also, the publisher account must have at least Contributor access to the subscription containing the SIG.
-
-There are three ways to bring your image into Azure:
-
-1. Upload the vhd either:
- 1. to a shared image gallery
- 1. as a shared image in shared image gallery
-1. Upload the vhd to an Azure storage account.
-1. Extract the vhd from a Managed Image (if using image building services).
-
-The following three sections describe these options.
-
-### Option 1: Upload the VHD as shared image gallery
-
-1. Upload vhd(s) to Storage Account.
-2. On the Azure portal, search for **Deploy a custom template**.
-3. Select **Build your own template in the editor**.
-4. Copy the following Azure Resource Manager (ARM) template.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "sourceStorageAccountResourceId": {
- "type": "string",
- "metadata": {
- "description": "Resource ID of the source storage account that the blob vhd resides in."
- }
- },
- "sourceBlobUri": {
- "type": "string",
- "metadata": {
- "description": "Blob Uri of the vhd blob (must be in the storage account provided.)"
- }
- },
- "sourceBlobDataDisk0Uri": {
- "type": "string",
- "metadata": {
- "description": "Blob Uri of the vhd blob (must be in the storage account provided.)"
- }
- },
- "sourceBlobDataDisk1Uri": {
- "type": "string",
- "metadata": {
- "description": "Blob Uri of the vhd blob (must be in the storage account provided.)"
- }
- },
- "galleryName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Shared Image Gallery."
- }
- },
- "galleryImageDefinitionName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Image Definition."
- }
- },
- "galleryImageVersionName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Image Version - should follow <MajorVersion>.<MinorVersion>.<Patch>."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Compute/galleries/images/versions",
- "name": "[concat(parameters('galleryName'), '/', parameters('galleryImageDefinitionName'), '/', parameters('galleryImageVersionName'))]",
- "apiVersion": "2020-09-30",
- "location": "[resourceGroup().location]",
- "properties": {
- "storageProfile": {
- "osDiskImage": {
- "source": {
- "id": "[parameters('sourceStorageAccountResourceId')]",
- "uri": "[parameters('sourceBlobUri')]"
- }
- },
-
- "dataDiskImages": [
- {
- "lun": 0,
- "source": {
- "id": "[parameters('sourceStorageAccountResourceId')]",
- "uri": "[parameters('sourceBlobDataDisk0Uri')]"
- }
- },
- {
- "lun": 1,
- "source": {
- "id": "[parameters('sourceStorageAccountResourceId')]",
- "uri": "[parameters('sourceBlobDataDisk1Uri')]"
- }
- }
- ]
- }
- }
- }
- ]
- }
-
- ```
-
-5. Paste the template into the editor.
-
- :::image type="content" source="media/create-vm/vm-sample-code-screen.png" alt-text="Sample code screen for VM.":::
-
-1. Select **Save**.
-1. Use the parameters in this table to complete the fields in the screen that follows.
+Upload your VHD to an Azure shared image gallery.
-| Parameters | Description |
-| | |
-| sourceStorageAccountResourceId | Resource ID of the source storage account in which the blob vhd resides.<br><br>To get the Resource ID, go to your **Storage Account** on **Azure portal**, go to **Properties**, and copy the **ResourceID** value. |
-| sourceBlobUri | Blob Uri of the OS disk vhd blob (must be in the storage account provided).<br><br>To get the blob URL, go to your **Storage Account** on **Azure portal**, go to your **blob**, and copy the **URL** value. |
-| sourceBlobDataDisk0Uri | Blob Uri of the data disk vhd blob (must be in the storage account provided). If you don't have a data disk, remove this parameter from the template.<br><br>To get the blob URL, go to your **Storage Account** on **Azure portal**, go to your **blob**, and copy the **URL** value. |
-| sourceBlobDataDisk1Uri | Blob Uri of additional data disk vhd blob (must be in the storage account provided). If you don't have additional data disk, remove this parameter from the template.<br><br>To get the blob URL, go to your **Storage Account** on **Azure portal**, go to your **blob**, and copy the **URL** value. |
-| galleryName | Name of the Shared Image Gallery |
-| galleryImageDefinitionName | Name of the Image Definition |
-| galleryImageVersionName | Name of the Image Version to be created, in this format: `<MajorVersion>.<MinorVersion>.<Patch>` |
-|
--
-8. Select **Review + create**. Once validation finishes, select **Create**.
+1. On the Azure portal, search for **Shared image galleries**.
+2. Create or use an existing shared image gallery. We suggest you create a separate Shared image gallery for images being published to Marketplace.
+3. Create or use an existing image definition.
+4. Select **Create a version**.
+5. Choose the region and image version.
+6. If your VHD is not yet uploaded to Azure portal, choose **Storage blobs (VHDs)** as the **Source**, then **Browse**. You can create a **storage account** and **storage container** if you havenΓÇÖt created one before. Upload your VHD.
+7. Select **Review + create**. Once validation finishes, select **Create**.
> [!TIP]
-> Publisher account must have ΓÇ£OwnerΓÇ¥ access to publish the SIG Image. If required, follow the below steps to grant access:
->
-> 1. Go to the Shared Image Gallery (SIG).
-> 2. Select **Access control** (IAM) on the left panel.
-> 3. Select **Add**, then **Add role assignment**.
-> 4. For **Role**, select **Owner**.
-> 5. For **Assign access to**, select **User, group, or service principal**.
-> 6. Enter the Azure email of the person who will publish the image.
-> 7. Select **Save**.<br><br>
-> :::image type="content" source="media/create-vm/add-role-assignment.png" alt-text="The add role assignment window is shown.":::
-
-### Option 2: Upload the VHD to a Storage Account
+> Publisher account must have ΓÇ£OwnerΓÇ¥ access to publish the SIG Image. If required, follow the steps in the following section, **Set the right permissions**, to grant access.
-Configure and prepare the VM to be uploaded as described in [Prepare a Windows VHD or VHDX to upload to Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md) or [Create and Upload a Linux VHD](../virtual-machines/linux/create-upload-generic.md).
+## Set the right permissions
-### Option 3: Extract the VHD from Managed Image (if using image building services)
+If your Partner Center account is the owner of the subscription hosting Shared Image Gallery, nothing further is needed for permissions.
-If you are using an image building service like [Packer](https://www.packer.io/), you may need to extract the VHD from the image. There is no direct way to do this. You will have to create a VM and extract the VHD from the VM disk.
+If you only have read access to the subscription, use one of the following two options.
-## Create the VM on the Azure portal
+### Option one ΓÇô Ask the owner to grant owner permission
-Follow these steps to create the base VM image on the [Azure portal](https://ms.portal.azure.com/).
+Steps for the owner to grant owner permission:
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
-2. Select **Virtual machines**.
-3. Select **+ Add** to open the **Create a virtual machine** screen.
-4. Select the image from the dropdown list or select **Browse all public and private images** to search or browse all available virtual machine images.
-5. To create a **Gen 2** VM, go to the **Advanced** tab and select the **Gen 2** option.
-
- :::image type="content" source="media/create-vm/vm-gen-option.png" alt-text="Select Gen 1 or Gen 2.":::
-
-6. Select the size of the VM to deploy.
-
- :::image type="content" source="media/create-vm/create-virtual-machine-sizes.png" alt-text="Select a recommended VM size for the selected image.":::
-
-7. Provide the other required details to create the VM.
-8. Select **Review + create** to review your choices. When the **Validation passed** message appears, select **Create**.
+1. Go to the Shared Image Gallery (SIG).
+2. Select **Access control** (IAM) on the left panel.
+3. Select **Add**, then **Add role assignment**.<br>
+ :::image type="content" source="media/create-vm/add-role-assignment.png" alt-text="The add role assignment window is shown.":::
+1. For **Role**, select **Owner**.
+1. For **Assign access to**, select **User, group, or service principal**.
+1. For **Select**, enter the Azure email of the person who will publish the image.
+1. Select **Save**.
-Azure begins provisioning the virtual machine you specified. Track its progress by selecting the **Virtual Machines** tab in the left menu. After it's created the status of Virtual Machine changes to **Running**.
+### Option Two ΓÇô Run a command
-## Connect to your VM
+Ask the owner to run either one of these commands (in either case, use the SusbscriptionId of the subscription where you created the Shared image gallery).
-Refer to the following documentation to connect to your [Windows](../virtual-machines/windows/connect-logon.md) or [Linux](../virtual-machines/linux/ssh-from-windows.md#connect-to-your-vm) VM.
+```azurecli
+az login
+az provider register --namespace Microsoft.PartnerCenterIngestion --subscription {subscriptionId}
+```
+
+```powershell
+Connect-AzAccount
+Select-AzSubscription -SubscriptionId {subscriptionId}
+Register-AzResourceProvider -ProviderNamespace Microsoft.PartnerCenterIngestion
+```
+> [!NOTE]
+> You donΓÇÖt need to generate SAS URIs as you can now publish a SIG Image on Partner Center. However, if you still need to refer to the SAS URI generation steps, see [How to generate a SAS URI for a VM image](azure-vm-get-sas-uri.md).
## Next steps -- [Test your VM image](azure-vm-image-test.md) to ensure it meets Azure Marketplace publishing requirements. This is optional.-- If you don't want to test your VM image, sign in to [Partner Center](https://partner.microsoft.com/) and publish the SIG Image (option #1).-- If you followed option #2 or #3, [Generate the SAS URI](azure-vm-get-sas-uri.md).
+- [Test your VM image](azure-vm-image-test.md) to ensure it meets Azure Marketplace publishing requirements (optional).
+- If you don't want to test your VM image, sign in to [Partner Center](https://partner.microsoft.com/) and publish the SIG Image.
- If you encountered difficulty creating your new Azure-based VHD, see [VM FAQ for Azure Marketplace](azure-vm-create-faq.md).
marketplace Azure Vm Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create.md
Title: Create a virtual machine offer on Azure Marketplace.
-description: Learn how to create a virtual machine offer in the Microsoft commercial marketplace.
+description: Create a virtual machine offer on Azure Marketplace.
Last updated 04/08/2021
-# How to create a virtual machine offer on Azure Marketplace
+# Create a virtual machine offer on Azure Marketplace
This article describes how to create an Azure virtual machine offer for [Azure Marketplace](https://azuremarketplace.microsoft.com/). It addresses both Windows-based and Linux-based virtual machines that contain an operating system, a virtual hard disk (VHD), and up to 16 data disks.
Select **Save draft** before continuing to the next tab in the left-nav menu, **
## Next steps -- [How to configure virtual machine offer properties](azure-vm-create-properties.md)
+- [Configure virtual machine offer properties](azure-vm-create-properties.md)
- [Offer listing best practices](gtm-offer-listing-best-practices.md)
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
Title: Generate a SAS URI for a VM image - Azure Marketplace
+ Title: Generate a SAS URI for a VM image
description: Generate a shared access signature (SAS) URI for a virtual hard disks (VHD) in Azure Marketplace.
Last updated 04/21/2021
-# How to generate a SAS URI for a VM image
+# Generate a SAS URI for a VM image
> [!NOTE]
-> You donΓÇÖt need a SAS URI to publish your VM. You can simply share an image in Parter Center. Refer to [Create a virtual machine using an approved base](./azure-vm-create-using-approved-base.md) or [Create a virtual machine using your own image](./azure-vm-create-using-own-image.md) instructions.
+> You donΓÇÖt need a SAS URI to publish your VM. You can simply share an image in Parter Center. Refer to [Create a virtual machine using an approved base](azure-vm-create-using-approved-base.md) or [Create a virtual machine using your own image](azure-vm-create-using-own-image.md) instructions.
Generating SAS URIs for your VHDs has these requirements:
Check the SAS URI before publishing it on Partner Center to avoid any issues rel
## Next steps -- If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md).
+- If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md)
- [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) - [Create a virtual machine offer on Azure Marketplace](azure-vm-create.md)
marketplace Azure Vm Image Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-image-test.md
Title: Test an Azure virtual machine image for Azure Marketplace
-description: Learn how to test and submit an Azure virtual machine offer in Azure Marketplace.
+description: Test and submit an Azure virtual machine offer in Azure Marketplace.
marketplace Cloud Partner Portal Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-partner-portal-migration-faq.md
You can continue doing business in Partner Center:
| New purchases and deployments | No changes. Your customers can continue purchasing and deploying your offers with no interruptions. | | Payouts | Any purchases and deployments will continue to be paid out to you as normal. Learn more about [Getting paid in the commercial marketplace](/partner-center/marketplace-get-paid?context=/azure/marketplace/context/context). | | API integrations with existing [Cloud Partner Portal APIs](cloud-partner-portal-api-overview.md) | Existing Cloud Partner Portal APIs are still supported and your existing integrations still work. Learn more at [Will the Cloud Partner Portal REST APIs be supported?](#are-the-cloud-partner-portal-rest-apis-still-supported) |
-| Analytics | You can continue to monitor sales, evaluate performance, and optimize your offers in the commercial marketplace by viewing analytics in Partner Center. There are differences between how analytics reports display in CPP and Partner Center. For example, **Seller Insights** in CPP has an **Orders & Usage** tab that displays data for usage-based offers and non-usage-based offers, while in Partner Center the **Orders** page has a separate tab for SaaS Offers. Learn more at [Access analytic reports for the commercial marketplace in Partner Center](partner-center-portal/analytics.md). |
+| Analytics | You can continue to monitor sales, evaluate performance, and optimize your offers in the commercial marketplace by viewing analytics in Partner Center. There are differences between how analytics reports display in CPP and Partner Center. For example, **Seller Insights** in CPP has an **Orders & Usage** tab that displays data for usage-based offers and non-usage-based offers, while in Partner Center the **Orders** page has a separate tab for SaaS Offers. Learn more at [Access analytic reports for the commercial marketplace in Partner Center](analytics.md). |
||| ## Do I need to create a new account to manage my offers in Partner Center?
For the offer types supported in Partner Center, all offers were moved regardles
| | | | | SaaS | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Plan a SaaS offer for the commercial marketplace](plan-saas-offer.md). | | Virtual Machine | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Plan a virtual machine offer](marketplace-virtual-machines.md). |
-| Azure application | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create an Azure application offer](create-new-azure-apps-offer.md). |
+| Azure application | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create an Azure application offer](azure-app-offer-setup.md). |
| Dynamics 365 Business Central | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 Business Central offer](dynamics-365-business-central-offer-setup.md). | | Dynamics 365 for Customer Engagement & PowerApps | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 for Customer Engagement & PowerApps offer](dynamics-365-customer-engage-offer-setup.md). | | Dynamics 365 for Operations | Yes | Sign in to Partner Center to create new offers and manage offers that were created in Cloud Partner Portal. Learn more at [Create a Dynamics 365 for Operations offer](partner-center-portal/create-new-operations-offer.md). |
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/customer-dashboard.md
The Customers page filters are applied at the Customers page level. You can sele
## Next steps -- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](./summary-dashboard.md). - For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](./orders-dashboard.md). - For virtual machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](./usage-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/determine-your-listing-type.md
You can charge a flat fee for these offer types:
The following offer types support usage-based pricing: -- Azure Application (Managed app) offer support for metered billing. For more information, see [Managed application metered billing](partner-center-portal/azure-app-metered-billing.md).
+- Azure Application (Managed app) offer support for metered billing. For more information, see [Managed application metered billing](marketplace-metering-service-apis.md).
- SaaS offers support for Metered billing and per user (per seat) pricing. For more information about metered billing, see [Metered billing for SaaS using the commercial marketplace metering service](partner-center-portal/saas-metered-billing.md). - Azure virtual machine offers support for **Per core**, **Per core size**, and **Per market and core size** pricing. These options are priced per hour and billed monthly.
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/downloads-dashboard.md
+
+ Title: Downloads dashboard in Microsoft commercial marketplace analytics on Partner Center - Azure Marketplace
+description: Learn how to access download requests for your marketplace offers.
+++ Last updated : 08/21/2020++++
+# Downloads dashboard in commercial marketplace analytics
+
+This article provides information on the Downloads dashboard in Partner Center. This dashboard displays a list of your download requests over the last 30 days.
+
+To access the Downloads dashboard, open the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** dashboard under the commercial marketplace.
+
+>[!NOTE]
+> For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
+
+## Downloads dashboard
+
+The **Downloads** dashboard of the **Analyze** menu displays requests for any downloads that contain over 1000 rows of customer or order data.
+
+You will receive a pop-up notification containing a link to the **Downloads** dashboard whenever you request a download with over 1000 rows of data. These data downloads will be available for a 30-day period and then removed.
+
+## Lifetime export of commercial marketplace Analytics reports
+
+On the Downloads page, end user can do the following:
+
+- Lifetime export of commercial marketplace Analytics reports in csv and tsv format.
+- Export of commercial marketplace Analytics reports for any date range.
+- Export of commercial marketplace Analytics reports for 6- or 12-month duration.
+
+Support for Lifetime Export Capability of Analytics reports:
+
+| Report | Lifetime export | Any duration based on date |
+| - | - | - |
+| Orders | ![Green check mark](media/downloads-dashboard/check-green-yes.png) | ![Green check mark](media/downloads-dashboard/check-green-yes.png) |
+| Customers | ![Green check mark](media/downloads-dashboard/check-green-yes.png) | ![Green check mark](media/downloads-dashboard/check-green-yes.png) |
+| Marketplace Insights | ![Green check mark](media/downloads-dashboard/check-green-yes.png) | ![Green check mark](media/downloads-dashboard/check-green-yes.png) |
+| Usage | ![Black X mark](media/downloads-dashboard/check-black-no.png) | Maximum of one year |
+|
+
+A user can schedule asynchronous downloads of reports from the Downloads section:
+
+[![scheduling asynchronous downloads of reports from the Downloads section](media/downloads-dashboard/download-reports.png)](media/downloads-dashboard/download-reports.png#lightbox)
+
+## Next steps
+
+- For an overview of analytics reports available in the Partner Center commercial marketplace, see [Analytics for the commercial marketplace in Partner Center](analytics.md).
+- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary Dashboard in commercial marketplace analytics](summary-dashboard.md).
+- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md).
+- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For detailed information about your customers, including growth trends, see [Customer Dashboard in commercial marketplace analytics](customer-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings and reviews dashboard in commercial marketplace analytics](ratings-reviews.md).
+- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/insights-dashboard.md
This table provides a list view of the page visits and the calls to action of yo
## Next steps -- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).-- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](./orders-dashboard.md).-- For virtual machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](./usage-dashboard.md).-- For detailed information about your customers, including growth trends, see [Customer dashboard in commercial marketplace analytics](./customer-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).-- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
+- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](orders-dashboard.md).
+- For virtual machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For detailed information about your customers, including growth trends, see [Customer dashboard in commercial marketplace analytics](customer-dashboard.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
+- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](analytics-faq.md).
marketplace Iot Edge Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-checklist.md
description: Learn about the specific certification requirements for publishing
--++ Last updated 05/21/2021
marketplace Iot Edge Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-offer-listing.md
description: Configure IoT Edge Module offer listing details on Azure Marketplac
--++ Last updated 05/21/2021
marketplace Iot Edge Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-offer-setup.md
description: Create a IoT Edge Module offer on Azure Marketplace.
--++ Last updated 05/21/2021
marketplace Iot Edge Plan Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-plan-availability.md
description: Set plan availability for an IoT Edge Module offer on Azure Marketp
--++ Last updated 05/21/2021
marketplace Iot Edge Plan Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-plan-listing.md
description: Set up plan listing details for an IoT Edge Module offer in Azure M
--++ Last updated 05/21/2021
marketplace Iot Edge Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-plan-overview.md
description: Create and edit plans for an IoT Edge Module offer on Azure Marketp
--++ Last updated 05/21/2021
marketplace Iot Edge Plan Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-plan-setup.md
description: Set up plans for an IoT Edge Module offer on Azure Marketplace.
--++ Last updated 05/21/2021
marketplace Iot Edge Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-plan-technical-configuration.md
description: Set plan technical configuration for an IoT Edge Module offer on Az
--++ Last updated 05/21/2021
marketplace Iot Edge Preview Audience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-preview-audience.md
description: Set the preview audience for an IoT Edge Module offer on Azure Mark
--++ Last updated 05/21/2021
marketplace Iot Edge Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-properties.md
description: Configure IoT Edge Module offer properties on Azure Marketplace.
--++ Last updated 05/21/2021
marketplace Iot Edge Technical Asset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/iot-edge-technical-asset.md
description: Prepare IoT Edge module technical assets on Azure Marketplace.
--++ Last updated 05/21/2021
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-apis-guide.md
The activities below are not sequential. The activity you use is dependent on yo
| <center>Activity | ISV sales activities | Corresponding Marketplace API | Corresponding Marketplace UI | | | | | |
-| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
+| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
| <center>**2. Demand Generation**<br><img src="medi) | Product Promotion<br>Lead nurturing<br>Eval, trial & PoC<br>Azure Marketplace and AppSource<br>PC Marketplace Insights<br>PC Co-Sell Opportunities | | <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](https://apidocs.microsoft.com/services/partnercenter) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) | | <center>**4. Sale**<br><img src="medi)<br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
-| <center>**5. Maintenance**<br><img src="medi)<br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list)<br>[Payment Services Providers API](https://revapi.developer.azure-api.net/) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
+| <center>**5. Maintenance**<br><img src="medi)<br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list)<br>[Payment Services Providers API](https://revapi.developer.azure-api.net/) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
| <center>**6. Contract End**<br><img src="medi)<br>AMA/VM's: auto-renew | Renew or<br>Terminate<br>PC Marketplace Analytics | |
marketplace Marketplace Commercial Transaction Capabilities And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
The transact publishing option is currently supported for the following offer ty
### Metered billing
-The _Marketplace metering service_ lets you specify pay-as-you-go (consumption-based) charges in addition to monthly or annual charges included in the contract (entitlement). You can charge usage costs for marketplace metering service dimensions that you specify such as bandwidth, tickets, or emails processed. For more information about metered billing for SaaS offers, see [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md). For more information about metered billing for Azure Application offers, see [Managed application metered billing](./partner-center-portal/azure-app-metered-billing.md).
+The _Marketplace metering service_ lets you specify pay-as-you-go (consumption-based) charges in addition to monthly or annual charges included in the contract (entitlement). You can charge usage costs for marketplace metering service dimensions that you specify such as bandwidth, tickets, or emails processed. For more information about metered billing for SaaS offers, see [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md). For more information about metered billing for Azure Application offers, see [Managed application metered billing](marketplace-metering-service-apis.md).
### Billing infrastructure costs
Depending on the transaction option used, subscription charges are as follows:
- **Bring your own license** (BYOL): If an offer is listed in the commercial marketplace, any applicable charges for software licenses are managed directly between the publisher and customer. Microsoft only charges applicable Azure infrastructure usage fees to the customerΓÇÖs Azure subscription account. - **Subscription pricing**: Software license fees are presented as a monthly or annual, recurring subscription fee billed as a flat rate or per-seat. Recurrent subscription fees are not prorated for mid-term customer cancellations, or unused services. Recurrent subscription fees may be prorated if the customer upgrades or downgrades their subscription in the middle of the subscription term. - **Usage-based pricing**: For Azure Virtual Machine offers, customers are charged based on the extent of their use of the offer. For Virtual Machine images, customers are charged an hourly Azure Marketplace fee, as set by the publisher, for use of virtual machines deployed from the VM images. The hourly fee may be uniform or varied across virtual machine sizes. Partial hours are charged by the minute. Plans are billed monthly.-- **Metered pricing**: For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](./partner-center-portal/marketplace-metering-service-apis.md) to bill for consumption based on the custom meter dimensions they configure. These changes are in addition to monthly or annual charges included in the contract (entitlement). Examples of custom meter dimensions are bandwidth, tickets, or emails processed. Publishers can define one or more metered dimensions for each plan but a maximum of 30 per offer. Publishers are responsible for tracking individual customer usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.
+- **Metered pricing**: For Azure Application offers and SaaS offers, publishers can use the [Marketplace metering service](marketplace-metering-service-apis.md) to bill for consumption based on the custom meter dimensions they configure. These changes are in addition to monthly or annual charges included in the contract (entitlement). Examples of custom meter dimensions are bandwidth, tickets, or emails processed. Publishers can define one or more metered dimensions for each plan but a maximum of 30 per offer. Publishers are responsible for tracking individual customer usage, with each meter defined in the offer. Events should be reported to Microsoft within an hour. Microsoft charges customers based on the usage information reported by publishers for the applicable billing period.
- **Free trial**: No charge for software licenses that range from 30 days up to six months, depending on the offer type. If publishers provide a free trial on multiple plans within the same offer, customers can switch to a free trial on another plan, but the trial period does not restart. For virtual machine offers, customers are charged Azure infrastructure costs for using the offer during a trial period. Upon expiration of the trial period, customers are automatically charged for the last plan they tried based on standard rates unless they cancel before the end of the trial period. > [!NOTE]
marketplace Marketplace Faq Publisher Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-faq-publisher-guide.md
You can also [join our active community forum](https://www.microsoftpartnercommu
### What analytics are available to my organization from the commercial marketplace?
-We provide reporting on your offers within the commercial marketplace. To access data on customers, orders, store engagement, and more, go to [Analytics for the commercial marketplace in Partner Center](partner-center-portal/analytics.md).
+We provide reporting on your offers within the commercial marketplace. To access data on customers, orders, store engagement, and more, go to [Analytics for the commercial marketplace in Partner Center](analytics.md).
### What is Microsoft's relationship with my customers?
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-geo-availability-currencies.md
Individual prices (which, depending on how they were set, may have been influenc
For details on how to enter prices for specific offer types, refer to these articles: -- [Create an Azure application offer](create-new-azure-apps-offer.md)
+- [Create an Azure application offer](azure-app-offer-setup.md)
- [Create an Azure container offer](azure-container-offer-setup.md) - [Create an Azure virtual machine offer](azure-vm-create.md) - [Create a consulting service offer](./create-consulting-service-offer.md)
marketplace Marketplace Managed Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-managed-apps.md
If you haven't already done so, learn how to [Grow your cloud business with Azur
To register for and start working in Partner Center: - [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer.-- See [Create an Azure application offer](./create-new-azure-apps-offer.md) for more information.
+- See [Create an Azure application offer](azure-app-offer-setup.md) for more information.
marketplace Marketplace Metering Service Apis Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-metering-service-apis-faq.md
+
+ Title: Metering service APIs FAQ - Microsoft commercial marketplace
+description: Frequently asked questions about the metering service APIs for SaaS offers in Microsoft AppSource and Azure Marketplace.
+++ Last updated : 06/01/2020++++
+# Marketplace metered billing APIs - FAQ
+
+Once a customer subscribes to a SaaS service, or Azure Application with a Managed Apps plan, with metered billing, you will track consumption for each billing dimension being used. If the consumption exceeds the included quantities set for the term selected by the customer, your service will emit usage events to Microsoft.
+
+## For both SaaS offers and Managed apps
+
+### How often is it expected to emit usage?
+
+Ideally, you are expected to emit usage every hour for the past hour, only if there is usage in the previous hour.
+
+### Is there a maximal period between one emission and the next one?
+
+There is no such limitation. Only emit usage as it occurs. For example, if you only need to submit one unit of usage per subscription lifetime, you can do it.
+
+### What is the maximum delay between the time an event occurs, and the time a usage event is emitted to Microsoft?
+
+Ideally, the usage event is emitted every hour for events that occurred in the past hour. However, delays are expected. The maximum delay allowed is 24 hours, after which usage events will not be accepted. The best practice is to collect hourly usage and to emit is as one event at the end of the hour.
+
+For example, if a usage event occurs at 1 PM on a day, you have until 1 PM on the next day to emit a usage event associated with this event. In case the system emitting usage is down, it can recover and then send the usage event for the hour interval in which the usage happened, without loss of fidelity.
+
+If 24 hours have passed after the actual usage, you can still emit the consumed units with later usage events. However, this practice may hurt the credibility of the billing event reports for the end customer. We recommend that you avoid sending meter emission once a day/week/month. It will be harder to understand the actual usage by a customer, and to resolve issues or questions that might be raised regarding usage events.
+
+Another reason to send usage every hour is to avoid situations that the user cancels the subscription before the publisher sends the daily/weekly/monthly emission event.
+
+### What happens when you send more than one usage event in the same hour?
+
+Only one usage event is accepted for the one-hour interval. The hour interval starts at minute 0 and ends at minute 59. If more than one usage event is emitted for the same hour, any subsequent usage events are dropped as duplicates.
+
+### What happens when the customer cancels the purchase within the time allowed by the cancellation policy?
+
+The flat-rate amount will not be charged but the overage usage will be.
+
+### Can custom meter plans be used for one-time payments?
+
+Yes, you can define a custom dimension as one unit of one-time payment and emit it only once for each customer.
+
+### Can custom meter plans be used to tiered pricing model?
+
+Yes, it can be implemented with each custom dimension representing a single price tier.
+
+For example, Contoso wants to charge $0.5 per email for the first 1000 emails, $0.4 per email between 1000 and 5000 emails, and $0.2 per email for above 5000 emails. They can define three custom dimensions, that correspond to the three email pricing tiers. Emit units of the first dimension for as long as the number of emails stays below 1000, then units of the second dimension when the number of emails is between 1000 and 5000, and finally, units of the third dimension for above 5000 emails.
+
+### What happens if the Marketplace metering service has an outage?
+
+If the ISV sends a custom meter and receives an error, that may have been caused by an issue on Microsoft side (usually in the case similar events were accepted before without an error), then the ISV should wait and retry the emission.
+
+If the error persists, then resubmit that custom meter the next hour (accumulate the quantity). Continue this process until a non-error response is received.
+
+## For SaaS offers only
+
+### What happens when you emit usage for a SaaS subscription that has been unsubscribed already?
+
+Any usage event emitted to the marketplace platform will not be accepted after a SaaS subscription has been deleted.
+
+Usage can be emitted only for subscriptions in the Subscribed status (and not for subscriptions in `PendingFulfillmentStart`, `Suspended`, or `Unsubscribed` status).
+
+The only exception is reporting usage for the time that was before the SaaS subscription has been canceled.
+
+For example, the customer canceled the SaaS subscription today at 3 pm. Now is 5 pm, the publisher can still emit usage for the period between 6 pm yesterday and 3 pm today for this SaaS subscription.
+
+### Can you get a list of all SaaS subscriptions, including active and unsubscribed subscriptions?
+
+Yes, when you call the [GET Subscriptions List API](partner-center-portal/pc-saas-fulfillment-api-v2.md#subscription-apis) as it includes a list of all SaaS subscriptions. The status field in the response for each SaaS subscription captures whether the subscription is active or unsubscribed.
+
+### Are the start and end dates of SaaS subscription term and overage usage emission connected?
+
+Overage events can be emitted at any point of time for existing SaaS subscription in *Subscribed* status. It's the responsibility of the publisher to emit usage events based on the policy defined in the billing plan. The overage must be calculated based on the dates defined in the term of the SaaS subscription.
+
+For example, if the publisher defines a SaaS plan that includes 1000 emails for $100 in monthly flat rate, every email above 1000 is billed $1 via custom dimension.
+
+When the customer buys and activates the subscription on January 6, the 1000 email included in the flat rate will be counted starting on this day. So if until February 5 (end of the first month of the subscription) only 900 emails are sent, the customer will pay the fixed rate only for the first month of this subscription, and no overage usage events will be emitted by the publisher between January 6 and February 5. On February 6, the subscription will be automatically renewed and the count will start again. If on February 15 the customer reached 1000 emails sent, the rest of the emails sent until March 5 will be charged as overage ($1 per email) based on the overage usage events emitted by the publisher.
+
+## Next steps
+
+- For more information, see [Marketplace metering service APIs](./marketplace-metering-service-apis.md).
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-metering-service-apis.md
+
+ Title: Metering service APIs - Microsoft commercial marketplace
+description: The usage event API allows you to emit usage events for SaaS offers in Microsoft AppSource and Azure Marketplace.
+++ Last updated : 05/26/2020++++
+# Marketplace metered billing APIs
+
+The metered billing APIs should be used when the publisher creates custom metering dimensions for an offer to be published in Partner Center. Integration with the metered billing APIs is required for any purchased offer that has one or more plans with custom dimensions to emit usage events.
+
+For more information on creating custom metering dimensions for SaaS, see [SaaS metered billing](partner-center-portal/saas-metered-billing.md).
+
+For more information on creating custom metering dimensions for an Azure Application offer with a Managed app plan, see [Configure your Azure application offer setup details](azure-app-offer-setup.md#configure-your-azure-application-offer-setup-details).
+
+## Enforcing TLS 1.2 Note
+
+TLS version 1.2 version is enforced as the minimal version for HTTPS communications. Make sure you use this TLS version in your code. TLS version 1.0 and 1.1 are deprecated and connection attempts will be refused.
+
+## Metered billing single usage event
+
+The usage event API should be called by the publisher to emit usage events against an active resource (subscribed) for the plan purchased by the specific customer. The usage event is emitted separately for each custom dimension of the plan defined by the publisher when publishing the offer.
+
+Only one usage event can be emitted for each hour of a calendar day. For example, at 8:15am today, you can emit one usage event. If this event is accepted, the next usage event will be accepted from 9:00 am today. If you send an additional event between 8:15 and 8:59:59 today, it will be rejected as a duplicate. You should accumulate all units consumed in an hour and then emit it in a single event.
+
+Only one usage event can be emitted for each hour of a calendar day per resource. If more than one unit is consumed in an hour, then accumulate all the units consumed in the hour and then emit it in a single event. Usage events can only be emitted for the past 24 hours. If you emit a usage event at any time between 8:00 and 8:59:59 (and it is accepted) and send an additional event for the same day between 8:00 and 8:59:59, it will be rejected as a duplicate.
+
+**POST**: `https://marketplaceapi.microsoft.com/api/usageEvent?api-version=<ApiVersion>`
+
+*Query parameters:*
+
+| Paramter | Recommendation |
+| - | - |
+| `ApiVersion` | Use 2018-08-31. |
+| | |
+
+*Request headers:*
+
+| Content-type | Use `application/json` |
+| | - |
+| `x-ms-requestid` | Unique string value for tracking the request from the client, preferably a GUID. If this value is not provided, one will be generated and provided in the response headers. |
+| `x-ms-correlationid` | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated and provided in the response headers. |
+| `authorization` | A unique access token that identifies the ISV that is making this API call. The format is `"Bearer <access_token>"` when the token value is retrieved by the publisher as explained for <br> <ul> <li> SaaS in [Get the token with an HTTP POST](partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post). </li> <li> Managed application in [Authentication strategies](marketplace-metering-service-authentication.md). </li> </ul> |
+| | |
+
+*Request body example:*
+
+```json
+{
+ "resourceId": <guid>, // unique identifier of the resource against which usage is emitted.
+ "quantity": 5.0, // how many units were consumed for the date and hour specified in effectiveStartTime, must be greater than 0, can be integer or float value
+ "dimension": "dim1", // custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14", // time in UTC when the usage event occurred, from now and until 24 hours back
+ "planId": "plan1", // id of the plan purchased for the offer
+}
+```
+
+>[!NOTE]
+>`resourceId` has different meaning for SaaS app and for Managed app emitting custom meter.
+
+For Azure Application Managed Apps plans, the `resourceId` is the Managed App `resource group Id`. An example script for fetching it can be found in [using the Azure-managed identities token](./marketplace-metering-service-authentication.md#using-the-azure-managed-identities-token).
+
+For SaaS offers, the `resourceId` is the SaaS subscription ID. For more details on SaaS subscriptions, see [list subscriptions](partner-center-portal/pc-saas-fulfillment-api-v2.md#get-list-of-all-subscriptions).
+
+### Responses
+
+Code: 200<br>
+OK. The usage emission was accepted and recorded on Microsoft side for further processing and billing.
+
+Response payload example:
+
+```json
+{
+ "usageEventId": <guid>, // unique identifier associated with the usage event in Microsoft records
+ "status": "Accepted" // this is the only value in case of single usage event
+ "messageTime": "2020-01-12T13:19:35.3458658Z", // time in UTC this event was accepted
+ "resourceId": <guid>, // unique identifier of the resource against which usage is emitted. For SaaS it's the subscriptionId.
+ "quantity": 5.0, // amount of emitted units as recorded by Microsoft
+ "dimension": "dim1", // custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14", // time in UTC when the usage event occurred, as sent by the ISV
+ "planId": "plan1", // id of the plan purchased for the offer
+}
+```
+
+Code: 400 <br>
+Bad request.
+
+* Missing or invalid request data provided.
+* `effectiveStartTime` is more than 24 hours in the past. Event has expired.
+* SaaS subscription is not in Subscribed status.
+
+Response payload example:
+
+```json
+{
+ "message": "One or more errors have occurred.",
+ "target": "usageEventRequest",
+ "details": [
+ {
+ "message": "The resourceId is required.",
+ "target": "ResourceId",
+ "code": "BadArgument"
+ }
+ ],
+ "code": "BadArgument"
+}
+```
+
+Code: 403<br>
+
+Forbidden. The authorization token isn't provided, is invalid or expired. Or the request is attempting to access a subscription for an offer that was published with a different Azure AD App ID from the one used to create the authorization token.
+
+Code: 409<br>
+Conflict. A usage event has already been successfully reported for the specified resource ID, effective usage date and hour.
+
+Response payload example:
+
+```json
+{
+ "additionalInfo": {
+ "acceptedMessage": {
+ "usageEventId": "<guid>", //unique identifier associated with the usage event in Microsoft records
+ "status": "Duplicate",
+ "messageTime": "2020-01-12T13:19:35.3458658Z",
+ "resourceId": "<guid>", //unique identifier of the resource against which usage is emitted.
+ "quantity": 1.0,
+ "dimension": "dim1",
+ "effectiveStartTime": "2020-01-12T11:03:28.14Z",
+ "planId": "plan1"
+ }
+ },
+ "message": "This usage event already exist.",
+ "code": "Conflict"
+}
+```
+
+## Metered billing batch usage event
+
+The batch usage event API allows you to emit usage events for more than one purchased resource at once. It also allows you to emit several usage events for the same resource as long as they are for different calendar hours. The maximal number of events in a single batch is 25.
+
+**POST:** `https://marketplaceapi.microsoft.com/api/batchUsageEvent?api-version=<ApiVersion>`
+
+*Query parameters:*
+
+| Parameter | Recommendation |
+| - | -- |
+| `ApiVersion` | Use 2018-08-31. |
+
+*Request headers:*
+
+| Content-type | Use `application/json` |
+| | |
+| `x-ms-requestid` | Unique string value for tracking the request from the client, preferably a GUID. If this value is not provided, one will be generated, and provided in the response headers. |
+| `x-ms-correlationid` | Unique string value for operation on the client. This parameter correlates all events from client operation with events on the server side. If this value isn't provided, one will be generated, and provided in the response headers. |
+| `authorization` | A unique access token that identifies the ISV that is making this API call. The format is `Bearer <access_token>` when the token value is retrieved by the publisher as explained for <br> <ul> <li> SaaS in [Get the token with an HTTP POST](partner-center-portal/pc-saas-registration.md#get-the-token-with-an-http-post). </li> <li> Managed application in [Authentication strategies](./marketplace-metering-service-authentication.md). </li> </ul> |
+| | |
++
+*Request body example:*
+
+```json
+{
+ "request": [ // list of usage events for the same or different resources of the publisher
+ { // first event
+ "resourceId": "<guid1>", // Unique identifier of the resource against which usage is emitted.
+ "quantity": 5.0, // how many units were consumed for the date and hour specified in effectiveStartTime, must be greater than 0, can be integer or float value
+ "dimension": "dim1", //Custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14",//Time in UTC when the usage event occurred, from now and until 24 hours back
+ "planId": "plan1", // id of the plan purchased for the offer
+ },
+ { // next event
+ "resourceId": "<guid2>",
+ "quantity": 39.0,
+ "dimension": "email",
+ "effectiveStartTime": "2018-11-01T23:33:10
+ "planId": "gold", // id of the plan purchased for the offer
+ }
+ ]
+}
+```
+
+>[!NOTE]
+>`resourceId` has different meaning for SaaS app and for Managed app emitting custom meter.
+
+For Azure Application Managed Apps plans, the `resourceId` is the Managed App `resource group Id`. An example script for fetching it can be found in [using the Azure-managed identities token](marketplace-metering-service-authentication.md#using-the-azure-managed-identities-token).
+
+For SaaS offers, the `resourceId` is the SaaS subscription ID. For more details on SaaS subscriptions, see [list subscriptions](partner-center-portal/pc-saas-fulfillment-api-v2.md#get-list-of-all-subscriptions).
+
+### Responses
+
+Code: 200<br>
+OK. The batch usage emission was accepted and recorded on Microsoft side for further processing and billing. The response list is returned with status for each individual event in the batch. You should iterate through the response payload to understand the responses for each individual usage event sent as part of the batch event.
+
+Response payload example:
+
+```json
+{
+ "count": 2, // number of records in the response
+ "result": [
+ { // first response
+ "usageEventId": "<guid>", // unique identifier associated with the usage event in Microsoft records
+ "status": "Accepted" // see list of possible statuses below,
+ "messageTime": "2020-01-12T13:19:35.3458658Z", // Time in UTC this event was accepted by Microsoft,
+ "resourceId": "<guid1>", // unique identifier of the resource against which usage is emitted.
+ "quantity": 5.0, // amount of emitted units as recorded by Microsoft
+ "dimension": "dim1", // custom dimension identifier
+ "effectiveStartTime": "2018-12-01T08:30:14",// time in UTC when the usage event occurred, as sent by the ISV
+ "planId": "plan1", // id of the plan purchased for the offer
+ },
+ { // second response
+ "status": "Duplicate",
+ "messageTime": "0001-01-01T00:00:00",
+ "error": {
+ "additionalInfo": {
+ "acceptedMessage": {
+ "usageEventId": "<guid>",
+ "status": "Duplicate",
+ "messageTime": "2020-01-12T13:19:35.3458658Z",
+ "resourceId": "<guid2>",
+ "quantity": 1.0,
+ "dimension": "email",
+ "effectiveStartTime": "2020-01-12T11:03:28.14Z",
+ "planId": "gold"
+ }
+ },
+ "message": "This usage event already exist.",
+ "code": "Conflict"
+ },
+ "resourceId": "<guid2>",
+ "quantity": 1.0,
+ "dimension": "email",
+ "effectiveStartTime": "2020-01-12T11:03:28.14Z",
+ "planId": "gold"
+ }
+ ]
+}
+```
+
+Description of status code referenced in `BatchUsageEvent` API response:
+
+| Status code | Description |
+| - | -- |
+| `Accepted` | Accepted. |
+| `Expired` | Expired usage. |
+| `Duplicate` | Duplicate usage provided. |
+| `Error` | Error code. |
+| `ResourceNotFound` | The usage resource provided is invalid. |
+| `ResourceNotAuthorized` | You are not authorized to provide usage for this resource. |
+| `InvalidDimension` | The dimension for which the usage is passed is invalid for this offer/plan. |
+| `InvalidQuantity` | The quantity passed is lower or equal to 0. |
+| `BadArgument` | The input is missing or malformed. |
+
+Code: 400<br>
+Bad request. The batch contained more than 25 usage events.
+
+Code: 403<br>
+Forbidden. The authorization token isn't provided, is invalid or expired. Or the request is attempting to access a subscription for an offer that was published with a different Azure AD App ID from the one used to create the authorization token.
+
+## Development and testing best practices
+
+To test the custom meter emission, implement the integration with metering API, create a plan for your published SaaS offer with custom dimensions defined in it with zero price per unit. And publish this offer as preview so only limited users would be able to access and test the integration.
+
+You can also use private plan for an existing live offer to limit the access to this plan during testing to limited audience.
+
+## Get support
+
+Follow the instruction in [Support for the commercial marketplace program in Partner Center](support.md) to understand publisher support options and open a support ticket with Microsoft.
+
+## Next steps
+
+For more information on metering service APIs , see [Marketplace metering service APIs FAQ](marketplace-metering-service-apis-faq.md).
marketplace Marketplace Metering Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-metering-service-authentication.md
+
+ Title: Marketplace metering service authentication strategies | Azure Marketplace
+description: Metering service authentication strategies supported in the Azure Marketplace.
+++ Last updated : 06/01/2021++++
+# Marketplace metering service authentication strategies
+
+Marketplace metering service supports two authentication strategies:
+
+* [Azure AD security token](../active-directory/develop/access-tokens.md)
+* [Managed identities](../active-directory/managed-identities-azure-resources/overview.md)
+
+This article explains when and how to use each authentication strategy to securely submit custom meters by using the Marketplace metering service.
+
+## Using the Azure AD security token
+
+Applicable offer types are transactable SaaS offers and Azure applications with a managed application plan.
+
+Submit custom meters by using a predefined fixed Azure AD application ID to authenticate.
+
+For SaaS offers, this is the only available option. It's a mandatory step for publishing any SaaS offer as described in [register a SaaS application](partner-center-portal/pc-saas-registration.md).
+
+For Azure applications with a managed application plan, consider using this strategy in the following cases:
+
+* You already have a mechanism to communicate with your backend services, and you want to extend this mechanism to emit custom meters from a central service.
+* You have complex custom meter logic. Run this logic in a central location, instead of in the managed application's resources.
+
+Once you have registered your application, you can programmatically request an Azure AD security token. The publisher is expected to use this token and make a request to resolve it.
+
+For more information about these tokens, see [Azure Active Directory access tokens](../active-directory/develop/access-tokens.md).
+
+### Get a token based on the Azure AD app
+
+#### HTTP Method
+
+**POST**
+
+#### *Request URL*
+
+**`https://login.microsoftonline.com/{tenantId}/oauth2/token`**
+
+#### *URI parameter*
+
+| **Parameter name** | **Required** | **Description** |
+| | | |
+| `tenantId` | True | Tenant ID of the registered Azure AD application. |
+| | | |
+
+#### *Request header*
+
+| **Header name** | **Required** | **Description** |
+| | | |
+| `Content-Type` | True | Content type associated with the request. The default value is `application/x-www-form-urlencoded`. |
+| | | |
+
+#### *Request body*
+
+| **Property name** | **Required** | **Description** |
+| | | |
+| `grant_type` | True | Grant type. Use `client_credentials`. |
+| `client_id` | True | Client/app identifier associated with the Azure AD app.|
+| `client_secret` | True | Secret associated with the Azure AD app. |
+| `resource` | True | Target resource for which the token is requested. Use `20e940b3-4c77-4b0b-9a53-9e16a1b010a7`. |
+| | | |
+
+#### *Response*
+
+| **Name** | **Type** | **Description** |
+| | | - |
+| `200 OK` | `TokenResponse` | Request succeeded. |
+| | | |
+
+#### *TokenResponse*
+
+Sample response token:
+
+```JSON
+ {
+ "token_type": "Bearer",
+ "expires_in": "3600",
+ "ext_expires_in": "0",
+ "expires_on": "15251…",
+ "not_before": "15251…",
+ "resource": "20e940b3-4c77-4b0b-9a53-9e16a1b010a7",
+ "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImlCakwxUmNxemhpeTRmcHhJeGRacW9oTTJZayIsImtpZCI6ImlCakwxUmNxemhpeTRmcHhJeGRacW9oTTJZayJ9…"
+ }
+```
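+
+For reference, here's a minimal PowerShell sketch of this token request. The tenant ID, client ID, and client secret are placeholders for your registered Azure AD app's values.
+
+```powershell
+# Minimal sketch: request an Azure AD token for the metering service (placeholder values)
+$tenantId = "<tenant-id>"                    # tenant of your registered Azure AD app
+$body = @{
+    grant_type    = "client_credentials"
+    client_id     = "<azure-ad-app-id>"      # Application (client) ID
+    client_secret = "<azure-ad-app-secret>"  # key created for the app
+    resource      = "20e940b3-4c77-4b0b-9a53-9e16a1b010a7"  # Marketplace metering service
+}
+$tokenResponse = Invoke-RestMethod -Method Post `
+    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
+    -ContentType "application/x-www-form-urlencoded" -Body $body
+# $tokenResponse.access_token is the bearer token to use when calling the metering APIs
+```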
+
+## Using the Azure-managed identities token
+
+The applicable offer type is Azure applications with a managed application plan.
+
+This approach allows the deployed resources' identity to authenticate and send custom meter usage events. You can embed the code that emits usage within the boundaries of your deployment.
+
+>[!NOTE]
+>The publisher should ensure that the resources that emit usage are locked so that they can't be tampered with.
+
+Your managed application can contain different types of resources, from Virtual Machines to Azure Functions. For more information on how to authenticate by using managed identities for different services, see [how to use managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md#how-can-i-use-managed-identities-for-azure-resources).
+
+For example, follow these steps to authenticate by using a Windows VM:
+
+1. Make sure a managed identity is configured by using one of the following methods:
+ * [Azure portal UI](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+ * [CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
+ * [PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
+ * [Azure Resource Manager Template](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
+ * [REST](../active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md#system-assigned-managed-identity)
+ * [Azure SDKs](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+
+1. Get an access token by using the system identity: RDP to the VM, open a PowerShell console, and run the following command. (This token is scoped to Azure Resource Manager and is used in the next steps to look up the managed app. Emitting usage requires a separate token for the Marketplace metering service application ID, `20e940b3-4c77-4b0b-9a53-9e16a1b010a7`, as shown after these steps.)
+
+ ```powershell
+ # curl is an alias for the Invoke-WebRequest PowerShell command
+ # Get a system identity access token for Azure Resource Manager
+ $MetadataUrl = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F"
+ $Token = curl -H @{"Metadata" = "true"} $MetadataUrl | Select-Object -Expand Content | ConvertFrom-Json
+ $Headers = @{}
+ $Headers.Add("Authorization", "$($Token.token_type) $($Token.access_token)")
+ ```
+
+1. Get the managed app ID from the current resource group's `managedBy` property.
+
+ ```powershell
+ # Get subscription and resource group
+ $metadata = curl -H @{'Metadata'='true'} "http://169.254.169.254/metadata/instance?api-version=2019-06-01" | select -ExpandProperty Content | ConvertFrom-Json
+
+ # Make sure the system identity has at least reader permission on the resource group
+ $managementUrl = "https://management.azure.com/subscriptions/" + $metadata.compute.subscriptionId + "/resourceGroups/" + $metadata.compute.resourceGroupName + "?api-version=2019-10-01"
+ $resourceGroupInfo = curl -Headers $Headers $managementUrl | select -ExpandProperty Content | ConvertFrom-Json
+ $managedappId = $resourceGroupInfo.managedBy
+ ```
+
+1. The Marketplace metering service requires usage to be reported against a `resourceId`; for a managed application, this is its `resourceUsageId`.
+
+ ```powershell
+ # Get resourceUsageId from the managed app
+ # managedBy holds the full resource ID of the managed application
+ $managedAppUrl = "https://management.azure.com" + $managedappId + "?api-version=2019-07-01"
+ $ManagedApp = curl $managedAppUrl -Headers $Headers | Select-Object -Expand Content | ConvertFrom-Json
+ # Use this resource ID to emit usage
+ $resourceUsageId = $ManagedApp.properties.billingDetails.resourceUsageId
+ ```
+
+1. Use the [Marketplace metering service API](./marketplace-metering-service-apis.md) to emit usage.
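+
+For example, here's a minimal sketch of emitting a single usage event with a managed identity token. It assumes the `$resourceUsageId` from the previous step; the quantity, dimension, and plan values are placeholders, and the request shape follows the Marketplace metering service APIs article linked above.
+
+```powershell
+# Minimal sketch: emit a single usage event by using the managed identity (placeholder values)
+# Get a token scoped to the Marketplace metering service from the instance metadata endpoint
+$meterTokenUrl = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=20e940b3-4c77-4b0b-9a53-9e16a1b010a7"
+$meterToken = curl -H @{"Metadata" = "true"} $meterTokenUrl | Select-Object -Expand Content | ConvertFrom-Json
+$usage = @{
+    resourceId         = $resourceUsageId   # from the previous step
+    quantity           = 1.0                # placeholder: units consumed
+    dimension          = "dim1"             # placeholder: custom dimension ID
+    effectiveStartTime = (Get-Date).ToUniversalTime().ToString("o")
+    planId             = "plan1"            # placeholder: purchased plan ID
+} | ConvertTo-Json
+Invoke-RestMethod -Method Post `
+    -Uri "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" `
+    -ContentType "application/json" `
+    -Headers @{ "Authorization" = "Bearer $($meterToken.access_token)" } `
+    -Body $usage
+```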
+
+## Next steps
+
+* [Create an Azure application offer](azure-app-offer-setup.md)
+* [Plan a SaaS offer](plan-saas-offer.md)
marketplace Marketplace Solution Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-solution-templates.md
The listing option that a customer sees for this offer type is *Get It Now*.
| **Requirements** | **Details** |
| --- | --- |
| Billing and metering | Solution template offers are not transaction offers, but they can be used to deploy paid VM offers that are billed through the Microsoft commercial marketplace. The resources that the solution's ARM template deploys are set up in the customer's Azure subscription. Pay-as-you-go virtual machines are transacted with the customer via Microsoft and billed via the customer's Azure subscription.<br/> For bring-your-own-license (BYOL) billing, although Microsoft bills infrastructure costs that are incurred in the customer subscription, you transact your software licensing fees with the customer directly. |
-|Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux. For more information, see: <ul> <li>[Create an Azure application offer](./create-new-azure-apps-offer.md) (for Windows VHDs).</li><li>[Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md) (for Linux VHDs).</li></ul> |
+|Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux. For more information, see: <ul> <li>[Create an Azure application offer](azure-app-offer-setup.md) (for Windows VHDs).</li><li>[Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md) (for Linux VHDs).</li></ul> |
| Customer usage attribution | Enabling customer usage attribution is required on all solution templates that are published on Azure Marketplace. For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](./azure-partner-customer-usage-attribution.md). |
| Use managed disks | [Managed disks](../virtual-machines/managed-disks-overview.md) is the default option for persisted disks of infrastructure as a service (IaaS) VMs in Azure. You must use managed disks in solution templates. <ul><li>To update your solution templates, follow the guidance in [Use managed disks in Azure Resource Manager templates](../virtual-machines/using-managed-disks-template-deployments.md), and use the provided [samples](https://github.com/Azure/azure-quickstart-templates).<br><br> </li><li>To publish the VHD as an image in Azure Marketplace, import the underlying VHD of the managed disks to a storage account by using either of the following methods:<ul><li>[Azure PowerShell](/previous-versions/azure/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd) </li> <li> [The Azure CLI](/previous-versions/azure/virtual-machines/scripts/virtual-machines-cli-sample-copy-managed-disks-vhd) </li> </ul></ul> |
If you haven't already done so, learn how to [Grow your cloud business with Azur
To register for and start working in Partner Center: - [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership) to create or complete your offer.-- See [Create an Azure application offer](./create-new-azure-apps-offer.md) for more information.
+- See [Create an Azure application offer](./azure-app-offer-setup.md) for more information.
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/orders-dashboard.md
The **Orders** page filters are applied at the Orders page level. You can select
## Next steps -- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](./summary-dashboard.md). - For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md). - For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](./usage-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-app-managed-app.md
Private plans are not supported with Azure subscriptions established through a r
You must provide the per-month price for each plan. This price is in addition to any Azure infrastructure or pay-as-you-go software costs incurred by the resources deployed by this solution.
-In addition to the per-month price, you can also set prices for consumption of non-standard units using [metered billing](partner-center-portal/azure-app-metered-billing.md). You may set the per-month price to zero and charge exclusively using metered billing if you like.
+In addition to the per-month price, you can also set prices for consumption of non-standard units using [metered billing](marketplace-metering-service-apis.md). You may set the per-month price to zero and charge exclusively using metered billing if you like.
Prices set in USD (USD = United States Dollar) are converted into the local currency of all selected markets by using the current exchange rates when saved. But you can choose to set custom prices for each market.
For each policy type you add, you must associate Standard or Free Policy SKU. Th
## Next steps -- [How to create an Azure application offer in the commercial marketplace](create-new-azure-apps-offer.md)
+- [Create an Azure application offer](azure-app-offer-setup.md)
marketplace Plan Azure App Solution Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-app-solution-template.md
For more information, see [Private offers in the Microsoft commercial marketplac
## Next steps -- [How to create an Azure application offer in the commercial marketplace](create-new-azure-apps-offer.md)
+- [Create an Azure application offer](azure-app-offer-setup.md)
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-application-offer.md
You define the preview audience using Azure subscription IDs, along with an opti
## Technical configuration
-For managed applications that emit metering events using the [Marketplace metering service APIs](partner-center-portal/marketplace-metering-service-apis.md), you must provide the identity that your service will use when emitting metering events.
+For managed applications that emit metering events using the [Marketplace metering service APIs](marketplace-metering-service-apis.md), you must provide the identity that your service will use when emitting metering events.
-This configuration is required if you want to use [Batch usage event](partner-center-portal/marketplace-metering-service-apis.md#metered-billing-batch-usage-event). In case you want to submit [usage event](partner-center-portal/marketplace-metering-service-apis.md#metered-billing-single-usage-event), you can also use the [instance metadata service](../active-directory/managed-identities-azure-resources/overview.md) to get the [JSON web token (JWT) bearer token](partner-center-portal/pc-saas-registration.md#how-to-get-the-publishers-authorization-token)).
+This configuration is required if you want to use the [batch usage event](marketplace-metering-service-apis.md#metered-billing-batch-usage-event). If you want to submit a [usage event](marketplace-metering-service-apis.md#metered-billing-single-usage-event), you can also use the [instance metadata service](../active-directory/managed-identities-azure-resources/overview.md) to get the [JSON web token (JWT) bearer token](partner-center-portal/pc-saas-registration.md#how-to-get-the-publishers-authorization-token).
- **Azure Active Directory tenant ID** (required): Inside the Azure portal, you must [create an Azure Active Directory (AD) app](../active-directory/develop/howto-create-service-principal-portal.md) so that we can validate that the connection between our two services uses authenticated communication. To find the [tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in) for your Azure Active Directory (Azure AD) app, go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) blade in your Azure Active Directory. In the **Display name** column, select the app. Then look for **Properties**, and then for the **Directory (tenant) ID** (for example `50c464d3-4930-494c-963c-1e951d15360e`).
- **Azure Active Directory application ID** (required): You also need your [application ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in) and an authentication key. To find your application ID, go to the [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) blade in your Azure Active Directory. In the **Display name** column, select the app and then look for the **Application (client) ID** (for example `50c464d3-4930-494c-963c-1e951d15360e`). To find the authentication key, go to **Settings** and select **Keys**. You will need to provide a description and duration and will then be provided a number value.
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plans-pricing.md
If you have already set prices for your plan in United States Dollars (USD) and
> [!IMPORTANT]
> After your offer is published, the pricing model choice cannot be changed.
-Flat-rate SaaS offers and managed application offers support metered billing using the marketplace metering service. This is a usage-based billing model that lets you define non-standard units, such as bandwidth or emails, that your customers will pay on a consumption basis. See related documentation to learn more about metered billing for [managed applications](./partner-center-portal/azure-app-metered-billing.md) and [SaaS apps](./partner-center-portal/saas-metered-billing.md).
+Flat-rate SaaS offers and managed application offers support metered billing using the marketplace metering service. This is a usage-based billing model that lets you define non-standard units, such as bandwidth or emails, that your customers will pay on a consumption basis. See related documentation to learn more about metered billing for [managed applications](marketplace-metering-service-apis.md) and [SaaS apps](./partner-center-portal/saas-metered-billing.md).
## Custom prices
marketplace Ratings Reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/ratings-reviews.md
+
+ Title: Ratings & Reviews analytics dashboard in Partner Center
+description: Learn how to access a consolidated view of customer feedback for your offers on Microsoft AppSource and Azure Marketplace.
+++ Last updated : 06/03/2021++++
+# Ratings & Reviews analytics dashboard in Partner Center
+
+This article provides information on the Ratings & Reviews dashboard in Partner Center. This dashboard displays a consolidated view of customer feedback for offers on Microsoft AppSource and Azure Marketplace. As customers browse, search, and purchase offers in both marketplaces, they can leave ratings and reviews for the offers they've acquired.
+
+- Customers can submit a new rating or review and update or delete an existing rating or review they have submitted. Customers can make changes only to the ratings and reviews they own.
+- Reviews are posted on the Reviews tab on the product display page of the offer in Azure Marketplace or AppSource. Customers can include their name or post anonymously.
+
+>[!NOTE]
+> For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
+
+## Access the dashboard
+
+In the [Commercial Marketplace dashboard](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) in Partner Center, expand the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** section and select **Ratings & Reviews**.
+
+The dashboard displays a graphical representation of the following customer activity:
+
+- Ratings & reviews
+- Review comments
+
+Use the **Marketplace Insights** tabs to view your offer's Microsoft AppSource and Azure Marketplace metrics separately. To view metrics for a specific offer, select the offer from the offer dropdown list.
+
+### Ratings & reviews summary
+
+The ratings & reviews summary section displays the following metrics for a selected date range:
+
+- **Average rating:** Weighted average star rating of all the ratings submitted by customers for the selected offer.
+- **Rating breakdown:** Breakdown of star rating by the count of customers who submitted ratings. The bar chart is stacked with actual and revised ratings (updated rating count).
+- **Total ratings:** Overall count of ratings submitted. This count also includes ratings with and without reviews.
+- **Ratings with reviews:** Count of reviews submitted.
+
+### Review comments
+
+Reviews are displayed in chronological order of when they were posted. The default view displays all reviews, and you can filter them by star rating by using the **rating filter** in the dropdown menu. Additionally, you can search by keywords that appear in the review.
+
+### Responding to a review
+
+You can respond to reviews from users, and your response will be visible in the Azure Marketplace or AppSource storefronts. To respond to a review, follow these steps:
+
+1. Select the **Ratings & reviews** tab, and then select **Azure Marketplace** or **AppSource**. You can select **filters** to narrow down the list of reviews and display, for example, only reviews with a specific star rating.
+
+2. Select the **Reply** link for the review you wish to respond to, type your reply in the **text box**, and then select **Send reply**.
+
+The response will appear under the text of the original review on the product detail page in the AppSource and Azure Marketplace online storefronts.
+
+#### AppSource
+
+#### Azure Marketplace online store
+
+### Editing or deleting a response to a review
+
+You can edit or delete a response to a review by selecting **Edit** or **Delete**.
+
+### Contacting users after a review has been posted
+
+When posting a review, a user can give consent to be contacted by the publisher. When a user has given consent, a notification will appear at the top of the review in Partner Center, and the email address of the user who posted the review will be visible.
+
+## Next steps
+
+- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary Dashboard in commercial marketplace analytics](summary-dashboard.md).
+- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md).
+- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For detailed information about your customers, including growth trends, see [Customer Dashboard in commercial marketplace analytics](customer-dashboard.md).
+- For a list of your download requests over the last 30 days, see [Downloads Dashboard in commercial marketplace analytics](downloads-dashboard.md).
marketplace Review Publish Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/review-publish-offer.md
No user is shown for system processes that correspond to [validation and publish
## Next steps -- [Access analytic reports for the commercial marketplace in Partner Center](partner-center-portal/analytics.md)
+- [Access analytic reports for the commercial marketplace in Partner Center](analytics.md)
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/summary-dashboard.md
Note the following:
## Next steps -- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md). - For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md). - For detailed information about your customers, including growth trends, see [Customer Dashboard in commercial marketplace analytics](customer-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads Dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).
+- For a list of your download requests over the last 30 days, see [Downloads Dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
marketplace Test Publish Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-publish-saas-offer.md
Use the following steps to preview your offer.
1. To validate the end-to-end purchase and setup flow, purchase the plans in your offer while it's in preview. First, notify Microsoft with a [support ticket](https://aka.ms/marketplacesupport) to ensure we don't process a charge.
-1. If your SaaS offer supports [metered billing using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md), review and follow the testing best practices detailed in [Marketplace metered billing APIs](./partner-center-portal/marketplace-metering-service-apis.md#development-and-testing-best-practices).
+1. If your SaaS offer supports [metered billing using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md), review and follow the testing best practices detailed in [Marketplace metered billing APIs](marketplace-metering-service-apis.md#development-and-testing-best-practices).
1. Review and follow the testing instructions in [SaaS fulfillment APIs version 2 in the Microsoft commercial marketplace](./partner-center-portal/pc-saas-fulfillment-api-v2.md#development-and-testing) to ensure your offer is successfully integrated with the APIs before you publish your offer live.
After these validation checks are complete, your offer will be live in the marke
## Next steps -- [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md)
+- [Access analytic reports for the commercial marketplace in Partner Center](analytics.md)
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/usage-dashboard.md
If you have multiple offers that use custom meters, the metered billing usage re
## Next steps -- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](./partner-center-portal/analytics.md).
+- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary Dashboard in commercial marketplace analytics](./summary-dashboard.md). - For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](./orders-dashboard.md) - For virtual machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./partner-center-portal/downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](./partner-center-portal/ratings-reviews.md).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
migrate Concepts Vmware Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-vmware-agentless-migration.md
There are two stages in every replication cycle that ensures data integrity betw
1. First, we validate if every sector that has changed in the source disk is replicated to the target disk. Validation is performed using bitmaps. Source disk is divided into sectors of 512 bytes. Every sector in the source disk is mapped to a bit in the bitmap. When data replication starts, bitmap is created for all the changed blocks (in delta cycle) in the source disk that needs to be replicated. Similarly, when the data is transferred to the target Azure disk, a bitmap is created. Once the data transfer completes successfully, the cloud service compares the two bitmaps to ensure no changed block is missed. In case there's any mismatch between the bitmaps, the cycle is considered failed. As every cycle is resynchronization, the mismatch will be fixed in the next cycle.
-1. Next we ensure that the data that's transferred to the Azure disks is same as the data that was replicated from the source disks. Every changed block that is uploaded is compressed and encrypted before it's written as a blob in the log storage account. We compute the checksum of this block before compression. This checksum is stored as metadata along with the compressed data. Upon decompression, the checksum for the data is calculated and compared with the checksum computed in the source environment. If there's a mismatch, the data is not written to the Azure disks, and the cycle is considered failed. As every cycle is resynchronization, the mismatch will be fixed in the next cycle.
+1. Next we ensure that the data that's transferred to the Azure disks is the same as the data that was replicated from the source disks. Every changed block that is uploaded is compressed and encrypted before it's written as a blob in the log storage account. We compute the checksum of